Is AI killing the tech industry?

Last update: 15/03/2026
Author: Isaac
  • AI is not destroying the technology industry, but it is reshaping the market, employment, and hardware demand, creating winners and losers.
  • There is a large gap between the discourse on AI and its actual use: there is a lack of training, clear use cases, time, and leadership structures.
  • The key is no longer accessing technology, but governing it: internal models, standards, enabling regulation, and shared responsibility.
  • The value shifts from producing to discerning: AI multiplies capabilities where there is human judgment, digital culture, and clear rules.

Impact of artificial intelligence on the technology industry

The emergence of artificial intelligence has opened up a huge debate: Is AI killing the tech industry or is it just turning it upside down? The feeling of vertigo is real: careers are in question, business models are faltering, and there's an avalanche of tools that seems unstoppable. But if we quiet the noise a bit, what we find isn't so much a technological apocalypse as a change in the rules of the game.

Today almost anyone can access advanced AI models from a browser, but this democratization does not mean that every organization is making good use of these capabilities. AI is straining hardware demand, changing the job market, forcing a rethink of data governance, and testing the true digital maturity of companies. The key question is no longer whether we use AI, but how we integrate it, who governs it, and what we are willing to delegate to it.

Is AI really harming the tech industry?

One of the first visible consequences of this wave is the impact on hardware: AI models consume enormous amounts of computing power and memory. This has led to a race to develop AI-specific chips and processors. That pressure has diverted resources from the manufacture of more traditional components, creating bottlenecks and raising the cost of some basic consumer technology.

This shift of investment toward specialized AI infrastructure means that many technology products that were once relatively affordable are now more expensive or harder to obtain. For end users and small businesses, this translates into slower equipment renewal and a sense that the market is focusing on AI while neglecting other everyday technology needs.

In the workplace, the disruption is most noticeable in entry-level positions: many junior tasks in development, support, or content production are being partially or fully automated. This can discourage new professionals who see no clear fit unless they specialize in data, AI, or automation from the outset. The industry risks dangerously shrinking the pool of generalist talent that typically sustains long-term technological evolution.

However, entirely new fields are opening up at the same time. Profiles are emerging in data engineering, MLOps, algorithmic governance, model auditing, AI ethics, and the design of customized solutions. Companies that understand this shift do not see AI as an enemy, but as an engine for reshaping their value proposition, combining automation with high value-added services.

An example of this approach is that of companies that develop custom AI-powered software to optimize processes, improve customer experience, or strengthen cybersecurity. These organizations integrate advanced machine learning techniques with consulting services, cloud migration (AWS, Azure), and business intelligence (Power BI and similar tools), focusing technology on generating better decisions, not just cost cuts.

AI and businesses: between discourse and reality

If we look at SMEs and independent professionals, the data is clear: there is much more noise about AI than actual day-to-day use. Recent reports on the Spanish business landscape show that around 45% of companies admit they never use AI in their activity, while only around 11% claim to use it regularly.

Between those extremes, 36% report using AI occasionally for specific tasks. This reflects a landscape of isolated tests, standalone pilots, and little structural integration. Most of these companies consider themselves "fairly digitized": almost eight out of ten say they have a medium or high level of digital maturity, but when you look at the specific tools they actually use, the picture changes.

In practice, only around 14% claim to have specific AI software integrated into their systems. In other words, basic digitization (ERP, e-invoicing, cloud-based office applications) has progressed, but the adoption of AI and intelligent automation continues to lag. The technology is no longer science fiction, yet it is still not part of standard operating procedures in many organizations.


This gap between discourse and reality shows that the challenge is no longer access to technology, but knowing what to use it for and how to fit it into real processes. Companies that simply "add AI" as a marketing ploy, without clear use cases or value metrics, end up generating internal noise and frustration rather than competitive advantage.

At the same time, sector studies indicate that AI is increasingly perceived as a matter of national competitiveness, not just individual company competitiveness. Managers and public administrations agree that its impact spans productivity, employment, education, and institutional quality, and that a shared strategy is needed so that it does not remain a collection of scattered experiments.

The biggest barrier: education, culture, and time

When companies are asked why they don't take the leap, the majority answer is emphatic: the main barrier to implementing AI is the lack of knowledge and training. Around 37% of organizations cite a lack of specific skills as the number one obstacle to progress.

Added to that deficit are other very concrete obstacles: 26% perceive AI as too complex from a technical point of view, and 25% say they see no clear use cases that justify the investment. In other words, it is not so much that the technology is unavailable as that many decision-makers do not know how to apply it to their business.

On top of these doubts, the day-to-day routine works against transformation. Approximately 24% say they simply don't have time to implement new tools, 22% believe the cost can be high, and around 21% point to resistance to change within their own organization or among their clients. Inertia plays a significant role, especially in small and medium-sized enterprises.

All of this paints a picture in which the barrier is much more cultural and organizational than purely technological. Most companies have heard of AI and recognize its potential, but lack the basic practical literacy to experiment safely, measure results, and scale what works.

This training deficit is not limited to technical profiles: only a very small percentage of companies have implemented comprehensive AI training programs for their entire workforce. More than 20% have no structured program at all, and the training that does exist is concentrated on IT or data teams, leaving out the executives and middle managers who are precisely the ones who prioritize investments and redesign processes.

Too many hours spent on manual tasks that AI could alleviate

Paradoxically, while companies hesitate over AI, they continue to dedicate an enormous amount of time to manual and repetitive tasks. Recent studies estimate, for example, that the average time spent on manual invoicing is around 26.3 hours per month, on accounting around 32.3 hours, and on personnel management around 24.5 hours.
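A back-of-the-envelope calculation shows the scale of those figures. The sketch below simply adds up the three monthly averages cited above; the variable names are illustrative, not taken from any study:

```python
# Sum of the average monthly hours cited above:
# 26.3 (invoicing) + 32.3 (accounting) + 24.5 (personnel management).
invoicing = 26.3
accounting = 32.3
personnel = 24.5

monthly_total = invoicing + accounting + personnel  # hours per month
yearly_total = monthly_total * 12                   # hours per year

print(f"{monthly_total:.1f} hours per month")  # 83.1 hours per month
print(f"{yearly_total:.0f} hours per year")    # 997 hours per year
```

That is roughly half of a full-time employee's annual working hours spent on routine administration alone, which puts the productivity claims in the next paragraphs into perspective.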

It is precisely in those routine jobs that automation and AI could deliver a significant leap in productivity. Solutions include automatic data extraction, accounting reconciliations, document classification, report generation, and first-line customer service via well-trained chatbots. Yet their adoption remains limited and fragmented.
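To make the document-classification example concrete, here is a deliberately minimal, rule-based sketch in Python. The categories and keywords are hypothetical illustrations, not drawn from any real product; commercial tools typically use trained models rather than keyword rules, but the input-to-label flow is the same:

```python
# Illustrative rule-based document classifier.
# Categories and keyword lists are hypothetical examples.
RULES = {
    "invoice": ["invoice", "amount due", "vat"],
    "payroll": ["payroll", "salary", "employee id"],
    "contract": ["agreement", "hereby", "party"],
}

def classify(text: str) -> str:
    """Return the category whose keywords appear most often, or 'other'."""
    text = text.lower()
    scores = {cat: sum(text.count(kw) for kw in kws)
              for cat, kws in RULES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

print(classify("Invoice #42: amount due 300 EUR, VAT included"))  # invoice
print(classify("Meeting notes from Tuesday"))                     # other
```

Even a toy like this routes documents to the right folder; a machine-learning classifier replaces the hand-written keyword lists with patterns learned from labeled examples.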

The paradox is that, when asked about the benefits they expect from digitization, companies clearly identify the advantages: reducing time spent on administrative tasks, cutting errors and costs, and even improving customer service. They know the theory; the problem lies in turning those desires into concrete projects, with clear responsibilities and follow-up.

Here, AI is starting to create a divide between companies: those that use it only to "save time" and gain a little more efficiency, and those that combine it with a redesign of how they make decisions. In the latter case, algorithms not only automate, but also provide structured information for better, faster, and more contextual decisions.

In that environment, professional value is shifting from "producing" to "discerning". With tools capable of generating dozens of text, image, or analysis proposals in seconds, human work shifts to selecting what is worthwhile, what fits the strategy, and what risks are involved. Thinking isn't replaced; the point in the process where it becomes essential simply moves.

The role of consultancies and law firms in the age of AI

Consulting firms and professional offices occupy a key position in this whole puzzle, especially for SMEs and the self-employed. Currently, most companies still see their advisor as a mere administrative manager, focused on tax and bureaucratic obligations rather than on supporting digital transformation.


The data suggests that approximately 49% of companies perceive their advisor as a tax and paperwork processor, while only around 5% consider them a true "digital partner" capable of guiding them in the use of new technologies, including AI. The opportunity for evolution is enormous.

Interestingly, the consultancies themselves acknowledge that they are not yet fully prepared for this change of role: nearly 35% admit they need more training in digital and automation tools to effectively lead their clients' digital transformation. In other words, they too are stuck in manual tasks that could be delegated to intelligent systems.

If these offices lean on AI for automatic document classification, drafting reports, or predictive analysis of accounting data, they can free up time to offer a much more strategic service, based on proactive advice, financial planning, or early risk detection.

In this new scenario, the true value of consulting will lie not in filling out forms, but in interpreting data, explaining scenarios, and helping clients decide what to automate, what to keep in human hands, and how to govern the systems they adopt. AI thus becomes a lever for professional repositioning, not a direct competitor.

What AI do we have today: specific, generative, and far from science fiction

It's important to clarify what we mean when we talk about AI, because not everything circulating in the public debate corresponds to the technology that is actually available. The vast majority of current systems are "narrow" or specific AI: models designed for particular tasks, without general understanding or awareness of any kind.

Within this category we find machine learning, deep learning, and generative AI, which is capable of producing text, images, audio, or code from large volumes of data. All of this is already deployed in commercial products and everyday services, from virtual assistants to recommendations on platforms.

In contrast, concepts such as artificial general intelligence (AGI), an AI capable of reasoning as autonomously and flexibly as a human being, remain to this day a theoretical horizon and an object of research, not an operational reality. Confusing these two levels fuels exaggerated fears and commercial smokescreens that hinder sound decision-making.

The relevant point is that AI innovation is already fully underway in almost every sector and country. The pace is so fast that the main discussion has ceased to be technical (what can be done) and has become organizational and social: how is all this governed, who assumes responsibility, and by what criteria is it deployed in companies, administrations, and everyday life?

In the personal sphere, generative AI is becoming established as a ubiquitous productivity tool: drafting, summarizing documents, translating, suggesting ideas, and supporting learning. These functions are being integrated almost seamlessly into applications and devices, creating a new form of digital illiteracy: not understanding how they work and ending up as passive users of systems whose logic we don't comprehend.

Real risks: errors, biases, and loss of control

The accelerated deployment of AI brings very concrete and already visible risks: incorrect information delivered with complete confidence, biases inherited from training data, lack of transparency in some models, and serious impacts on employment and privacy. These are not futuristic problems, but issues of the present.

To a large extent, these risks stem not so much from the technology itself as from the lack of clear criteria on how it is designed, adopted, and monitored. Without control structures, AI tends to amplify the quality, or lack thereof, of the organizations that use it: where there are clear processes and robust leadership, it multiplies capabilities; where chaos reigns, it accelerates errors and conflicts.

In this context, it is important to distinguish between regulation and governance. Overly detailed or premature regulation can end up stifling research and innovation, especially in a global environment where other countries operate with more flexible frameworks. The European AI Act illustrates part of this dilemma: its complex risk-level classification and prescriptive obligations have been criticized as both late and insufficient in the face of a rapidly evolving landscape.

At the same time, operating without any regulatory framework or common standard is not a realistic option either. The complete absence of rules fosters opaque practices, irresponsible use of data, and deployments of systems that can cause significant harm before anyone is held accountable.

What is beginning to prevail in the international debate is a mixed approach: governance based on self-responsibility, technical standards, and external oversight, supported by regulation that demands transparency and accountability without dictating line by line how technology should be developed. The competition is no longer just about having the best models, but about making your standards a global benchmark.


AI governance models: from the enterprise to global agreements

In this area of governance, several operating models can already be distinguished. The first is the corporate model of internal governance, adopted by large technology companies and data-intensive organizations: explicit ethical principles are defined, internal review committees are created, and impact assessments are carried out before models are launched into production.

In this scheme, responsibility lies with the organization itself, which assumes the legal, economic, and reputational consequences if its AI systems cause harm or infringe on rights. It is a flexible model, compatible with rapid innovation, but it depends on adequate incentives and on external oversight mechanisms to keep it from becoming mere marketing.

A second approach relies on governance based on technical and professional standards. Driven by international organizations and industry consortia, standards are being developed for data quality, robustness, security, traceability, and risk management. Although often voluntary, their adoption becomes a de facto requirement for operating in certain markets when clients or regulatory bodies demand it.

Meanwhile, a third trend is gaining momentum: the multi-stakeholder governance model. In this framework, businesses, governments, the scientific community, and civil society work together to reach consensus on principles and best practices. Initiatives such as the Partnership on AI or the Global Partnership on Artificial Intelligence are examples of these spaces, where diverse interests are aligned and certain criteria for responsible use gain international legitimacy.

Finally, there is the regulated governance model, which does not replace the previous ones but gives them coherence and legal cover. Here the priority is not to regulate the technology in detail, but to require organizations to have clear, documented, and verifiable governance models: who decides what, with what data, under what controls, and who is responsible when something goes wrong.

Training and responsibility: the piece that is often missing

Beyond committees and regulations, the expansion of AI makes it clear that the biggest challenge is human: how do we train people to use these tools without losing their judgment or autonomy? Governance cannot be limited to systems and organizations; it must include the end users themselves.

AI literacy is not just about knowing how to build models: it involves understanding their limits, model drift, and their biases, and knowing when to trust an answer and when to question it. In an environment saturated with automatically generated content, that critical capacity becomes essential to avoid falling into blind dependence on whatever the screen says.

Effective governance therefore requires shared responsibility in use, supported by continuous training. Companies need their teams, whether technical, managerial, or operational, to understand enough to integrate AI into their processes without delegating everything to the algorithm. Citizens need educational tools to avoid confusing plausibility with reality.

Today, many organizations are operating with AI without clear rules of the game. Around one in four companies has no internal policy on responsible use, almost 40% follow no standards or certifications, and a significant fraction admit that they do not adequately manage data privacy in their AI projects. This combination of improvisation and lack of knowledge is what truly puts both the technology industry and public trust at risk.

Ultimately, the question of whether AI is "killing" the tech industry falls short: rather than destroying the industry, it is forcing it to rebuild itself, to rethink its business model, its relationship with talent, and its rules of trust. The difference between winning and losing in this context will not be who has the most brilliant algorithm, but who is able to combine technology, training, governance, and responsibility to turn AI into a sustainable competitive advantage rather than a passing fad that generates more noise than value.

Related article: Uses of AI in cybercrime and how to defend against it