IT Sector Update: Early insights into AI’s impact on the labor market by Motilal Oswal Financial Services Ltd
Key finding: Real-world usage trails model capabilities; coding and customer service are the most exposed
Anthropic recently released a report titled ‘Labor Market Impacts of AI: A New Measure and Early Evidence,’ examining how AI is being used across occupations and what it may mean for the labor market. The debate on AI and jobs has largely focused on what the technology could automate, but the report suggests the real picture is more gradual. Two findings stand out:
1) There remains a wide gap between AI’s theoretical capability and its actual usage (Exhibit 1). For example, in computer and math-related roles, AI could theoretically assist in nearly 94% of tasks, but observed usage today covers only about one-third, constrained by enterprise factors such as legacy systems and workflow integration.
2) Coding and customer service roles remain the most exposed, with a growing share of their tasks being assisted by AI tools.
Our view: The large gap between AI’s theoretical capability and its observed usage is partly due to enterprise implementation constraints. In practice, adoption depends less on what AI can do and more on how easily organizations can integrate it into existing workflows and technology stacks. As a result, AI deployment is naturally easier in greenfield environments, while legacy-heavy brownfield systems slow enterprise-wide AI scale-up.
AI-observed usage is far from reaching its theoretical capability
* Theoretical vs. observed AI usage: Theoretical coverage refers to the share of an occupation’s tasks that AI could help complete materially faster (for example, an LLM completing a task in half the time).
* Observed coverage measures how much of those tasks are actually being performed using AI in real work settings today (see Exhibit 1). In simple terms, the theoretical limit shows what AI could do, while observed usage shows what AI is doing right now.
* Across occupations, AI’s practical usage is still a fraction of its theoretical capability. For instance, in computer and math-related roles, LLMs could theoretically assist in about 94% of tasks, but actual observed usage currently covers only around one-third of them.
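The two coverage measures described above can be sketched with task-level data. The task names and flags below are purely illustrative assumptions, not figures from the Anthropic report; the point is only to show how theoretical coverage, observed coverage, and the adoption gap relate:

```python
# Hedged sketch of the report's two coverage measures.
# All task data below is hypothetical, for illustration only.

tasks = [
    # (task, ai_could_halve_time, observed_ai_use)
    ("write unit tests",        True,  True),
    ("debug production issue",  True,  True),
    ("review architecture",     True,  False),
    ("update legacy module",    True,  False),
    ("attend planning meeting", False, False),
    ("provision hardware",      False, False),
]

def coverage(flags):
    """Share of all tasks for which the given flag is True."""
    return sum(flags) / len(flags)

# Theoretical coverage: tasks AI *could* materially speed up.
theoretical = coverage([could for _, could, _ in tasks])
# Observed coverage: tasks where AI use actually shows up today.
observed = coverage([used for _, _, used in tasks])

print(f"theoretical coverage: {theoretical:.0%}")  # 67%
print(f"observed coverage:    {observed:.0%}")     # 33%
print(f"adoption gap:         {theoretical - observed:.0%}")
```

With this toy data the occupation looks highly exposed on paper, yet only half of the theoretically assistable tasks are actually done with AI, mirroring the one-third-of-94% pattern the report observes for computer and math-related roles.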
Coding and customer service-based occupations most exposed to AI
* Observed exposure is likely to expand with increasing adoption, advancing capabilities, and deeper deployment of AI.
* Exhibit 2 highlights the top 10 occupations exposed under this measure. Computer programmers and customer service personnel remain among the most exposed roles.
* Given that Claude is extensively used for coding, computer programmers show ~75% observed exposure, with tasks such as writing, updating, and maintaining software programs increasingly assisted by AI.
* Similarly, tasks performed by customer service representatives, including providing information, taking orders, and handling complaints, are increasingly visible in first-party API traffic on AI platforms. Observed exposure for this role stands at ~70%.
* At the lower end, nearly 30% of workers show zero exposure, largely in occupations involving physical labor such as agriculture, construction, and transportation.
Our view: Greenfield deployment enables faster AI adoption than brownfield set-ups
* Greenfield vs. brownfield environments matter: The gap between theoretical adoption and actual usage is partly explained by greenfield vs. brownfield environments. New cloud-native/digital-native firms can redesign workflows around AI from the start. In contrast, large enterprises operate with legacy systems, layered processes, and compliance frameworks, which slows down technology adoption.
* Our analysis of API calls for Claude and token usage for OpenAI reveals two key things: 1) software engineering is ground zero for AI disruption, with ~50% of all API calls targeting software engineering tasks; and 2) AI is currently used predominantly by cloud-first/AI-native enterprises — of OpenAI’s top 20 token users, 90% are new-age companies.
* This tells us that AI deployment today is easier in greenfield environments. Large enterprises operate differently. Most of them run on systems built over 20-30 years, with applications that are layered, integrated, and customized. In such brownfield environments, deploying AI at scale requires integration with legacy stacks, data cleanup, and governance alignment.
* Roughly 60-80% of enterprise IT budgets still go toward maintenance, which means AI-led productivity gains often depend on prior modernization. Without addressing legacy complexity, scaling AI beyond pilot use cases becomes difficult.
Motilal Oswal Securities Ltd disclaimer: http://www.motilaloswal.com/MOSLdisclaimer/disclaimer.html
SEBI Registration number is INH000000412
