CCC Blog

Unleashing Enterprise AI: Key Insights from IBM’s Lisa Amini at CCC’s Computing Futures Symposium
2025-07-18 16:20 UTC by Catherine Gill

See the Full Recording of Lisa Amini’s keynote at the CCC’s 2025 Computing Futures Symposium. 


In May of 2025, the Computing Community Consortium’s Computing Futures Symposium hosted an insightful keynote speech by Lisa Amini, Director of Data & AI Platforms Research and an IBM Distinguished Engineer. With a background spanning more than 25 years at IBM in areas such as Data & AI, stream processing, and distributed systems, Amini offered a comprehensive look at the rapid advancements in agentic and generative AI and their growing impact on the enterprise. Her address highlighted a critical shift in AI research and development, global innovation trends, and the transformative potential of AI within businesses.

 

The Accelerating Pace of AI Progress

Amini began by highlighting several key indicators signaling the accelerating pace of AI progress. The first is the time it takes for AI systems to surpass human proficiency on benchmark datasets. Historically, challenges like speech and handwriting recognition took decades of investment to reach human performance levels. More complex tasks like language understanding, however, are now being mastered by AI in a matter of years, and in some cases even months. This rapid progress is evident in benchmarks like GLUE and SuperGLUE, which were once considered beyond AI capabilities yet saw human-level performance surpassed in short order. This points both to accelerating progress and to the difficulty of designing next-generation benchmarks.

 

This accelerated progress, Amini explained, while sometimes surprising, is directly linked to predictable advancements in compute power, data availability, and algorithms. The time needed to double the compute used to train AI, which once averaged 20 months, has shrunk to roughly six months. Amini pointed out that innovations in using lower-quality data for self-supervision and in synthesizing new data have so far kept data availability from plateauing. Compute and algorithms likewise continue to advance, and the continued improvement from increasingly larger models indicates that the scaling laws themselves are still shifting. In summary, while we are experiencing dramatic advances, many AI challenges remain to be solved, and there are significant opportunities for new enterprises and emerging markets to leapfrog more developed counterparts.
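To make the compute trend concrete, a quick back-of-the-envelope calculation (the five-year horizon below is an illustrative choice, not a figure from the talk) shows how sharply training compute compounds when the doubling period shrinks from 20 months to six:

```python
# Illustrative arithmetic only: how much training compute grows over five years
# if it doubles every 20 months versus every 6 months (doubling periods from
# the talk; the 5-year horizon is an arbitrary choice for illustration).

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Total multiplicative growth after `months` at the given doubling period."""
    return 2 ** (months / doubling_period_months)

horizon = 60  # five years, in months
print(f"20-month doubling over 5 years: ~{growth_factor(horizon, 20):.0f}x")
print(f" 6-month doubling over 5 years: ~{growth_factor(horizon, 6):.0f}x")
# ~8x versus ~1024x -- the same upward trend, but a very different regime.
```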

 

AI in the Enterprise: Unlocking Trillions in Value

Amini transitioned to the application of AI within enterprises, citing a PwC study projecting that AI will enhance human productivity and unlock $16 trillion in value by 2030. This economic impact, she stressed, is already underway. For example, AI and automation have helped IBM drive $3.5 billion in productivity gains across the company over the last two years.

 

Amini detailed AI’s evolving enterprise deployment, moving from generative AI for content creation and conversational AI assistants for human productivity, to agentic AI, which can act more autonomously. She noted a particular fascination with the progression from AI agents working individually to systems of collaborative agents that work together to tackle complex problems while balancing at-times conflicting priorities (e.g., energy optimization versus speed). At IBM Research AI, efforts concentrate on building open AI models like Granite, infusing AI into data platforms for greater automation and efficiency, and developing agents for enterprise tasks (e.g., code patches, system reliability, and cloud cost optimization). A key observation is that smaller, more cost- and energy-efficient models, such as IBM’s Granite, can be fine-tuned to achieve performance comparable to much larger models on key enterprise tasks.
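As a hedged illustration of that last point, the sketch below shows one common way a small open model can be adapted to a specific enterprise task with parameter-efficient (LoRA) fine-tuning. It is a generic recipe using the Hugging Face transformers and peft libraries, not the pipeline described in the talk, and the model checkpoint and target module names are illustrative assumptions:

```python
# Minimal sketch (not IBM's actual pipeline): attach a LoRA adapter to a small
# open model so it can be fine-tuned on an enterprise task. Assumes the
# Hugging Face `transformers` and `peft` libraries; the model ID and target
# modules below are illustrative -- substitute the checkpoint you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "ibm-granite/granite-3.1-2b-instruct"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Low-rank adapters update only a tiny fraction of the weights, which is what
# makes task-specific tuning of smaller models cheap in compute and energy.
lora = LoraConfig(r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"],  # assumed attention names
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total weights

# From here, a standard supervised fine-tuning loop (e.g., transformers.Trainer)
# on labeled examples of the target enterprise task would complete the sketch.
```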

 

The Transformative Power of Enterprise Assistants and Agentic DataOps

Amini provided a compelling example of a type of enterprise assistant already in broad use across IBM. These assistants allow users to ask questions in natural language, retrieving information from a wide variety of internal sources, from HR policy documents to sales data to process-mining results, and more. They can also interact with tools and APIs to perform tasks humans would normally perform. A significant, unexpected benefit is the ability to swap out underlying systems (e.g., changing HR platforms) without requiring users to relearn interfaces, as they continue to interact with the assistant in natural language.
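A minimal sketch of that decoupling, assuming nothing about IBM’s actual implementation: the assistant talks to backends only through narrow adapters, so swapping the HR platform never changes how a user asks a question. The class names, the keyword-based routing, and the hypothetical vacation_balance tool are all illustrative stand-ins:

```python
# Minimal sketch of the decoupling described above: users talk to one
# assistant, and backend systems sit behind narrow adapters that can be
# swapped without changing the user experience.
from typing import Protocol

class HRBackend(Protocol):
    def vacation_balance(self, employee_id: str) -> int: ...

class LegacyHRSystem:
    def vacation_balance(self, employee_id: str) -> int:
        return 12  # stand-in for a call to the old HR platform's API

class NewHRSystem:
    def vacation_balance(self, employee_id: str) -> int:
        return 12  # same answer, different platform underneath

class Assistant:
    def __init__(self, hr: HRBackend):
        self.hr = hr  # the only place that knows which HR platform is live

    def ask(self, question: str, employee_id: str) -> str:
        # A real assistant would use an LLM to map the question to a tool call;
        # here a keyword check stands in for that routing step.
        if "vacation" in question.lower():
            return f"You have {self.hr.vacation_balance(employee_id)} days left."
        return "Sorry, I don't have a tool for that yet."

# Swapping HR platforms is a one-line change; users still ask in plain language.
assistant = Assistant(hr=NewHRSystem())
print(assistant.ask("How many vacation days do I have?", employee_id="e123"))
```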

 

Amini also introduced IBM’s data agents research, and more specifically agentic DataOps, where semi-autonomous agents collaborate to manage complex enterprise data estates. These data agents work in the background to make the data estate more self-service, efficient, and responsive to data consumers, and to automate or assist with tasks normally performed by data engineers, data stewards, data analysts, and data governance engineers. These systems of data agents tackle tasks such as data discovery, data product curation and evaluation, test set generation, and data-flow issue identification, diagnosis, and remediation. This line of research requires not only creating the system of AI agents, but also improving the planning and reasoning skills of the underlying AI models.
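To give a flavor of the pattern, here is a generic multi-agent orchestration skeleton, not IBM’s system: the agent roles echo the tasks named in the talk, while every class and data structure is a hypothetical stand-in.

```python
# Illustrative-only sketch of a "system of data agents": specialized agents
# handle discovery, curation, and validation, and a coordinator chains them
# into a data product.
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    tables: list[str] = field(default_factory=list)
    issues: list[str] = field(default_factory=list)

class DiscoveryAgent:
    def run(self, request: str) -> DataProduct:
        # A real agent would search catalogs and metadata; this fakes a hit.
        return DataProduct(name=request, tables=["sales_2024", "customers"])

class CurationAgent:
    def run(self, product: DataProduct) -> DataProduct:
        product.tables = sorted(set(product.tables))  # stand-in for real curation
        return product

class ValidationAgent:
    def run(self, product: DataProduct) -> DataProduct:
        if not product.tables:
            product.issues.append("no source tables found")
        return product

class Coordinator:
    """Chains specialized agents; a real coordinator would also re-plan
    when a downstream agent reports issues."""
    def build(self, request: str) -> DataProduct:
        product = DiscoveryAgent().run(request)   # find candidate sources
        product = CurationAgent().run(product)    # shape them into a data product
        return ValidationAgent().run(product)     # check and flag issues

print(Coordinator().build("quarterly revenue by region"))
```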

 

The Road Ahead: Wildcards and Critical Skills

Amini concluded by discussing “wild cards” that could significantly impact the future of AI:

  • Quantum Computing: While not yet leveraged pervasively for AI challenges, quantum computers continue to mature and could create an inflection point for AI.
  • Data Center Energy Consumption: The astonishing energy demands of AI inference highlight a critical sustainability challenge that might impede future AI progress, or lead to new energy solutions.
  • Enterprise Data Utilization: With only an estimated 1% of enterprise data currently being leveraged by AI, more comprehensive access and utilization could unlock immense untapped potential.
  • AI Applied to AI Research: Using AI to conduct AI research, for example, hypothesizing experiments and generating needed code, models and test data, could lead to a dramatically different regime of AI development.

 

Finally, during the Q&A, Amini offered valuable advice for computer science students. She stressed the importance of consistent hands-on experience with rapidly evolving AI frameworks, models, and agentic paradigms. Internships and work on real-world projects, including open source, with large-scale data and systems were highlighted as crucial for gaining practical skills. Amini also strongly advocated for more rigorous education in experimental design, emphasizing its critical role in efficiently conducting increasingly expensive and complex AI experiments.

 

The keynote underscored that the field of AI is in an exceptionally dynamic and exciting period, characterized by unprecedented acceleration and profound implications for both research and industry.

See the full recording of Amini’s keynote speech here.

 


 
