What is the state of AI (and where is it going)?

At the Future of Business with AI Summit held at the Massachusetts Institute of Technology (MIT) in mid-April 2024, there was little evidence of the “trough of disillusionment.” Industry leaders from across software, hardware, and even traditionally conservative sectors, such as banking and utilities, described an ongoing rapid expansion of artificial intelligence (AI) capabilities and applications. The summit was a parade of advancements, from smarter, more personalized AI to new technologies that are breaking through compute bottlenecks.

Here are the key takeaways from the summit, and what software startup leaders should do about them:

The State of AI: “What ‘trough of disillusionment’?”

Is the AI bubble about to burst? The conference’s speakers didn’t seem to think so. They offered plenty of evidence to the contrary, seen most tellingly in the stories of business leaders in traditionally risk-averse sectors.

For example, Karen Stroup, Chief Digital Officer of global payment processor WEX, shared that she had transitioned this year from “begging and cajoling for funds” to being “‘pushed by the Board’ to explore what all AI can do.” She said, “AI is not an option anymore, it’s an imperative. And so I no longer have to advocate for it. I’m actually asked to be bolder.”

Stroup has taken that remit to deploy AI across business operations and customer interactions. At WEX, AI customizes customer communications. It also automates the review process for claims, prioritizing them based on criteria such as size and complexity to speed up processing times and reduce staff workload. AI is even reviewing contracts.

Mike Henry has led a similar initiative at lender Home Trust, in his role as Chief Digital & Strategy Officer: “AI has been in banks for decades, initially as incredibly specialized and niche applications. But now, AI is accessible and usable by everyone, marking a significant shift in its application across the industry.”

Even the most risk-averse sectors are embracing AI. Igor Jablokov, CEO of AI enterprise search provider Pryon, told the MIT audience how his company deployed a large language model for high-stakes knowledge management at a U.S. nuclear power plant.

Meanwhile, AI providers aren’t taking their foot off the gas. The conference featured a lineup of experts from leading AI platforms like Google, Amazon, Meta, and OpenAI detailing their work in improving AI models and discovering new capabilities. There were also 70 featured startups, all developing new AI hardware or software.

Where AI’s Going: Expect More Leaps in Capability

Almost universally, the summit’s speakers said they expect to see AI capabilities jump within the next year. They coalesced around a few common predictions: increasing personalization, greater “common sense,” and deeper specialization.

Greater Agency & Personalization

Improvements in per-user memory will enable AI agents to deliver more personalized interactions with users, the conference speakers said. We’ve already seen early iterations of this recently roll out in ChatGPT and Pi. Combine that with AI’s growing ability to operate outside the confines of a single chat thread, and you get a world in which AI agents are acting autonomously on a human user’s behalf.

“Improving agency will empower AI agents to autonomously orchestrate a wider range of tasks across various modalities and platforms,” noted Perplexity AI CEO Aravind Srinivas. “Where is Perplexity going? True multi-modal concierge: talk to the agent and get a response no matter where you are.”

Vinod Khosla, early investor in OpenAI, predicted a similar world, in which “most use of the internet will be by agents, not by humans. There’ll be billions and maybe tens of billions of agents running around, multiple ones for each of us doing specialized things.”

More Common Sense

In part, AI will achieve greater agency by “acquiring more common sense.” Yann LeCun, Turing Award winner and Chief AI Scientist at Meta, said his team is “training systems on mental world models so they have some concept of what happens in the world when you take an action. With this, they can plan what to do.” He joked, “A good goal is to get AI as smart as your cat.”

Deeper Specialization

Maybe the most talked-about trend at the conference was specialization. Some speakers argued that the most effective AI systems of the future would be those tailored to specific functions and industries. Peter Grabowski, Lead of Gemini Applied Research at Google, explained, “We’re training smaller LLMs on small sets of examples to get to market faster… Customizability and the ability to tune to highly specialized tasks are proving to be the top two selection criteria of our enterprise customers.”

Google isn’t the only AI provider “long” on specialization. Perplexity.ai’s Srinivas cited their deft use of various specialized models as the key to their success: “We use one model to understand a query, another model to decompose it into a bunch of queries, one model to summarize everything you’ve retrieved, one model to come up with next questions. All working in parallel on Perplexity.”

Though there might be a lot of models working in the background, they’re all in service of providing users a seamless experience: “The user doesn’t care which models you use… they care about how you get the answer. Our model isn’t our moat. How we orchestrate various models, how we reduce the latency, is our moat.”
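Srinivas’s description of parallel, role-specific models can be sketched as a simple orchestration pipeline. Everything below is illustrative, not Perplexity’s actual stack: the `call_model` stub, the role names, and the three-way decomposition are all assumptions standing in for calls to real specialized models.

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(role: str, prompt: str) -> str:
    """Stand-in for an API call to a small, task-specific model.
    A real system would route each role to a different fine-tuned model."""
    return f"[{role}] {prompt}"

def answer(query: str) -> dict:
    # One model understands the query's intent.
    intent = call_model("intent", query)

    # Another decomposes it into sub-queries (hypothetical 3-way split).
    sub_queries = [f"{query} (facet {i})" for i in range(3)]

    # Fan the sub-queries out in parallel, as the quote suggests.
    with ThreadPoolExecutor() as pool:
        retrieved = list(pool.map(
            lambda q: call_model("retrieve", q), sub_queries))

    # One model summarizes everything retrieved; one suggests next questions.
    summary = call_model("summarize", " | ".join(retrieved))
    follow_ups = call_model("suggest", query)
    return {"intent": intent, "answer": summary, "follow_ups": follow_ups}
```

The orchestration layer, not any single model, carries the value here, which is exactly the “moat” argument Srinivas makes.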

Overall, the summit showed how AI’s trajectory is geared towards more nuanced and finely tuned applications, reflecting a maturing industry that continues to push deeper into various complex facets of our lives and work.

Complications: Issues the AI Sector is Working to Solve

Nobody argued that achieving this ambitious vision for AI would be easy.

Breaking Through Hardware and Software Bottlenecks

Summit speakers lamented compute bottlenecks, owing to both an insufficient supply of hardware and inefficient software.

Yann LeCun said these bottlenecks were throttling his team’s development of Meta AI: “One of the issues that we’re facing, of course, is the supply of LPUs [Language Processing Units] (and the cost of them)… Another issue is actually scaling out the running algorithm so that they can be parallelized on lots and lots of GPUs. Development around this has been kind of slow in the community.”

One player answering the call for more (and more efficient) hardware is Groq. The company’s Dinesh Maheshwari shared how its LPUs serialize natural language processing to gain extraordinary efficiency compared to the more widely used GPUs: “The Groq LPU Inference Engine is already at least 10X more energy efficient than GPUs, because its assembly line approach minimizes off-chip data flow.”

AI firms are working to resolve compute bottlenecks on the software side, too. MIT-incubated startup Liquid AI shared a new, more efficient approach to foundation models, built from first principles of flexible, or “liquid,” neural networks. “Large foundation models that are being trained today are extremely energy hungry,” said CEO Ramin Hasani. “Using Liquid AI for GenAI, you can build foundation models that are 10x-20x more efficient than GPTs [Generative Pre-trained Transformer models] and gain 10x-1000x faster inference times.”

Balancing Safety, Utility, and Progress in AI

While developers navigate the practical limits of today’s software and hardware, others in the sector are working on more nuanced issues. Aleksander Madry, Head of Preparedness at OpenAI, talked about the dual necessity of safety and preparedness: “We definitely should think about AI safety [i.e., protecting against bias, hallucinations, and other undesirable AI behavior]. But we use this word ‘preparedness’ to also prepare for the changes that will come… What does [AI] mean for the labor market? What does it mean for cybersecurity and so on? Preparedness means making sure that the downsides that this technology can bring do not happen but also the upsides will happen.”

OpenAI wasn’t the only provider to talk about how tricky pioneering AI can be. Peter Danenberg, a Software Engineer who has worked on Google Gemini since its early days as “Bard,” said his team is engaged in a delicate balancing act: “There’s this bizarre sort of optimization problem between safety and utility… to give an example, somebody wanted to do some multimodal analysis on monuments, and they couldn’t use the model for about 75% of the [images] because there was a human face in that picture.” He shared this anecdote as advice to startups building their own models: “That’s just one of the things that you have to be aware of when you go to market.”

Gathering and Cleaning Data

Data sits at the center of AI, powering both its progress and its application. Unfortunately, given its state in most organizations, data is also proving a key problem to solve. “Putting data into action is ‘the whole game,’” said Raj Aggarwal, GM of GenAI at Amazon. “But the problem is that the data is messy: It’s in different structures. If you’re on-prem or multi-cloud, it’s even messier. To go from prototype to production, you need to resolve these issues. You need to resolve these issues to get data into the prompt.” He closed with a joke: “Hopefully there’s an LLM that can help to do this.”
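Aggarwal’s point about messy, differently structured data can be illustrated with a minimal normalization step before any text reaches a prompt. The record formats and field names below are hypothetical, standing in for data scattered across cloud, on-prem, and legacy systems:

```python
import json

# Hypothetical records for one customer, arriving in three different shapes.
messy_records = [
    {"customer": "Acme", "spend_usd": 1200},    # cloud system A: clean dict
    '{"cust_name": "Acme", "charges": "900"}',  # on-prem export: JSON string
    ("Acme", 300),                              # legacy dump: bare tuple
]

def normalize(record) -> dict:
    """Coerce each source format into one shared schema."""
    if isinstance(record, str):
        record = json.loads(record)
    if isinstance(record, tuple):
        return {"customer": record[0], "spend_usd": float(record[1])}
    name = record.get("customer") or record.get("cust_name")
    spend = record.get("spend_usd") or record.get("charges")
    return {"customer": name, "spend_usd": float(spend)}

clean = [normalize(r) for r in messy_records]

# Only after normalization can the data be rendered into a prompt.
context = "\n".join(f"{r['customer']}: ${r['spend_usd']:.2f}" for r in clean)
prompt = f"Summarize this customer's spend:\n{context}"
```

Real pipelines face far uglier variants of this (schema drift, nulls, conflicting keys), which is why Aggarwal calls getting data into the prompt “the whole game.”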

Implications: What Software Startups Should Take Away From the Summit

Putting it all together, software startups might change how they’re engaging with AI based on a few key takeaways from MIT’s Future of Business with AI Summit. Whether it applies to your software product or your business operations, here’s what to consider:

If you haven’t yet, make AI a strategic priority now. If you don’t yet have a strategy and structured program associated with implementing AI into your software and/or business operations, make one now. Even if it’s narrow in scope, you should work against a plan.

If you don’t, your company will get left behind by startups and enterprises alike. At the conference, we heard from more than 70 startups who are rapidly deploying highly capable software on the foundation of this latest wave of AI. And even ultra-regulated, slow-moving enterprises are making AI a firm-wide strategic priority. The goal is to achieve unprecedented customer value and record-setting operational efficiency.

Make build/buy decisions with deference to the speed, dynamism, and diffusion of this space.

Many at the conference expect more leaps in model capability and reasoning within months. They also described an ongoing explosion of model specialization.

Consider the following:

  • Build now or buy and wait? Do you need to build your own AI capability now or can you buy into a solution that offers a 70% solution today, with the promise of improvements soon?
  • Refactor a foundation model or go specialized? Should you modify a cost- and compute-intensive foundation model to your particular use case or look for smaller, more specialized models that might suit your needs?
  • How do you plan for scale? How do you cope with today’s compute bottlenecks with an eye on soon-to-come step changes in speed and cost efficiencies?

Stay tuned to the risks and evolve your mitigating strategies.

Regulations, principles, and other key guidelines are rapidly taking shape, with major milestones expected throughout this year. Make somebody at your organization responsible for determining what’s relevant to your business and maintaining internal AI use and development standards that help to mitigate ever-evolving risks and threats.

Get ready for true AI agency and orchestration.

Many at the summit expect AI to soon do much more on its own. That has implications for how you run your business, structure your staff, and serve your customers:

  • A business on AI. What speed, scale, and efficiency should you expect from your business when it gains the operating leverage of AI?
  • Staff and AI as partners. How do you create space for AI agents in your structure and processes? And how do you train your staff to delegate lower-order work and take on a broader, higher-order functional scope?
  • Serving customers and their AI agents. Evaluate how you might start serving the needs of your human customers and the AI copilots who will be engaging with your website, salespeople, product, and other key touchpoints on their behalf. For example, is all your website content accessible to both human readers and agents in a world where retrieval mechanisms still run mainly on text?

Looking Ahead to Next Year’s Summit

As discussions at the Future of Business with AI Summit ended, clear directives for the future of artificial intelligence emerged. There was little talk of an AI downturn, showcasing instead a technology that’s realizing new frontiers in enterprise. Leaders from various sectors proved this latest wave of AI to be not merely an emerging tool but an essential component of modern business strategy. The takeaway for software startups is unmistakable: prioritize AI now, and stay agile enough to seize on its rapid advancements and expanding capabilities. And expect agents that can deliver increasingly personalized and autonomous interactions.

At next year’s summit, we might expect speakers to describe how they spent the year resolving the complexities of hardware limitations, data quality, and the balance between innovation and safety. But again, the overarching discussion will likely be one of opportunity and imperative.

Jared Brickman

Jared Brickman is Senior Director of the Marketing Center of Excellence at leading software investor Insight Partners, where he advises CMOs of the firm’s 500+ portfolio companies on how to go to market.

