In February, Demis Hassabis, the CEO of Google’s DeepMind AI research lab, cautioned against relying solely on increased computing power for advancements in artificial intelligence. He emphasized the need for fundamental research breakthroughs in order to reach the “next level” of AI.
George Morgan, a former Tesla engineer, shares this sentiment. He founded Symbolica AI to build models that can deliver useful results without brute-force scale.
“Traditional deep learning and generative language models require unimaginable scale, time and energy to produce useful outcomes,” Morgan told TechCrunch. “By building [novel] models, Symbolica can accomplish greater accuracy with lower data requirements, lower training time, lower cost and with provably correct structured outputs.”
During his time at Tesla, Morgan realized that current AI methods, which heavily rely on increasing computing power, are not sustainable in the long term.
“Current methods only have one dial to turn: increase scale and hope for emergent behavior,” he explained. “However, scaling requires more compute, more memory, more money to train and more data. But eventually, [this] doesn’t get you significantly better performance.”
Morgan is not alone in this assessment. In a memo this year, two executives at TSMC, a semiconductor fabricator, argued that if the industry keeps scaling compute at its current pace, it will need a 1-trillion-transistor chip within a decade. It is far from certain that building such a chip is even technologically feasible.
In addition, a report co-authored by Stanford and Epoch AI, an independent AI research institute, found that the cost of training cutting-edge AI models has risen sharply over the past year and a half. OpenAI and Google alone reportedly spent a combined total of close to $270 million training their latest models.
With costs expected to continue rising, Morgan began exploring “structured” AI models, also known as symbolic AI. Rather than extracting patterns from massive amounts of data, as neural networks do, these models encode the underlying structure of a problem directly, which in principle yields better performance with far less data and compute.
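Symbolica hasn’t published the details of its approach, but the basic contrast is easy to see in miniature. In the toy Python sketch below (purely illustrative, not Symbolica’s code), domain knowledge is encoded as explicit facts and a rule rather than learned from examples, and every conclusion carries the derivation that produced it:

```python
# Toy symbolic reasoner. Illustrative only, not Symbolica's system.
# Knowledge is encoded as explicit structure (facts plus a rule), and
# each derived fact is returned with the chain of evidence behind it.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive_grandparents(facts):
    """Apply the rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)."""
    derived = set()
    for (r1, x, y) in facts:
        for (r2, y2, z) in facts:
            if r1 == r2 == "parent" and y == y2:
                derived.add((("grandparent", x, z),
                             f"parent({x},{y}) & parent({y},{z})"))
    return derived

for conclusion, proof in derive_grandparents(facts):
    print(conclusion, "because", proof)
# ('grandparent', 'alice', 'carol') because parent(alice,bob) & parent(bob,carol)
```

No training data is involved, and the “because” clause is exactly the kind of built-in explainability Morgan is pointing to; the open question is whether that transparency survives at the scale of real-world problems.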
“It’s possible to produce domain-tailored structured reasoning capabilities in much smaller models,” Morgan stated. “This marries a deep mathematical toolkit with breakthroughs in deep learning.”
Symbolic AI has been around for decades, but it fell out of favor as deep learning came to dominate the field. Morgan believes it could nonetheless be better equipped to encode knowledge efficiently, reason through complex scenarios, and explain how it arrives at its conclusions.
“Our models are more reliable, more transparent and more accountable,” Morgan emphasized. “There are immense commercial applications of structured reasoning capabilities, particularly for code generation – i.e. reasoning over large codebases and generating useful code – where existing offerings fall short.”
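Morgan didn’t say how Symbolica enforces correctness, but one common route to structurally guaranteed output, sketched hypothetically below with Python’s standard ast module, is to have a model emit a typed syntax tree rather than free-form text, so that whatever it produces is well-formed code by construction:

```python
import ast

# Hypothetical sketch of "structured output" for code generation:
# build the program as a syntax tree instead of emitting raw text.
# Anything assembled this way is syntactically valid Python by construction.
func = ast.FunctionDef(
    name="double",
    args=ast.arguments(
        posonlyargs=[], args=[ast.arg(arg="x", annotation=None)], vararg=None,
        kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[],
    ),
    body=[ast.Return(ast.BinOp(ast.Name("x", ast.Load()),
                               ast.Mult(), ast.Constant(2)))],
    decorator_list=[],
    returns=None,
    type_params=[],  # required on Python 3.12+, harmless on older versions
)
module = ast.Module(body=[func], type_ignores=[])
print(ast.unparse(ast.fix_missing_locations(module)))  # Python 3.9+
# def double(x):
#     return x * 2
```

A text-only generator can always emit code that doesn’t parse; a tree-based one cannot, which is a small-scale version of the “provably correct structured outputs” Morgan describes.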
Symbolica’s 16-person team has developed a toolkit for creating symbolic AI models, along with models pre-trained for specific tasks such as generating code and proving mathematical theorems. The exact business model is still taking shape, but Morgan said Symbolica may offer consulting services and support to companies that want to adopt its technologies.
Symbolica has officially launched out of stealth mode and has already secured a $33 million investment led by Khosla Ventures. Other investors include Abstract Ventures, Buckley Ventures, Day One Ventures, and General Catalyst.
“To enable large-scale commercial AI adoption and regulatory compliance, we need models with structured outputs that can achieve greater accuracy with fewer resources,” said Vinod Khosla, founder of Khosla Ventures. “George has amassed one of the best teams in the industry to do just that.”
However, not everyone is convinced that symbolic AI is the answer. Os Keyes, a Ph.D. candidate at the University of Washington, notes that such models depend heavily on structured data, which constrains the range of problems they can be applied to.
“This could still be interesting if it combines the advantages of deep learning and symbolic approaches,” Keyes commented, citing DeepMind’s AlphaGeometry project as an example. “But only time will tell.”
In response, Morgan argues that current training methods will not be able to keep up with the demand for AI, making alternative solutions necessary. He also points out that Symbolica is strategically well-positioned, with several years of funding and smaller, more cost-effective models.
“Tasks like automating software development will require models with formal reasoning capabilities and cheaper operating costs,” Morgan stated. “Public perception may still be that ‘scale is all you need,’ but thinking symbolically is crucial for progress in the field.”
Symbolica is entering an AI industry that is fiercely competitive and awash in funding, but Morgan remains optimistic. He expects the team to double in size by 2025 and is confident that symbolic AI can make a significant impact across a range of industries.