The scope of Mark Zuckerberg’s interest in AI startups is becoming increasingly clear, with a growing list of potential acquisitions under discussion.
According to sources, the Meta CEO has recently considered acquiring several companies, including Safe Superintelligence (SSI), founded by Ilya Sutskever; Thinking Machines Lab, led by former OpenAI CTO Mira Murati; and Perplexity, the AI search company challenging Google. None of these discussions progressed to formal offers, owing to disagreements over price and strategic direction, but they signal how aggressively Zuckerberg is moving to reinvigorate Meta’s AI division.
Details about the new team Zuckerberg is assembling are beginning to emerge. Daniel Gross, co-founder and CEO of SSI, and former GitHub CEO Nat Friedman are set to co-lead the Meta AI assistant initiative. They will report to Alexandr Wang, whom Zuckerberg brought in through Meta’s reported investment of more than $14 billion in Scale. Wang recently bid farewell to his Scale team and has joined Meta’s offices, where he is meeting with key executives and continuing to build out the new AI team Zuckerberg has tasked him with assembling. An official announcement about the team is expected soon.
Meanwhile, Sutskever, Murati, and Perplexity CEO Aravind Srinivas have opted to raise additional funding at higher valuations instead of joining Meta. Sutskever recently raised billions for SSI, with both Meta and Google reportedly investing in the venture. Murati has similarly raised substantial funding, though neither company is close to launching a product. Srinivas, for his part, is in the process of raising around $500 million for Perplexity.
Representatives for the companies involved either declined to comment or did not respond before publication. Reports from The Information and CNBC detailed Zuckerberg’s discussions with Safe Superintelligence, while Bloomberg reported on the talks with Perplexity.
Zuckerberg’s recruitment drive reflects an urgent push to refine Meta’s AI strategy and highlights just how competitive the market for top AI talent has become. Insiders report that compensation offers in the nine-figure range, and in some cases ten figures, are becoming common in the industry. Notably, some senior employees at OpenAI already earn in that range following the company’s rapid rise in valuation.
In response to this talent poaching, OpenAI CEO Sam Altman has expressed some concern. His recent appearance on his brother’s podcast, where he claimed “none of our best people” are leaving for Meta, may have been intended to project confidence, but it risks reading as a slight to the colleagues who did leave. His comments about Meta’s heavy spending on talent potentially harming company culture also drew scrutiny, given OpenAI’s own significant hiring expenditures.
“We believe glasses represent the optimal form factor for AI”
On a recent Zoom call, Alex Himel, Meta’s VP of wearables, reflected on discussions he had just had with Alexandr Wang, Meta’s newly appointed AI chief.
“There are now quite a few ‘Alexes’ I communicate with regularly,” Himel quipped, kicking off a conversation centered on the rollout of Meta’s new glasses in partnership with Oakley. “I just attended my first meeting with him, and with several participants in the room, it was somewhat amusing trying to figure out who was talking. Then it dawned on me that it was Alex.”
The following Q&A has been edited for conciseness and clarity:
How did your recent meeting with Alex go?
Our discussion focused on how to maximize the effectiveness of AI for glasses. There are specific opportunities for application in glasses that simply do not translate to smartphones. The challenge we face is finding the right equilibrium, as AI can serve an incredibly broad audience or can be tailored for particular use cases.
The balance is crucial since various aspects of the Llama models may not apply to glasses, while certain features, such as egocentric video capabilities, are essential for realizing ambitious use cases that wouldn’t otherwise come to fruition.
You’re marketing this new set of glasses with Oakley as “AI glasses.” Is that the new categorization for this product line? Are they truly AI glasses, rather than just smart glasses?
We refer to the category specifically as AI glasses. You had a good, long demonstration of Orion, and our thinking had been that this category needed the right field of view and a display to overlay digital content. Our perspective has changed: we now believe we can achieve scale faster, and AI is central to making that possible.
Currently, our primary use cases for the glasses include audio functions—such as calls, music, and podcasts—and capturing photos and videos. Data from our active user base shows that these features have been the leading attractions since launch, with audio as the top engagement driver, followed closely by photography and videography.
AI has consistently ranked third since the beginning. As we expand market presence—now in 18 regions—and introduce new functionalities, AI’s usage is steadily climbing. Our largest software investment is in AI capabilities, as we firmly believe glasses are the ideal format for AI, given that they are worn continuously and can perceive one’s surroundings.
Is your intention for AI to eventually surpass audio and photo functions as the most utilized feature of the glasses, or is that not your goal?
Statistically, the best we could reach is a tie. Our goal is to see AI adopted by a broader audience and used more frequently. There is room to improve audio quality and image capture, but AI has the greater potential for improvement.
How much AI processing is conducted onboard the glasses versus in the cloud? I would assume physical constraints play a significant role.
We now have billion-parameter models that can operate on the device. As a result, we are increasingly shifting more processing onboard the glasses, alongside conducting some functions on a connected smartphone.
During Apple’s recent WWDC announcements, the company introduced features like the Wi-Fi Aware APIs, which we’re eager to test; they should allow smooth media transfers without cumbersome prompts. Improved background-processor access could also let us process images while media is transferring, enabling seamless syncing similar to what we have on Android.
Do you envision the market for these new Oakley glasses matching or exceeding that of Ray-Ban glasses, or will they remain more specialized in their appeal due to their athletic focus?
Our collaboration with EssilorLuxottica has been robust, and Ray-Ban is their flagship brand; the iconic Wayfarer stands out among their offerings. For our original Ray-Ban Meta glasses, we partnered on their most successful style to ensure wide appeal.
Oakley is their second-largest brand, with a significant user base and popular styles such as the Holbrook. The HSTN frame we’re introducing is a well-regarded model within the sporty segment. We’ve seen increased use of Ray-Ban Meta glasses in active settings, and this marks our entry into the performance category.
What are your thoughts on Google’s recent revelations regarding their XR glasses and partnerships in the eyewear sector?
Our partnership with EssilorLuxottica now spans five years, and that close collaboration let us bring the Oakley Meta glasses to fruition in under nine months.
The demonstrations Google showed looked quite impressive and engaging. But they didn’t make any specific product announcements, so I can only evaluate them in a general sense. It’s flattering to see others recognize the traction we have in this field and want to join the conversation.
Regarding AR glasses, what insights have you gained from the Orion demonstrations shared with the public?
We’ve been working hard on this, and we’ve hit significant internal milestones for the next version aimed at a market launch. A key takeaway from our demonstrations has been how well the interaction model works, combining eye tracking with the neural band. Having experienced it myself during March Madness, it let me essentially watch games in a virtual setting, which made the experience far more enjoyable.
In Other News
- TikTok continues to operate in the U.S. despite legal challenges. President Trump has again extended the deadline for enforcing the legislation aimed at banning the China-owned platform. The situation leaves the American tech companies that provide operational support for TikTok in legal limbo, since enforcement of the ban ultimately hinges on them.
- Amazon’s workforce is expected to shrink due to AI. In an employee memo made public, Amazon CEO Andy Jassy outlined the company’s push to use AI for operational efficiency. With roughly 30 percent of Amazon’s code already generated by AI, the direction is clear: automating roles currently handled by people, such as sales and customer service.
Additional Resources
Explore more:
- An overview from Coatue on the current tech markets: Presentation details.
- Insights on the evolving nature of AI labs.
- Exploring “The OpenAI Files.”
- Speculation regarding the debut foldable iPhone.
- A fascinating look at the Dream Recorder device.
- A video deep dive into tech presentations: Good Work’s exploration.
- Insights from YC’s AI Startup event, featuring Andrej Karpathy and Elon Musk.
If you have not subscribed yet, consider joining Technology News for complete access to Command Line and all our in-depth reporting.
Feedback is always welcome, particularly from those who may have opted against offers from Zuckerberg. Feel free to respond here or contact me securely via Signal.
Thank you for subscribing.