xAI seeking $1 billion in equity
In July this year, Elon Musk, the billionaire entrepreneur, CEO of Tesla and SpaceX, and owner of X, launched his new AI startup, xAI.
In a filing with the Securities and Exchange Commission on Tuesday, X.AI Corp., operating as xAI, disclosed its intention to raise $1 billion through an equity offering. The company has raised over $134 million of that amount so far through equity financing. The filing indicated that xAI "has entered into a binding and enforceable agreement for the purchase and sale" of the remaining $865 million.
However, not long after the news broke, Musk appeared to contradict the filing, posting on X: “We are not raising money right now.”
The SEC filing comes a month after Musk announced that the company's "Grok" chatbot, designed to compete with OpenAI's ChatGPT and Google's Bard, had begun beta testing.
Musk, who co-founded OpenAI, the creator of ChatGPT, in 2015 and left its board in 2018, established xAI as a contender against major AI players such as OpenAI and Google. He has been critical of these large tech companies' AI initiatives, particularly around what he perceives as censorship.
Musk has claimed that his vision for xAI is “to understand the true nature of the universe.”
Grok
According to xAI’s official announcement of Grok, it is “an AI modelled after the Hitchhiker’s Guide to the Galaxy, so intended to answer almost anything and, far harder, even suggest what questions to ask!
“Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humour!
“A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the X platform.”
At the core of Grok is Grok-1, xAI's Large Language Model (LLM), which has been developed over the past four months. According to xAI, the model has undergone numerous iterations during this period.
Following the launch of xAI, the company first built a prototype LLM, Grok-0, with 33 billion parameters. This early model was comparable to LLaMA 2 (70B) on standard language model benchmarks despite using only half the training resources. Over the past two months, xAI says it has improved the model's reasoning and coding abilities, culminating in Grok-1. This language model is “significantly more powerful”, the company claims, scoring 63.2% on the HumanEval coding task and 73% on the MMLU benchmark.