Elon Musk says Tesla is almost finished designing its AI5 chip and has already begun early work on its AI6 processor. He added that chip generations AI7, AI8 and AI9 are planned, and that his team is “aiming for a 9 month design cycle.”
The comments came in a post on X, and they mark a fresh step in Tesla’s move toward custom AI hardware for vehicles, robots and data centers.
AI5 nears design finish, AI6 follows
Tesla’s AI5, often described as the successor to its current AI4 or Hardware 4 platform, has now reached the final design stage. AI5 samples are expected in 2026, with volume production targeted around mid‑2027 after earlier delays.
AI5 targets a large performance jump over AI4. Internal estimates suggest AI5 could deliver up to about 40 times the performance of earlier Tesla hardware for certain inference workloads, while cutting power use and cost compared with top-end Nvidia parts used for similar tasks.
Work on AI6 has already started, Musk says, as Tesla pivots resources from its now‑wound‑down Dojo supercomputer project. In past comments, Musk indicated that engineering choices “converged” on AI6 as the next major compute platform, with some sources suggesting a target of roughly double AI5 performance once it reaches scale.
Dual manufacturing plan
Tesla is not planning to rely on a single manufacturer for these chips. AI5 is expected to be produced at both TSMC and Samsung facilities, with variants tuned to each foundry’s process but intended to run Tesla’s software stack in the same way.
For AI6, Tesla has signed a multiyear deal worth about $16.5 billion with Samsung, tied to production at the company’s advanced fab in Taylor, Texas. The agreement runs into the 2030s and is aimed at high-volume manufacturing using leading-edge nodes. Musk has said Tesla engineers will work side by side with Samsung teams in Texas to speed problem‑solving and improve line efficiency.
Tesla is building these chips for more than its cars. The AI5 and AI6 generations are expected to form a common platform for Full Self‑Driving (Supervised) in vehicles, for the Optimus humanoid robot, and for certain data center workloads.
That approach contrasts with earlier years, when Tesla used more distinct hardware paths for in‑car systems and for training infrastructure. A shared architecture could cut engineering overhead, simplify software development and make it easier to roll improvements across products. For Optimus, AI6‑class chips would be tuned for lower power draw so they can fit inside mobile, battery‑powered robots.
Some reports hint that AI8 could support uses in orbit, including on SpaceX platforms, though neither company has detailed formal deployment plans.
Positioning against Nvidia
Tesla is increasing its in‑house chip push, yet Musk has stated that the company will keep buying Nvidia hardware for AI training. In recent remarks, he said Tesla is “not about to replace Nvidia” and will continue to rely on Nvidia GPUs in data centers while using Tesla chips mainly for inference in vehicles, robots and some internal clusters.
A nine‑month design cycle would be far shorter than common timelines in high‑end semiconductors, which can stretch to 18 to 24 months from concept to tape‑out. The target signals that Tesla wants to refresh its AI hardware at a pace closer to smartphones, with frequent, incremental gains instead of long gaps between generations.