OpenAI & Jony Ive's AI necklace rumored to have iPod shuffle form factor
According to analyst Ming-Chi Kuo, OpenAI and Jony Ive are planning a neck-worn AI device with a form factor similar to the iPod Shuffle.

Former Apple designer Jony Ive
TF International Securities analyst Ming-Chi Kuo revealed on X that the new AI device from Jony Ive and OpenAI is expected to enter mass production in 2027. In his post, he said the prototype is slightly larger than Humane's (failed) AI Pin but remains as compact and elegant as an iPod Shuffle.
The device won't include a display and is designed to be worn around the neck, using cameras and microphones for environmental awareness. It will connect to smartphones and PCs for computing power and display output, positioning it squarely in the emerging category of ambient, screenless AI.
OpenAI acquires Jony Ive's startup to bring hardware vision to life
Kuo's post followed news that OpenAI is acquiring Ive's hardware startup, io, in a deal worth around $6.5 billion. OpenAI plans to release the first products from the collaboration in 2026, with full-scale production coming the following year.
My industry research indicates the following regarding the new AI hardware device from Jony Ive's collaboration with OpenAI:
1. Mass production is expected to start in 2027.
2. Assembly and shipping will occur outside China to reduce geopolitical risks, with Vietnam currently the... pic.twitter.com/5IELYEjNyV
-- Ming-Chi Kuo (@mingchikuo)
The partnership is intended to bring Ive's industrial design expertise into OpenAI's ecosystem as the company moves beyond software and into the physical world.
By manufacturing the device outside China, OpenAI and Ive are also signaling a deliberate move to avoid geopolitical risks. That supply chain shift echoes Apple's own recent efforts to diversify production beyond China.
What it means for Apple and the future of AI devices
This project represents a major push into what analysts call "physical AI," where artificial intelligence moves off the screen and into wearable, voice-activated, and context-aware devices. While companies like Meta and Google have dabbled in ambient computing, OpenAI has lacked a hardware strategy -- until now.
For Apple users, the new device could be the first serious alternative to AirPods or Apple Watch for passive, always-available AI support. The lack of a screen, paired with camera and audio input, suggests a future where interaction happens naturally, without the need to pull out a phone or look at a display.
That's a sharp contrast with Apple's Vision Pro headset or iPhone-first approach, and it could pressure Apple to accelerate its own ambient computing roadmap. The iPod Shuffle comparison is a deliberate callback to a time when Apple changed how we interacted with music by making hardware almost invisible.
OpenAI and Ive appear to be chasing that same level of cultural integration, this time for AI. Whether it becomes the next iPhone or the next AI novelty will depend on execution, ecosystem, and how ready the public is to embrace a new kind of wearable intelligence.
Comments
Ok it's dead then.
People like to look at stuff.
If you want to speak to a device you can do that to your phone or watch already.
And you can look at images on them.
The End.
The iPod nano eventually took on that last square iPod shuffle form factor, adding a touchscreen and keeping the clip.
By comparison, roughly 75% of the population wears a baseball cap occasionally. That would be a much better place to put a piece of tech. It is more securely mounted there, and much more stable in terms of motion. Most likely Jony Ive has considered this, and I suspect that's where it's actually going, despite Kuo's prognostication.
If it offloads processing to a paired iPhone or other local device, or has a built-in neural chip, it would be much faster and more private, but it would need fairly advanced models capable of running on low-end hardware.
It would be useful for students and in business. A student who gets stuck while studying would normally ask a teacher for help; instead, the AI device could see the screen, and the student could point to the issue and ask about it. If it needs to display something, it could show it on a phone or computer screen. If it has agent capabilities, it could control the screen and type.
The same applies in business. Someone processing company earnings reports might need to make a presentation comparing the data. They could open the earnings reports for each year, have the camera look at them, and tell the device to load the data into Excel and create a graph showing net income growth.
Someone working in Photoshop could describe actions, such as removing an object, lightening the photo, cropping it to landscape, or adding a text caption in a suitable font, and the device would carry them out.
The main inefficiency Sam Altman alluded to improving was having to take out a computer, load a browser, open a chat window, type in a problem, and wait for a reply. They want interaction with AI to be more efficient than that, which would broaden its appeal.
They have to focus on improving things that matter to people and are common sources of inefficiency.
Jony Ive says Rabbit and Humane made bad products | The Verge