Apple, and in particular then-CEO Steve Jobs, was captivated by the project's potential. According to Tom Gruber, a cofounder of Siri, Jobs was personally involved in every aspect of the acquisition and in seeing it succeed inside Apple. Despite that enthusiasm, some Apple executives from the period describe the digital assistant as fundamentally flawed and ill-equipped for the tasks Apple had envisioned. In its early days, Siri worked well, but only within a handful of narrow domains.
Former Apple executive Richard Williamson described the original Siri as a demo that performed well for a small number of users but could not be scaled to Apple's user base, noting that the original implementation relied in part on illusion. The early version, he said, had no real artificial intelligence behind it; he called it a “hot mess,” explaining that it depended on simple keyword matching rather than natural language processing or contextual understanding.
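For contrast, keyword matching of the kind Williamson describes can be sketched in a few lines. The handlers and trigger words below are invented for illustration; the point is that such a dispatcher has no grammar or context, only string lookups.

```swift
import Foundation

// Illustrative only: a keyword-matching dispatcher of the kind
// Williamson describes. It maps trigger words to canned handlers
// with no parsing or context, so negations and phrasing are lost.
let handlers: [(keyword: String, action: (String) -> String)] = [
    ("weather", { _ in "Here's the forecast." }),
    ("alarm",   { _ in "Alarm set." }),
    ("call",    { _ in "Placing the call." }),
]

func respond(to utterance: String) -> String {
    let text = utterance.lowercased()
    for (keyword, action) in handlers where text.contains(keyword) {
        return action(text)   // first keyword wins; no understanding
    }
    return "Sorry, I didn't get that."
}

print(respond(to: "What's the weather like?"))  // "Here's the forecast."
print(respond(to: "Don't set an alarm"))        // wrongly: "Alarm set."
```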
Reports suggest that even with recent advances in artificial intelligence, Siri's reliability in real-world use remains questionable. The underlying issue appears to be how Apple's approach differs from industry norms: unlike its competitors, Apple places privacy and data stewardship at the center of development, and that may be holding its digital assistant back. Gruber has pointed out that Apple's deep commitment to privacy creates an inherent tension, since enhancing Siri's functionality generally requires exactly the kind of user data Apple is reluctant to collect.
Siri is often perceived as less intuitive than competitors such as Google Assistant because it has access to far less user data, which limits both its apparent intelligence and the naturalness of its interactions. That constraint remains a challenge for the next iteration of Siri.
The forthcoming Siri model will consist of two main components: a small language model running directly on the iPhone, with more complex queries forwarded to OpenAI, contingent on user permission. Apple's on-device AI models are estimated at roughly 3 billion parameters. By comparison, OpenAI's GPT-4 reportedly contains around 1.8 trillion parameters, roughly 600 times larger. Even models known for efficiency, such as DeepSeek's, remain enormous by on-device standards, at an estimated 671 billion parameters (though, as a mixture-of-experts design, only a fraction of those are active for any given query).
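The reported split implies a routing pattern: classify each query, answer simple ones locally, and hand off complex ones to the cloud only after asking the user. The sketch below is a rough illustration of that pattern under stated assumptions; none of the types (OnDeviceModel, CloudModel, AssistantRouter) or the length-based complexity heuristic correspond to actual Apple or OpenAI APIs.

```swift
import Foundation

// Hypothetical sketch of the reported two-tier design: a small
// on-device model handles simple queries, and complex ones are
// sent to a far larger cloud model only with user consent.

enum QueryComplexity {
    case simple   // answerable by the ~3B-parameter on-device model
    case complex  // needs the much larger cloud model
}

protocol LanguageModel {
    func respond(to query: String) async throws -> String
}

struct OnDeviceModel: LanguageModel {
    // Stand-in for a small local model.
    func respond(to query: String) async throws -> String {
        "on-device answer for: \(query)"
    }
}

struct CloudModel: LanguageModel {
    // Stand-in for a remote model such as GPT-4.
    func respond(to query: String) async throws -> String {
        "cloud answer for: \(query)"
    }
}

struct AssistantRouter {
    let local: LanguageModel
    let cloud: LanguageModel
    /// Asks the user before any query leaves the device.
    let userConsentsToCloud: () async -> Bool

    /// Crude placeholder heuristic; a real system would make a
    /// far more sophisticated capability judgment.
    func classify(_ query: String) -> QueryComplexity {
        query.count > 80 ? .complex : .simple
    }

    func handle(_ query: String) async throws -> String {
        switch classify(query) {
        case .simple:
            return try await local.respond(to: query)
        case .complex:
            // Per the reported design, cloud handoff is opt-in.
            guard await userConsentsToCloud() else {
                return try await local.respond(to: query) // best effort
            }
            return try await cloud.respond(to: query)
        }
    }
}
```

The consent check sits on the routing path itself, so under this sketch no query can reach the cloud model without the user's explicit approval, mirroring the permission gate described in the reporting.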