Several experts remain sceptical of Apple’s claim that security is the priority for its in-house AI until the company can demonstrate it in practice
Apple unveiled Apple Intelligence, its much-anticipated artificial intelligence system, on Monday at its annual developers conference. According to CEO Tim Cook, the technology aims to personalise user experiences, automate tasks, and set a “new standard for privacy in AI.”
While Apple maintains that security is the first priority for its in-house AI, its partnership with OpenAI has drawn considerable criticism. Since its launch in November 2022, OpenAI’s ChatGPT has been at the centre of privacy concerns because it initially collected user data for model training without express consent; an opt-out for such data collection was only introduced in April 2023.
According to Apple, the ChatGPT partnership will be used only for specific tasks, such as composing emails and other writing tools, and only with the user’s express consent. Security experts, however, will be watching closely to see how these and other issues play out.
“Apple is expressing many positive intentions,” said Cliff Steinhauer, Director of Information Security and Engagement at the National Cybersecurity Alliance. “But how well it is actually implemented will decide how effective it is.”
Apple was a latecomer to generative AI, lagging behind competitors like Google, Microsoft, and Amazon, whose shares have risen on investor enthusiasm for their AI ventures. Until now, Apple had not built generative AI into any of its flagship consumer products.
The company suggests the delay was deliberate, part of an effort to “apply this technology in a responsible way,” as Cook put it at Monday’s event. While other companies hurried products to market, Apple spent recent years building most of the Apple Intelligence offerings on its own technology and proprietary core models, with the goal of minimising the transfer of user data outside the Apple ecosystem.
Apple has long made privacy a selling point, but artificial intelligence, which relies on collecting large amounts of data to train large language models, poses a serious challenge to that stance. Prominent sceptics such as Elon Musk argue that it is impossible to integrate AI and protect user privacy at the same time; Musk went so far as to say he will forbid his staff from using Apple devices for work once the features are rolled out. Some experts, however, disagree with that assessment.
“Apple’s announcement is paving the way for companies to harmonise data privacy and innovation,” said Gal Ringel, co-founder and CEO of data privacy software firm Mine. “Unlike other recent AI product introductions, this news was well received, showing that in the current environment, putting privacy first is a strategy that pays off handsomely.”
Many recent AI releases, Steinhauer said, have ranged from frivolous and ineffective to potentially dangerous, reflecting Silicon Valley’s classic “move fast and break things” ethos. Apple, he noted, appeared to be taking a different approach.
“Historically, AI platforms have released products and addressed issues as they arise,” Steinhauer said. “Apple, by contrast, is proactively addressing common issues up front. This approach illustrates the difference between security as an afterthought, which is fundamentally flawed, and security by design.”
Apple’s new Private Cloud Compute technology is central to the company’s AI privacy pledges. For Apple Intelligence features, Apple intends to do most of the processing on the device itself. For tasks that demand more computing power than the device can provide, officials said on Monday, the company will shift processing to the cloud while still guaranteeing the protection of customer data.
To do this, Apple will send only the information needed to answer each request, wrap the data in additional security measures at every endpoint, and never store it permanently. Officials said Apple will also make all tools and software related to the private cloud publicly available for independent third-party verification.
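Apple has not published the internals of that routing and minimisation logic, but the description suggests a pattern along the following lines. This Swift sketch is purely illustrative: every name in it (AIRequest, route, minimized, handle) and the compute threshold are assumptions made for the example, not Apple’s actual API.

```swift
import Foundation

// Illustrative sketch of the pattern Apple describes for Private
// Cloud Compute: prefer on-device inference, and when a request
// exceeds local capacity, send only the minimal data needed, with
// nothing retained permanently. All names here are hypothetical.

struct AIRequest {
    let task: String          // e.g. "rewrite", "summarise"
    let payload: String       // only the text needed for this task
    let estimatedCost: Int    // rough compute estimate (arbitrary units)
}

enum ExecutionTarget {
    case onDevice
    case privateCloud
}

/// Decide where a request runs. The budget threshold is an assumption;
/// the real system's heuristics are not public.
func route(_ request: AIRequest, deviceBudget: Int = 100) -> ExecutionTarget {
    request.estimatedCost <= deviceBudget ? .onDevice : .privateCloud
}

/// Strip a request down to the minimum before it leaves the device,
/// mirroring the "only the information required" claim. A real system
/// would redact identifiers, metadata, and unrelated context here.
func minimized(_ request: AIRequest) -> AIRequest {
    AIRequest(task: request.task,
              payload: request.payload,
              estimatedCost: request.estimatedCost)
}

func handle(_ request: AIRequest) {
    switch route(request) {
    case .onDevice:
        print("Running '\(request.task)' locally; no data leaves the device.")
    case .privateCloud:
        let outbound = minimized(request)
        // Assume an encrypted transport; the server keeps no durable copy.
        print("Sending minimised '\(outbound.task)' request to private cloud; response is ephemeral.")
    }
}

handle(AIRequest(task: "rewrite", payload: "Draft email…", estimatedCost: 40))
handle(AIRequest(task: "summarise", payload: "Long document…", estimatedCost: 400))
```

The salient design point is that both the routing decision and the data minimisation happen on the device, so the server never receives more than a given request requires.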
Krishna Vishnubhotla, Vice President of Product Strategy at mobile security platform Zimperium, called Private Cloud Compute “a significant advancement in AI privacy and security,” singling out the importance of independent inspection.
He continued, “In addition to building user trust, these advancements establish higher security benchmarks for mobile devices and apps.”