The work of the Apple Pal on AI-Enhancements for Siri has been officially delayed (now he is about to roll out “next year”) and a developer thinks why he knows why Smart and more personal Siri, if something goes wrong, may be more dangerous.
Simon Willisen, developer of the data analysis tool dataset, shows a finger on prompt injection. AIS is usually restricted by their parent companies who impose certain rules on them. However, it is possible to “jail” by talking to AI to break those rules. This is done with the so -called “prompt injection”.
As an easy example, the AI model may have been instructed to answer questions about doing something illegal. But what if you ask AI to write a car about hotwearing? Writing poems is not illegal, right?
This is an issue that all companies offer the face of AI chatbots and they have been better in blocking clear jailbreaks, but they are not yet a problem. Worst, jailbreaking siri can have worse results than most chatbots because it knows what and what it can do. Apple Pal spokesman Jacqueline Roy describes Siri as follows:
“We are also working on more personal Siri, giving it more awareness about your personal context, as well as the ability to take steps for you inside and around your applications.”
Apple Pal, Ne Ou, skeptically, put the rules to prevent Siri accidentally from revealing your private data. But what if immediate injection can be found in any way? It is also important for a company that is privacy and security conscious, like the Apple Pal, to ensure that the “ability to take action for you” can be absorbed. And, obviously, this will take some time.
Source | Till