Preserving Privacy and Security in a Generative AI World
AI and privacy: How on-device AI enhances privacy and security
Welcome to AI on the Edge, our new OnQ series that delivers the latest on-device artificial intelligence insights and trends. Hear from our most active subject matter experts on the dynamic, ever-expanding field of AI.
The rapid adoption of generative artificial intelligence (AI) has opened a new world of explosive creativity, convenience and productivity. With large language models (LLMs) and language-vision models (LVMs) creating a wide variety of content, such as more precise search results, beautiful pieces of art, personalized advertising campaigns and new software code, generative AI is already delivering on these promises.
However, must it come at a cost to privacy and security?
Are AI and privacy at odds with one another?
Not necessarily. With on-device generative AI, where the generative AI model runs on your personal device, like a smartphone, personal computer (PC) or extended reality headset, you can get the best of both worlds: AI with privacy and security at the same time.
When running generative AI models hosted in the cloud, interactions with those models can become public. Information provided to the models — including the query and context surrounding it, or data used to fine-tune the model — can be exposed, creating concerns around AI and privacy.
For enterprise use cases, this includes any proprietary data or source code that is either used as a query to the model or generated by the model, a situation that is clearly unacceptable.
On-device generative AI can mitigate these AI privacy and security issues.
Why on-device AI helps with data privacy and security
On-device AI helps protect users’ information because queries involving personal data remain on the device. On-device security features, such as data and communications encryption and password- and biometric-protected access, are already trusted on edge devices like smartphones and PCs to protect sensitive personal and corporate information.
Consequently, generative AI models hosted on device can rely on those same security features to improve data security and privacy for queries and outputs. Because inference, and in some cases fine-tuning, uses on-device memory, storage and processing resources, the models can also draw on local data to increase the personalization and accuracy of both their inputs and outputs with a similar level of trust.
Travel convenience with on-device generative AI
Consider the following example: A user is traveling and looking for good dinner options. Even with a non-generative AI solution, devices already use the user’s current location to search the Internet and provide nearby dining options. With a generative AI-based solution, however, the user might want the chat assistant not only to look for good dinner options but also to use personal data, like food and restaurant rating preferences, food allergies, meal plan data, budget and calendar information, to select a nearby four-star restaurant with nutritional options compatible with the user’s meal plan.
Once a suitable option is found, the user might then want the assistant to reserve a table at a time that is open on the user’s calendar. In this situation, the assistant only reaches out to the cloud to fetch the list of candidate restaurants and to make the actual reservation, keeping the queries and all personal information secure and private.
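The split described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a real assistant API: the personal data and all reasoning over it stay on the device, and the only cloud-bound request is a generic nearby-restaurant search that carries no personal preferences.

```python
# Hypothetical sketch: personal data never leaves the device; only a
# generic location-based query would go to the cloud.

PERSONAL_DATA = {  # stays on device
    "allergies": {"peanuts"},
    "min_rating": 4.0,
    "budget_per_person": 40,
}

def fetch_nearby_restaurants(location):
    """Stand-in for the single cloud call: a generic nearby-restaurant
    search. The request contains no personal preferences."""
    return [
        {"name": "Trattoria Roma", "rating": 4.5, "price": 35, "allergens": set()},
        {"name": "Peanut Palace", "rating": 4.8, "price": 25, "allergens": {"peanuts"}},
        {"name": "Budget Bites", "rating": 3.2, "price": 15, "allergens": set()},
    ]

def pick_restaurant(candidates, prefs):
    """On-device filtering against private preferences."""
    ok = [
        r for r in candidates
        if r["rating"] >= prefs["min_rating"]
        and r["price"] <= prefs["budget_per_person"]
        and not (r["allergens"] & prefs["allergies"])
    ]
    return max(ok, key=lambda r: r["rating"]) if ok else None

choice = pick_restaurant(fetch_nearby_restaurants("downtown"), PERSONAL_DATA)
print(choice["name"])  # only the final reservation request would go back out
```

The design point is the boundary, not the filtering logic: the cloud sees a coarse location query and, later, one reservation request, while allergies, budget and preferences remain local.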
Software development assistant with on-device generative AI
Another example showing the benefit of on-device generative AI is a software developer who needs to create new source code for a product. In order to accomplish this, proprietary company data, as well as existing code, will be required as input to the generative AI model. Again, it is easy to see how a coding assistant running solely on the developer’s laptop would help ensure that the proprietary intellectual property is not exposed to risks outside of the company’s cybersecurity tolerance.
Retirement planning assistant with on-device generative AI
Another wide-reaching example of the need for AI privacy is in retirement planning. In the United States alone, it is predicted that by 2030 all members of the baby boomer generation will be at least 65 years old — a portion of the population that is estimated to be about 73 million people.1 Immediately following that wave of retirees are multiple generations who have come to understand the importance of a well-funded retirement portfolio. As more people approach retirement age globally and as retirees’ life expectancy continues to increase, the cost to retire is likewise increasing. Personal portfolio management will be critical in maximizing returns on investment, and given this growing need, qualified financial advisors are likely to be inundated. On-device AI could help alleviate this by putting a retirement planning assistant in the palm of an investor’s hand. The assistant could educate the investor and provide the first few levels of support, streamlining the process once a qualified financial advisor becomes involved.
Using a conversational interface, the investor could provide the assistant with their current personal financial position in terms of age, savings, current investments, real estate, income, expenses, risk tolerance and investment goals. Based on this information, the assistant could come back with questions to further refine the input parameters. Given these parameters, the assistant could then provide educational information, investment strategies, recommended funds and other investment vehicles to consider. The assistant could also provide scenario analysis using both conversational and graphical outputs based on questions from the investor such as “What if I live into my 90s?” or “I just got a new job, how does this affect my current plan?”
The assistant could then use the investor’s location, level of investment and risk tolerance to provide a list of nearby financial professionals to help the investor take these preliminary strategies, refine them and ultimately implement them.
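A scenario question like “What if I live into my 90s?” reduces to a projection the assistant could compute entirely on device from private financial data. The sketch below is purely illustrative: the flat-return growth model and every number in it are assumptions for the example, not financial advice or a real planning algorithm.

```python
# Illustrative on-device scenario projection (assumed numbers, flat
# annual return). All inputs are private data that stays on the device.

def project_balance(balance, annual_contribution, annual_return,
                    annual_spend, years_to_retirement, years_in_retirement):
    """Project a portfolio through accumulation, then drawdown.
    Returns (final balance, number of retirement years funded)."""
    for _ in range(years_to_retirement):      # accumulation phase
        balance = balance * (1 + annual_return) + annual_contribution
    for year in range(years_in_retirement):   # drawdown phase
        balance = balance * (1 + annual_return) - annual_spend
        if balance <= 0:
            return 0.0, year + 1              # funds exhausted early
    return balance, years_in_retirement

# "What if I live into my 90s?": retire at 65, plan through age 95.
final, years = project_balance(
    balance=250_000, annual_contribution=15_000, annual_return=0.05,
    annual_spend=60_000, years_to_retirement=15, years_in_retirement=30)
print(f"Funds last {years} of 30 retirement years")
```

The assistant could rerun this loop with adjusted parameters for each follow-up question, such as a new salary or a different retirement age, and render the year-by-year balances graphically, all without any of the figures leaving the device.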
Security and privacy in a generative AI world is key for building consumer trust
In all of these examples, it is easy to see how the user might not want a cloud-hosted chatbot to access such private information but would be comfortable letting an on-device generative AI model make decisions based on the local information available. Running generative AI models on device allows the user to take advantage of the benefits of the models without exposing personal or proprietary information.
Users may prefer that not only the results but also the data contained in the prompts that initiate the queries be protected. On-device inference gives users an opportunity to enjoy AI without exposing their data to cloud-hosted models.
When generative AI models run on device, existing technological protections can be leveraged so that on-device personal and corporate information can serve as input to these models without the security and privacy concerns associated with cloud-hosted models. On-device generative AI not only realizes the promise of increased creativity, convenience and productivity, but also improves upon what generative AI models running solely in the cloud can deliver.
This raises the question: How do we as an industry enable on-device generative AI? Stay tuned for future blog posts where AI on the Edge will further explore the factors that will accelerate on-device generative AI adoption.
1. America Counts Staff (Dec 10, 2019). 2020 Census Will Help Policymakers Prepare for the Incoming Wave of Aging Boomers. Retrieved on Sep 5, 2023 from https://www.census.gov/library/stories/2019/12/by-2030-all-baby-boomers-will-be-age-65-or-older.html