90% of People Don’t Trust AI with Their Data

People are using AI, but they don’t trust it. In our latest privacy pulse survey, we gathered 1,200 responses from readers of the Malwarebytes newsletter earlier this year. A staggering 90% of respondents expressed concerns about AI using their data without consent. This worry is reshaping how individuals interact with the internet: 88% do not share personal information freely with AI tools like ChatGPT and Gemini, while 84% have refrained from sharing personal health information.

Additionally, 43% of respondents have stopped using ChatGPT, and 42% have stopped using Gemini. This distrust didn’t begin with AI; concern over personal information is long-standing. The survey found that 92% worry about their data being misused by corporations, up slightly from 89% in 2025, and 74% are anxious about government access to their personal data.

Years of data breaches and questionable tracking practices have eroded our confidence in organizations to safeguard our information. AI tools are perceived differently because of the nature of our interactions. When we share thoughts, meeting notes, and personal dilemmas with an AI assistant, it feels intimate and conversational, even though we know we’re interacting with a bot. This makes the uncertainty surrounding AI’s data handling more personal and immediate.

Many questions remain unanswered: Where are our prompts stored? Are they used to train the AI? How long are they retained? Can anyone within the company access them? Can they be sold or leaked? Meanwhile, companies are rushing to ship AI features without adequate security checks.

However, there is a glimmer of hope as individuals take action. The survey found that 63% of respondents feel resigned to the idea that their personal data is already out there and cannot be reclaimed; last year that figure was 74%, indicating a slight decline in feelings of helplessness. Respondents reported practical steps to limit their data exposure, including reducing or stopping their use of certain platforms over privacy concerns: 44% have stopped using Instagram, 37% Facebook, and 49% TikTok. Others are sharing less personal information online or avoiding sensitive topics in digital conversations altogether.

There is also increased adoption of privacy-protective tools: 46% use a VPN (up from 42% in 2025), 71% use an ad blocker while browsing (up from 69%), 76% use multi-factor authentication (up from 69%), and 82% opt out of data collection whenever possible (up from 75%). While these actions do not erase historical data trails, they help limit new exposure.

This post is licensed under CC BY 4.0 by the author.