Biggest risks of gen AI in your private life: ChatGPT, Gemini, Copilot

Many consumers are enamored with generative AI, using new tools for all sorts of personal or business matters. 

But many ignore the potential privacy ramifications, which can be significant.

From OpenAI’s ChatGPT to Google’s Gemini to Microsoft’s Copilot and the new Apple Intelligence, AI tools for consumers are easily accessible and proliferating. However, the tools have different privacy policies governing how user data is used and how long it is retained. In many cases, consumers aren’t aware of how their data is, or could be, used.

That’s where being an informed consumer becomes exceedingly important. The controls available, and how granular they are, vary from tool to tool, said Jodi Daniels, chief executive and privacy consultant at Red Clover Advisors, which consults with companies on privacy matters. “There’s not a universal opt-out across all tools,” Daniels said.

The proliferation of AI tools, and their integration into so much of what consumers do on their personal computers and smartphones, makes these questions even more pertinent. A few months ago, for example, Microsoft released its first Surface PCs featuring a dedicated Copilot button on the keyboard for quickly accessing the chatbot, following through on a promise made several months earlier. For its part, Apple last month outlined its vision for AI, which revolves around several smaller models that run on Apple’s devices and chips. Company executives spoke publicly about the importance the company places on privacy, which can be a challenge to get right with AI models.

Here are several ways consumers can protect their privacy in the new age of generative AI.

Ask AI the privacy questions it must be able to answer

Before choosing a tool, consumers should read the associated privacy policies carefully. How is your information used, and how might it be used? Is there an option to turn off data sharing? Is there a way to limit what data is used and how long it is retained? Can data be deleted? Do users have to jump through hoops to find opt-out settings?

It should raise a red flag if you can’t readily answer these questions, or find answers to them within the provider’s privacy policies, according to privacy professionals.

“A tool that cares about privacy is going to tell you,” Daniels said.

And if it doesn’t, “You have to have ownership of it,” Daniels added. “You can’t just assume the company is going to do the right thing. Every company has different values and every company makes money differently.”

She offered the example of Grammarly, an editing tool used by many consumers and businesses, as a company that clearly explains in several places on its website how data is used.

Keep sensitive data out of large language models

Some people are very trusting when it comes to plugging sensitive data into generative AI models, but Andrew Frost Moroz, founder of the privacy-focused Aloha Browser, recommends against entering any type of sensitive data, since there’s no real way to know how it could be used, or misused.

This is true for all types of information people might enter, whether personal or work-related. Many corporations have expressed significant concerns about employees using AI models to help with their work, because workers may not consider whether that information will be used to train the model. If you enter a confidential document, the model now has access to it, which could raise all sorts of confidentiality concerns. For that reason, many companies will approve only custom versions of gen AI tools that keep a firewall between proprietary information and large language models.
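To make the idea concrete, here is a minimal Python sketch of the kind of guardrail such a firewall might include: scrubbing obviously sensitive patterns out of text before it is ever sent to a model. The patterns and function names here are illustrative assumptions, not any vendor’s actual implementation; a real deployment would rely on a vetted PII-detection library and organization-specific policies.

```python
import re

# Illustrative patterns only. Real systems use far more robust detection,
# including named-entity recognition for names and addresses.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Summarize: contact Jane at jane.doe@example.com or 555-123-4567."
print(redact(prompt))  # only the scrubbed version would go to the model
```

Even a filter like this misses plenty (names, account numbers in unusual formats, context that identifies someone indirectly), which is why privacy professionals advise keeping sensitive material out of prompts entirely.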

Individuals should also err on the side of caution and avoid using AI models for anything non-public, or anything they wouldn’t want shared with others in any capacity, Frost Moroz said. Awareness of how you’re using AI is important. If you’re using it to summarize an article from Wikipedia, that might not be an issue. But if you’re using it to summarize a personal legal document, for example, that’s not advisable. Or say you have an image of a document and want to copy a particular paragraph: you can ask AI to read the text so you can copy it, but the model will then know the content of the document, so consumers need to keep that in mind, he said.
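If the goal is simply to pull text out of a document image, one lower-risk alternative is to run optical character recognition locally rather than uploading the image to a cloud model. A minimal sketch, assuming the open-source Tesseract engine is installed along with the pytesseract and Pillow packages (the file name is hypothetical):

```python
# Requires a local Tesseract install, plus: pip install pytesseract pillow
from PIL import Image
import pytesseract

# OCR runs entirely on your own machine; the document never leaves it.
image = Image.open("contract_page.png")  # hypothetical file name
text = pytesseract.image_to_string(image)

print(text)  # copy the paragraph you need without sharing the document
```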

Use opt-outs offered by OpenAI, Google

Each gen AI tool has its own privacy policy and may offer opt-out options. Gemini, for example, allows users to set a retention period and delete certain data, among other activity controls.

Users can opt out of having their data used for model training by ChatGPT. To do this, they need to navigate to the profile icon at the bottom-left of the page and select Data Controls under the Settings header, then disable the feature labeled “Improve the model for everyone.” While this is disabled, new conversations won’t be used to train ChatGPT’s models, according to an FAQ on OpenAI’s website.
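That setting covers the consumer ChatGPT app. Developers who reach the same models through OpenAI’s API have a separate control surface; OpenAI’s documentation says API traffic isn’t used for training by default. As a minimal sketch, assuming the official openai Python package and an API key in the environment, the store flag below controls whether a completion is retained in OpenAI’s dashboard for later retrieval:

```python
# pip install openai  (assumes OPENAI_API_KEY is set in the environment)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is just an example
    messages=[{"role": "user", "content": "Summarize this public text..."}],
    store=False,  # don't retain this completion for later retrieval
)
print(response.choices[0].message.content)
```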

There’s no real upside for consumers in allowing gen AI to train on their data, and there are risks that are still being studied, said Jacob Hoffman-Andrews, a senior staff technologist at the Electronic Frontier Foundation, an international nonprofit digital rights group.

If personal data is improperly published on the web, consumers may be able to have it removed, after which it will disappear from search engines. But untraining AI models is a whole different ball game, he said. There may be ways to mitigate the use of certain information once it’s in an AI model, but it’s not foolproof, and how to do this effectively is an area of active research, he said.

Opt in, such as with Microsoft Copilot, only for good reasons

Companies are integrating gen AI into everyday tools people use in their personal and professional lives. Copilot for Microsoft 365, for example, works within Word, Excel and PowerPoint to help users with tasks like analytics, idea generation, organization and more.

For these tools, Microsoft says it doesn’t share consumers’ data with third parties without permission, and it doesn’t use customer data to train Copilot or its AI features without consent.

Users who do want to opt in can do so by signing in to the Power Platform admin center, selecting Settings, then Tenant settings, turning on data sharing for Dynamics 365 Copilot and Power Platform Copilot AI Features, and saving.

Advantages to opting in include the ability to make existing features more effective. The drawback, however, is that consumers lose control of how their data is used, which is an important consideration, privacy professionals say.

The good news is that consumers who have opted in with Microsoft can withdraw their consent at any time by returning to the Tenant settings page under Settings in the Power Platform admin center and turning off the data-sharing toggle for Dynamics 365 Copilot and Power Platform Copilot AI Features.

Set a short retention period when using generative AI for search

Consumers might not think much before they seek out information with AI, using it the way they would a search engine to generate information and ideas. But searching for certain types of information with gen AI can be intrusive to a person’s privacy, so there are best practices for that use as well. If possible, set a short retention period for the gen AI tool, Hoffman-Andrews said, and delete chats once you’ve gotten the information you were after.

Companies still keep server logs, but deleting chats can reduce the risk of a third party getting access to your account, he said. It may also reduce the risk of sensitive information becoming part of model training. “It really depends on the privacy settings of the particular site,” he said.
