What You Need to Know About Grok AI and Your Privacy

But X also makes it clear the onus is on the user to judge the AI’s accuracy. “This is an early version of Grok,” xAI says on its help page, warning that the chatbot may “confidently provide factually incorrect information, missummarize, or miss some context.”

“We encourage you to independently verify any information you receive,” xAI adds. “Please do not share personal data or any sensitive and confidential information in your conversations with Grok.”

Grok Data Collection

The sheer scale of data collection is another area of concern—especially since you are automatically opted in to sharing your X data with Grok, whether or not you use the AI assistant.

xAI’s Grok Help Center page states that the company “may utilize your X posts as well as your user interactions, inputs and results with Grok for training and fine-tuning purposes.”

Grok’s training strategy carries “significant privacy implications,” says Marijus Briedis, chief technology officer at NordVPN. Beyond the AI tool’s “ability to access and analyze potentially private or sensitive information,” Briedis adds, there are additional concerns “given the AI’s capability to generate images and content with minimal moderation.”

Grok-1 was trained on “publicly available data up to Q3 2023” and was not “pre-trained on X data (including public X posts),” according to the company. Grok-2, by contrast, has been explicitly trained on all “posts, interactions, inputs, and results” of X users, with everyone automatically opted in, says Angus Allan, senior product manager at CreateFuture, a digital consultancy specializing in AI deployment.

The EU’s General Data Protection Regulation (GDPR) is explicit about obtaining consent to use personal data. In this case, xAI may have “ignored this for Grok,” says Allan.

This led EU regulators to pressure X into suspending training on EU users’ data within days of Grok-2’s launch last month.

Failure to abide by user privacy laws could lead to regulatory scrutiny in other countries. While the US doesn’t have a similar regime, the Federal Trade Commission has previously fined Twitter for not respecting users’ privacy preferences, Allan points out.

Opting Out

One way to prevent your posts from being used to train Grok is to make your account private. You can also use X’s privacy settings to opt out of future model training.

To do so, select Privacy & Safety > Data sharing and personalization > Grok. In the Data sharing section, uncheck the option that reads, “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning.”

Even if you no longer use X, it’s still worth logging in and opting out. X can use all of your past posts—including images—for training future models unless you explicitly tell it not to, Allan warns.

It’s possible to delete all of your conversation history at once, xAI says. Deleted conversations are removed from its systems within 30 days, unless the firm has to keep them for security or legal reasons.

No one knows how Grok will evolve, but judging by its actions so far, Musk’s AI assistant is worth monitoring. To keep your data safe, be mindful of the content you share on X and stay informed about any updates in its privacy policies or terms of service, Briedis says. “Engaging with these settings allows you to better control how your information is handled and potentially used by technologies like Grok.”
