TechScape: Why I can’t stop writing about Elon Musk

“I hope I don’t have to cover Elon Musk again for a while,” I thought last week after I sent TechScape to readers. Then I got a message from the news editor. “Can you keep an eye on Elon Musk’s Twitter feed this week?”

I ended up doing a close reading of the world’s most powerful posting addict, and my brain turned to liquid and trickled out of my ears:

His shortest overnight break, on Saturday night, saw him logging off after retweeting a meme comparing London’s Metropolitan police force to the Nazi SS, before bounding back online four and a half hours later to retweet a crypto influencer complaining about jail terms for Britons attending protests.

But somehow I was still surprised by what I found. I knew the rough contours of Musk’s internet presence from years of covering him: a three-way split between shilling his real businesses, Tesla and SpaceX; eager reposting of bargain-basement nerd humour; and increasingly rightwing political agitation.

Following Musk in real time, though, revealed the ways his chaotic mode has been warped by his shift to the right. His promotion of Tesla is increasingly inflected in culture war terms, with the Cybertruck in particular pitched in language that makes it sound as if buying one will help defeat the Democrats in the US presidential election this November. The bargain-basement nerd humour mentioned above is tinged with anger at the world for not thinking he’s the coolest person in it. And the rightwing political agitation is increasingly extreme.

Musk’s involvement in the disorder in the UK seems to have pushed him further into the arms of the far right than ever before. This month he tweeted for the first time at Lauren Southern, a far-right Canadian internet personality most famous in the UK for earning a visa ban from Theresa May’s government over her Islamophobia. More than just tweet: he also supports her financially, sending around £5 a month through Twitter’s subscription feature. Then there was the headline-grabbing retweet of Britain First’s co-leader. On its own, that could have been chalked up to Musk not knowing the pond in which he was swimming; two weeks on, the pattern is clearer. These are his people, now.

Well, that’s OK then

A neat example of the difference between scientific press releases and scientific papers, today from the AI world. The press release, from the University of Bath:

AI poses no existential threat to humanity – new study finds.

LLMs have a superficial ability to follow instructions and excel at proficiency in language, however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.

The paper, from Lu et al:

Large language models, comprising billions of parameters and pre-trained on extensive web-scale corpora, have been claimed to acquire certain capabilities without having been specifically trained on them … We present a novel theory that explains emergent abilities, taking into account their potential confounding factors, and rigorously substantiate this theory through over 1,000 experiments. Our findings suggest that purported emergent abilities are not truly emergent, but result from a combination of in-context learning, model memory, and linguistic knowledge.

Our work is a foundational step in explaining language model performance, providing a template for their efficient use and clarifying the paradox of their ability to excel in some instances while faltering in others. Thus, we demonstrate that their capabilities should not be overestimated.

The press release version of this story has gone viral, for predictable reasons: everyone likes seeing Silicon Valley titans punctured, and AI existential risk has become a divisive topic in recent years.

But the paper is several steps short of the claim the university’s press office wants to make about it. Which is a shame, because what the paper does show is interesting and important anyway. There is lots of focus on so-called “emergent” abilities with frontier models: tasks and capabilities that didn’t exist in the training data but which the AI system demonstrates in practice.

Those emergent abilities are concerning to people who worry about existential risk, because they suggest that AI safety is harder to guarantee than we’d like. If an AI can do something it’s not been trained to do, then there’s no easy way to guarantee a future AI system is safe: you can leave things out of the training data but it might work out how to do them anyway.

The paper demonstrates that, at least in some situations, those emergent abilities are nothing of the sort. Instead, they’re an outcome of what happens when you take an LLM like GPT and hammer it into the shape of a chatbot, before asking it to solve problems in the form of a question-and-answer conversation. That process, the paper suggests, means the chatbot can never truly be given “zero-shot” questions, where it has seen no examples of the task at hand: the art of prompting ChatGPT is inherently one of teaching it a little about what form the answer should take.
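
To see why, here’s a minimal sketch (the prompts and the arithmetic task are invented for illustration, not drawn from the paper): even an ostensibly zero-shot chat prompt demonstrates the task and the shape of the expected answer, which is the in-context learning the authors argue accounts for apparent emergence.

```python
# A truly zero-shot input: raw text, with no hint of what is wanted.
zero_shot = "617 + 248"

# What a chat prompt actually does: the instruction and the Q/A frame
# demonstrate both the task and the expected form of the output, so the
# model is doing in-context learning rather than showing a new ability.
chat_prompt = (
    "You are a helpful assistant.\n"
    "Q: What is 617 + 248? Answer with just the number.\n"
    "A:"
)

# An explicit few-shot prompt just makes the same mechanism visible.
few_shot = (
    "Q: What is 12 + 30? A: 42\n"
    "Q: What is 617 + 248? A:"
)

for name, prompt in [("zero-shot", zero_shot),
                     ("chat", chat_prompt),
                     ("few-shot", few_shot)]:
    print(f"--- {name} ---\n{prompt}\n")
```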

It’s an interesting finding! Not quite one that proves the AI apocalypse is impossible, but – if you want some good news – one that suggests it’s unlikely to happen tomorrow.

Training pains


Nvidia scraped YouTube to train its AI systems. Now that’s coming back to bite it:

A federal lawsuit alleges that Nvidia, which focuses on designing chips for AI, took YouTube creator David Millette’s videos for its AI-training work. The suit charges Nvidia with “unjust enrichment and unfair competition” and seeks class action status to include other YouTube content creators with similar claims.

Nvidia unlawfully ‘scraped’ YouTube videos to train its Cosmos AI software, according to the suit, filed Wednesday in the Northern District of California. Nvidia used software on commercial servers to evade YouTube’s detection to download ‘approximately 80 years’ worth of video content per day’, the lawsuit says, citing an Aug 5 404 Media report.

This lawsuit is unusual in the AI world, if for no other reason than that Nvidia was somewhat reticent about its sources of training data. Most AI companies that have faced lawsuits have been proudly open about their disregard for copyright limitations. Take Stable Diffusion, which openly drew its training data from the open-source LAION dataset. Well:

[Judge] Orrick found the artists had reasonably argued that the companies violate their rights by illegally storing work and that Stable Diffusion, the AI image generator in question, may have been built ‘to a significant extent on copyrighted works’ and was ‘created to facilitate that infringement by design’.

Of course, not every AI company plays on a level field here. Google has a unique advantage: everyone gives it consent to train its AI on their material. Why? Because otherwise you get booted off search entirely:

Many site owners say they can’t afford to block Google’s AI from summarising their content.

That’s because the Google tool that sifts through web content to come up with its AI answers is the same one that keeps track of web pages for search results, according to publishers. Blocking Alphabet Inc’s Google the way sites have blocked some of its AI competitors would also hamper a site’s ability to be discovered online.
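
The mechanics of that bind show up in robots.txt itself. Here’s a minimal sketch using Python’s standard-library robots.txt parser; the user-agent tokens are real, documented ones (Google-Extended is Google’s opt-out token for generative-AI training, Googlebot its ordinary search crawler), but the publisher’s robots.txt file is hypothetical and the example is illustrative rather than a recommendation:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical publisher's robots.txt. "Google-Extended" is Google's
# documented opt-out token for generative-AI training; "Googlebot" is
# the crawler that feeds ordinary search - and, per the publishers
# quoted above, the AI answers built on top of the same crawl.
robots_txt = """\
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Blocking Google-Extended opts out of model training...
print(parser.can_fetch("Google-Extended", "https://example.com/story"))  # False

# ...but Googlebot is still welcome, so the same pages keep flowing
# into the index that powers both search results and AI summaries.
# Disallowing Googlebot as well would stop the summaries - and drop
# the site out of search entirely, which is the trade-off above.
print(parser.can_fetch("Googlebot", "https://example.com/story"))  # True
```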

Ask me anything

What was I thinking? Ask me this and any other tech-related question.

One more, self-indulgent, note. After 11 years, I’m leaving the Guardian at the end of this month, and 2 September will be my last TechScape. I’ll be answering reader questions, big and small, as I sign off, so if there’s anything you’ve ever wanted an answer on, from tech recommendations to industry gossip, then hit reply and drop me an email.

If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Tuesday.
