Career consultant says Gen Z are misusing AI to generate cover letters

Picture Alliance | Getty Images

Gen Z are digital natives and have quickly adopted AI, using it for everything from assignment research to planning holidays.

But, it seems, they’ve been making mistakes along the way.

Shoshana Davis, a Gen Z career expert and founder of the career consultancy Fairy Job Mother, told CNBC Make It in an interview that the generation (generally defined as those born between 1996 and 2012) have become too reliant on AI tools like ChatGPT to generate cover letters and job application answers.

“So I speak to businesses and employers who hire anything from like 10 to 1000s of Gen Z every year,” Davis said. “And one of the main challenges that I’m seeing at the moment is the use of AI, specifically ChatGPT, and it’s not being used in the right way, and it’s not being used effectively.”

Davis explained that “employers are getting hundreds of the exact same cover letters word for word,” or answers to job application questions that are the same, and suspect that ChatGPT use is in play.

In fact, 45% of job seekers have used AI to build, update, or improve their resume, according to a Canva survey of 5,000 hiring managers and 5,000 job seekers from the U.K., U.S., India, Germany, Spain, France, Mexico and Brazil, published in January.

And it appears that Gen Z is leaning the most on AI, according to a February Grammarly survey of 1,002 knowledge workers and 253 business leaders. It reported that 61% of Gen Z respondents said they can't imagine doing work tasks without using generative AI, the highest share of any generation.

Davis said that people should "embrace technology and AI," but warned that copying answers straight from ChatGPT can hurt your chances of getting a job.

A Resume Genius survey of 625 hiring managers found that over half disliked AI-generated resumes and considered them a red flag that made them less likely to hire a candidate.

‘100 identical responses’

One of the reasons why copying ChatGPT’s responses is an ineffective way of using AI is that the chatbot does not always provide reliable information.

One initial issue with ChatGPT was that its knowledge base was limited to data released before September 2021, but this was resolved in September 2023, its owner OpenAI announced on X.

“ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers,” it says on the company website. “It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.”

Davis shared a recent story from an employer she works with who was hiring for a brand marketing position and had asked candidates, as part of the job application, to name their favorite fitness-related product launch of the past year.

“They said they got about 100 identical responses of ‘my favorite campaign launch was Peloton’ and the employer was like ‘ultimately that was ChatGPT, but then also equally Peloton was released like four or five years ago’,” Davis said. The employer was referring to a Peloton ad campaign from 2020.

Davis said that young people “need to educate themselves” on how to use ChatGPT properly, rather than simply copying its answers.

‘It should be used as a tool, not a replacement’
