April 20, 2024
AI

AI is fundamentally a “labor replacement tool”

Welcome to AI This Week, Gizmodo’s weekly deep dive into what’s been happening in artificial intelligence.

For months I have been insisting on one particular point, which is that AI tools, as they are currently being deployed, are mostly good at one thing: replacing human workers. The "AI revolution" has been primarily a corporate one, an insurgency against the rank and file in which businesses leverage the new technology to shrink their overall workforces. The biggest AI vendors have been very open about this, admitting again and again that new forms of automation will allow human jobs to be handed off to software.

We got another dose of that this week, when DeepMind co-founder Mustafa Suleyman sat down for an interview with CNBC. Suleyman was in Davos, Switzerland, for the World Economic Forum's annual meeting, where AI was reportedly the most popular topic of conversation. During the interview, news anchor Rebecca Quick asked Suleyman whether AI was "going to replace humans in the workplace in massive numbers."

Suleyman's response was: "I think in the long term (over many decades) we have to think very carefully about how we integrate these tools because, left completely in the hands of the market... they are fundamentally labor-replacing tools."

And there it is. Suleyman makes this sound like a hazy future hypothetical, but it is obvious that this kind of "labor replacement" is already happening. The technology and media industries, which are exceptionally exposed to AI-related job losses, saw huge layoffs last year, just as AI was "coming online." In the first few weeks of January alone, well-established companies like Google, Amazon, YouTube, and Salesforce announced aggressive layoffs that have been explicitly linked to the wider deployment of AI.

The general consensus in corporate America appears to be that companies should use AI to run leaner teams, bolstered by small groups of AI-savvy professionals. These AI specialists will become an increasingly sought-after class of worker, since they offer companies the chance to reorganize their structures around automation, thus making them more "efficient."

For businesses, the benefits of this are obvious. You don't have to pay a software program a salary or give it health benefits. It will not get pregnant and need six months off to care for a newborn, nor will it ever grow dissatisfied with its working conditions and try to start a union drive in the break room.

The billionaires marketing this technology have made vague rhetorical gestures toward things like universal basic income as a cure for the worker displacement that will inevitably occur, but only a fool would think these are anything more than empty promises designed to head off some kind of uprising from the lower classes. The truth is that AI is a technology created by and for the world's managers. The frenzy in Davos this week, where the world's richest fawned over it like Greek peasants discovering Promethean fire, is just the latest reminder of that.


Photo: Stefan Wermuth/Bloomberg (Getty Images)

Question of the day: What is OpenAI’s excuse for becoming a defense contractor?

The short answer to that question is: not a very good one. This week, it was revealed that the influential AI organization was working with the Pentagon to develop new cybersecurity tools. OpenAI had previously promised not to get into the defense business. Now, after a quick edit to its terms of service, the billion-dollar company is moving full steam ahead on developing new toys for the world's most powerful military. When confronted about this rather drastic about-face, the company's response was basically: ¯\_(ツ)_/¯ ... "Because we previously had what was essentially a blanket ban on military use, a lot of people thought that would prohibit a lot of these use cases, which people think are very aligned with what we want to see in the world," a company spokesperson told Bloomberg. I'm not sure what the hell that means, but it doesn't sound particularly convincing. Of course, OpenAI is not alone. Plenty of companies are now rushing to market their AI services to the defense community. It makes sense that a technology that has been called the "most revolutionary technology" seen in decades would inevitably be absorbed into the US military-industrial complex. Given what other countries are already doing with AI, I imagine this is just the beginning.

More headlines this week

  • FDA approves new AI-powered device that helps doctors look for signs of skin cancer. The Food and Drug Administration has given its approval to something called DermaSensor, a handheld device that doctors can use to scan patients for signs of skin cancer. The device leverages AI to perform "rapid assessments" of skin lesions and judge whether or not they look healthy. While there are plenty of silly uses for AI, experts say the technology could actually be quite useful in the medical field.
  • OpenAI is establishing ties with higher education. OpenAI has been extending its tentacles into every stratum of society, and the latest sector to be touched is higher education. This week, the organization announced that it had forged a partnership with Arizona State University. As part of the deal, ASU will get full access to ChatGPT Enterprise, the business-grade version of the company's chatbot. ASU also plans to create a "personalized AI tutor" that students can use for help with their schoolwork. The university is also planning a "prompt engineering" course, which I assume will help students learn how to ask a chatbot a question. Useful stuff!
  • The internet is already infested with AI-generated garbage. A new report from 404 Media shows that Google is algorithmically promoting AI-generated content from a large number of dubious websites. Those websites, the report shows, are designed to hoover up content from other, legitimate websites and then repackage it using algorithms. The whole scheme revolves around automating content production to generate advertising revenue. That regurgitated garbage is then boosted by Google's news algorithm to appear in search results. Joseph Cox writes that the "presence of AI-generated content in Google News indicates" that "Google may not be prepared to moderate its news service in the era of consumer-facing AI."
