ChatGPT, Bard, Copilot, and many more… As you can see, the number of artificial intelligence tools in our lives is rapidly increasing. Nowadays, companies are working on developing their own AI chatbots or integrating AI capabilities into their services. Of course, this process is much harder than it may seem.

This is because developing an artificial intelligence model requires not only hardware and software expertise but also training it on billions of pieces of information gathered from millions of different sources. However, accessing such information isn’t always straightforward within legal boundaries, as highlighted by The New York Times’ lawsuit against OpenAI and Microsoft for copyright infringement. Here are the details…

The New York Times vs. OpenAI & Microsoft over Copyright Infringement

The New York Times claims that millions of its articles were used without permission to train the language models behind ChatGPT and Microsoft Copilot. The publisher is concerned about the impact on journalism, since AI can produce content far faster than humans. If AI systems leverage data from outlets like The New York Times, they could cause substantial financial harm to these publications.


The company stated that it had contacted both OpenAI and Microsoft but couldn’t reach an agreement. Despite the seemingly unstoppable progress in AI technology, The New York Times has a point on certain issues. First, using copyrighted content for commercial purposes without permission is a clear legal problem. Moreover, low-quality content generated by AI poses a real threat to journalism.

While search engines like Google prioritize original content, it can be difficult to distinguish AI-generated content from human-written content. Artificial intelligence poses a threat to various professions, and journalism appears to be one of the most vulnerable. Of course, what the future holds remains uncertain for now. To learn more, it seems we will have to wait for the outcome of The New York Times’ lawsuit against OpenAI and Microsoft.
