Midjourney Takes on Copyright Police in Exciting AI Showdown

Keeping up with the fast-paced world of AI is no easy task. But until AI can take on this responsibility for us, here’s a convenient wrap-up of recent news and developments in the realm of machine learning.

Last week, AI startup Midjourney quietly changed its terms of service to update its policy on IP disputes. The new language may look like routine legal housekeeping, but it also reflects a growing confidence among AI vendors that they will prevail in legal battles with creators whose work is used to train their models.

According to vendors, using copyrighted material qualifies as fair use so long as the model transforms the original work. But not all creators agree, especially as studies show that AI models can essentially “copy and paste” their training data. Some vendors have proactively secured licensing agreements and established “opt-out” options for their training data, while others have promised to cover customers’ legal fees in copyright lawsuits. Midjourney, by contrast, has been bolder, at one point even compiling a list of artists whose work it planned to use without permission.

Some argue that Midjourney is taking a calculated risk: a ruling against fair use would have dire consequences for the company, and however successful it is now, the legal fees alone could be devastating.

Here are some other notable stories in the AI world from the past few days:

  • Some creators on Instagram called out a director for using someone else’s work in a commercial without credit.
  • EU authorities are stepping up efforts against electoral interference, asking major tech companies to explain how they plan to prevent it.
  • Google DeepMind has trained an AI agent to follow natural-language commands by observing many hours of 3D gameplay.
  • A growing number of AI vendors claim their models outperform the competition, but critics argue that the benchmarks behind those claims are flawed.
  • AI2 Incubator, a spin-off of the nonprofit Allen Institute for AI, has secured a whopping $200 million in funding for its startup program.
  • India’s government is struggling to determine appropriate regulation for the AI industry.
  • AI startup Anthropic has released a new family of models, Claude 3, that it claims rivals OpenAI’s GPT-4.
  • A study from the Center for Countering Digital Hate found a concerning increase in AI-generated disinformation, particularly deepfake images related to elections, on Twitter over the past year.
  • OpenAI plans to ask a court to dismiss all claims made by X CEO Elon Musk in a recent lawsuit, arguing that Musk had minimal impact on the company’s success.
  • Amazon’s new AI chatbot, Rufus, has received criticism for its limited capabilities and performance.

But that’s not the only area where AI is making strides. In the field of molecular research, AI models are proving to be useful tools for understanding and predicting molecular dynamics and conformation. Models like AlphaFold are revolutionizing this field, but their predictions still require verification.
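
For a sense of the kind of computation these models aim to accelerate, here is a minimal sketch of a classical pipeline: generating and relaxing a 3D conformation with RDKit, a real and widely used open-source cheminformatics toolkit. The molecule (caffeine) and the settings are arbitrary choices for illustration, not anything drawn from the research above.

```python
# A classical conformation pipeline: embed 3D coordinates from a SMILES
# string, then relax them with a force field. Illustrative only.
from rdkit import Chem
from rdkit.Chem import AllChem

# Build caffeine from a SMILES string and add explicit hydrogens.
mol = Chem.AddHs(Chem.MolFromSmiles("Cn1cnc2c1c(=O)n(C)c(=O)n2C"))

# Embed an initial 3D conformation, then relax it with MMFF94.
AllChem.EmbedMolecule(mol, randomSeed=42)
AllChem.MMFFOptimizeMolecule(mol)

# Print the optimized atomic coordinates (angstroms).
conf = mol.GetConformer()
for atom in mol.GetAtoms():
    pos = conf.GetAtomPosition(atom.GetIdx())
    print(f"{atom.GetSymbol():2s} {pos.x:8.3f} {pos.y:8.3f} {pos.z:8.3f}")
```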

Microsoft has also introduced a new model called ViSNet, which focuses on predicting structure-activity relationships, that is, how a molecule’s structure relates to its biological activity. While still in its early stages, the model shows promise for researchers looking to tackle complex science problems.
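
As a toy illustration of what structure-activity modeling involves (emphatically not ViSNet itself, whose architecture is far more sophisticated), one can featurize molecules as fingerprints and fit a simple regressor against activity values. Every molecule and activity number below is invented for demonstration.

```python
# Toy structure-activity sketch: Morgan fingerprints plus a random forest.
# All data here is hypothetical; real QSAR work uses curated assay data.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

smiles = ["CCO", "CCN", "CCC", "CCCl", "CCBr", "CC(=O)O"]
activity = [1.2, 2.3, 0.4, 3.1, 3.4, 1.9]  # made-up potency values

def featurize(smi, n_bits=1024):
    """Convert a SMILES string to a fixed-length fingerprint vector."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

X = np.array([featurize(s) for s in smiles])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, activity)

# Predict activity for an unseen analog (ethyl iodide).
print(model.predict([featurize("CCI")]))
```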

In light of the COVID-19 pandemic, researchers at the University of Manchester are using AI to identify and predict new variants of the virus. By analyzing large genetic datasets, they hope to build an early-warning system for emerging variants.
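
The core intuition fits in a few lines: estimate each lineage's growth rate from sequence counts over time and flag the fastest growers. The counts and the alert threshold below are invented; a real surveillance pipeline would fit far richer models to millions of genomes.

```python
# Toy variant early-warning: fit a log-linear growth rate per lineage
# and flag lineages growing faster than a chosen threshold.
import numpy as np

# Weekly sequence counts per lineage (hypothetical data).
counts = {
    "lineage_A": [900, 850, 820, 790],
    "lineage_B": [10, 25, 60, 150],   # rapid exponential growth
    "lineage_C": [80, 78, 85, 81],
}

ALERT_THRESHOLD = 0.5  # arbitrary growth rate per week

for name, series in counts.items():
    weeks = np.arange(len(series))
    # Fit log(count) ~ rate * week + intercept; the slope is the growth rate.
    rate = np.polyfit(weeks, np.log(series), 1)[0]
    flag = "  <-- flag for review" if rate > ALERT_THRESHOLD else ""
    print(f"{name}: growth rate {rate:+.2f}/week{flag}")
```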

But as AI-driven molecular design advances, concerns about safety and ethics have grown. A number of researchers have jointly called for regulation that lets legitimate research continue while targeting malicious actors.

In another fascinating use of AI, atmospheric scientists at the University of Washington have challenged the conventional understanding of emissions in the wake of the Soviet Union’s collapse. By analyzing satellite imagery over Turkmenistan using AI, they discovered that emissions actually increased during that time, contradicting previous assumptions.

Finally, language models are not immune to bias and other limitations. Researchers found that even when translating between two non-English languages, some models still lean heavily on English concepts and representations internally. This underscores the importance of training these models on more linguistically diverse datasets.
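
One common way to probe for this is a “logit lens” style analysis: decode each intermediate layer's hidden state through the model's output embedding and see which tokens surface mid-network. The sketch below applies that idea to GPT-2, chosen only because it is small enough to run anywhere; the research in question examined much larger multilingual models, and the prompt here is an arbitrary example.

```python
# Logit-lens sketch: project each layer's last-token hidden state through
# the final layer norm and unembedding to see what the model "has in mind"
# partway through the network. Illustrative, not the cited study's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

prompt = 'French: "fleur" means German: "'
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the embedding output; the rest are the 12 blocks.
for layer, hidden in enumerate(out.hidden_states):
    normed = model.transformer.ln_f(hidden[:, -1, :])
    logits = model.lm_head(normed)
    top_token = tokenizer.decode(int(logits.argmax(dim=-1)[0]))
    print(f"layer {layer:2d}: top next-token guess = {top_token!r}")
```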

Ava Patel

Ava Patel is a cultural critic and commentator with a focus on literature and the arts. She is known for her thought-provoking essays and reviews, and has a talent for bringing new and diverse voices to the forefront of the cultural conversation.
