The CHIPS Act can be seen as a direct response to a number of pressing geopolitical issues. Those pressures, coupled with long-standing efforts to revitalize U.S. industry, spurred a push to reshore semiconductor manufacturing.
While the CHIPS Act was still winding its way through Capitol Hill, Intel announced plans to open a $10 billion manufacturing facility just outside of Columbus, Ohio.
The company says it expects those efforts to create 20,000 construction jobs and 10,000 manufacturing jobs — music to the ears of an administration keenly focused on monthly jobs reports.
Notably, Intel recently pushed back the manufacturing start date of its New Albany, Ohio, plant two years to 2027, citing changes to the business environment.
This week in AI, Google paused its AI chatbot Gemini’s ability to generate images of people after a segment of users complained about historical inaccuracies.
Google’s gingerly treatment of race-based prompts in Gemini didn’t avoid the issue, per se, so much as disingenuously attempt to conceal the worst of the model’s biases.
Yes, the data sets used to train image generators generally contain more white people than Black people, and yes, the images of Black people in those data sets reinforce negative stereotypes.
That’s why image generators sexualize certain women of color, depict white men in positions of authority and generally favor wealthy Western perspectives.
Whether vendors like Google tackle their models’ biases or choose not to, they’ll be criticized.
Google has apologized (or come very close to apologizing) for another embarrassing AI blunder this week: an image-generating model that injected diversity into pictures with a farcical disregard for historical context.
While the underlying issue is perfectly understandable, Google blames the model for “becoming” over-sensitive.
But if you ask for 10 images and they’re all white guys walking goldens in suburban parks?
Where Google’s model went wrong was that it lacked implicit instructions for situations where historical context was important. That gap, combined with the over-sensitivity Google describes, led the model to overcompensate in some cases and be overly conservative in others, producing images that were embarrassing and wrong.
Substack has industry-leading newsletter tools and a platform that independent writers flock to, but its recent content moderation missteps could prove costly.
Early last year, Substack CEO Chris Best failed to articulate responses to straightforward questions about content moderation from The Verge editor-in-chief Nilay Patel.
The interview came as Substack launched Notes, its own microblogging platform in the vein of Twitter (now X).
In the ongoing Substack fallout, another wave of disillusioned authors is contemplating jumping ship, substantial readerships in tow.
It’s unfortunate that Substack’s writers and readers now have to grapple with yet another form of avoidable precarity in the publishing world.