
Advancing General Purpose Robotics: The Impact of Generative AI

Given the frequency with which their developers toss around the phrase “general purpose humanoids,” more attention ought to be paid to the first bit. After decades of single purpose systems, the jump to more generalized systems will be a big one. The use of generative AI in robotics has been a white-hot subject recently, as well. One of the biggest challenges on the road to general purpose systems is training. The proliferation of multi-purpose systems would take the industry a step closer to the general purpose dream.

Helen Toner warns that an unreliable Congress is leaving AI policy inadequate

Earlier this year, the National Institute of Standards and Technology, which establishes federal technology standards, published a roadmap for identifying and mitigating the emerging risks of AI. But Congress has yet to pass legislation on AI — or even to propose any law as comprehensive as the EU’s recently enacted AI Act. Colorado recently approved a measure that requires AI companies to use “reasonable care” while developing the tech to avoid discrimination. Consider this example: in many state laws regulating AI, “automated decision making” — a term broadly referring to AI algorithms making some sort of decision, like whether a business receives a loan — is defined differently. Toner thinks that even a high-level federal mandate would be preferable to the current state of affairs.

Y Combinator sets its sights on raising its profile among D.C.’s political elite

“So many folks in D.C. don’t actually know what it is,” he remarked. When Graham put out a call for startup applications, a dozen startups got into YC’s debut class. Lowe didn’t confirm whether that was a strategy on Tan’s part, but he praised Tan for his warmness and his dedication. After educating the D.C. market, YC aims to leverage its influence, particularly in areas like competition policy. “And if we don’t do that, then it’s pretty easy to see how this plays out,” Lowe said.

Apple’s updated iPhone policy softens its stance on the right to repair

Apple’s stance on the right to repair has become more accommodating, with the company now supporting used parts for iPhone 15 repairs, including the camera, display and battery. While Apple’s move is welcome to many, it raises a series of questions: If your iPhone breaks, should you have the right to fix it? If you want to fix your iPhone, should you be able to do that yourself, or be forced to go to the manufacturer? And if you are going to fix your iPhone yourself — or pay a third party to help — should you be able to use whatever parts will work? Apple pushed back vocally against criticism of parts pairing, and has recently backed laws in several states that enshrine consumer repair options.

Meta’s AI playbook for manipulated media: more labels, fewer takedowns

Meta has announced changes to its rules on AI-generated content and manipulated media following criticism from its Oversight Board. So, for AI-generated or otherwise manipulated media on Meta platforms like Facebook and Instagram, the playbook appears to be: more labels, fewer takedowns. “Our ‘Made with AI’ labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content,” said Monika Bickert, Meta’s VP of content policy, noting the company already applies ‘Imagined with AI’ labels to photorealistic images created using its own Meta AI feature. Meta’s blog post highlights a network of nearly 100 independent fact-checkers which it says it’s engaged with to help identify risks related to manipulated content. These external entities will continue to review false and misleading AI-generated content, per Meta.

Kristine Gloria of the Aspen Institute encourages women entering AI to embrace curiosity

We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Kristine Gloria leads the Aspen Institute’s Emergent and Intelligent Technologies Initiative — the Aspen Institute being the Washington, D.C.-headquartered think tank focused on values-based leadership and policy expertise. What are some issues AI users should be aware of? What is the best way to responsibly build AI? How can investors better push for responsible AI? One specific task, which I admire Mozilla Ventures for requiring in its diligence, is an AI model card.

A Call for Responsible AI Practices: Brandie Nonnecke of UC Berkeley Urges Investors to Take Action for Women in AI

Nonnecke also co-directs the Berkeley Center for Law and Technology, where she leads projects on AI, platforms and society, and the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks. I’ve been working in responsible AI governance for nearly a decade. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure responsible procurement and use of AI. For women entering the AI field, my advice is threefold: Seek knowledge relentlessly, as AI is a rapidly evolving field. Investors have the power to shape the industry’s direction by making responsible AI practices a critical factor in their investment decisions.

Google warns 10 Indian companies over avoiding Play Store fees, threatens app removal

Google said on Friday it will start removing apps from its Play Store in India if developers fail to comply with its payment policy, taking a definitive stand weeks after the top Indian court granted the Android-maker relief. Without naming them, Google said 10 companies in India, including “many well-established ones,” have not paid Google Play’s fee despite being given three years to prepare. “After giving these developers more than three years to prepare, including three weeks after the Supreme Court’s order, we are taking necessary steps to ensure our policies are applied consistently across the ecosystem, as we do for any form of policy violation globally,” the company wrote in a blog post. “Enforcement of our policy, when necessary, can include removal of non-compliant apps from Google Play.”

This is a developing story. More to come.

Developing Effective Solutions to AI Concerns: Policy Recommendations by Amba Kak

Amba Kak is the executive director of the AI Now Institute, where she helps create policy recommendations to address AI concerns. She was also a senior AI advisor at the Federal Trade Commission and previously worked as a global policy advisor at Mozilla and a legal advisor to India’s telecom regulator on net neutrality. How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry? The tech industry, and AI in particular, remains overwhelmingly white and male and geographically concentrated in very wealthy urban bubbles. By exposing the power dynamics that the tech industry tries very hard to conceal.

AI and Privacy: A Senior Counsel’s Perspective on the Role of Women in the Field

Rashida Richardson is senior counsel, AI at Mastercard. Briefly, how did you get your start in AI? On the second point, law and policy regarding AI development and use is evolving. How can investors better push for responsible AI? Investors can do a better job at defining or at least clarifying what constitutes responsible AI development or use, and taking action when AI actors’ practices do not align. Currently, “responsible” or “trustworthy” AI are effectively marketing terms because there are no clear standards to evaluate AI actor practices.