“Fueling Up for Success: Revolutionary Changes to Space Diversity with Fresh Leadership and K-12 National Space Day Focus”

“When we think about our nation’s IP and leadership globally, it’s synonymous with leadership in space,” Stricklan told me in an interview ahead of the event. To that end, Space Workforce 2030 has started with the basics: collecting and understanding the data in order to establish a baseline. Thousands of teachers have already signed on, Stricklan said, and the group expects to see a lot of engagement next month. “They have different touchpoints to get to those that just don’t understand there could be a future for them in a STEM-related career,” Stricklan told me. You can learn more about the Space Workforce 2030 effort here.

Exploring the Significance of Stability AI’s CEO Resignation for the Future of AI Startups

What do you call an AI company suffering from very public gyrations in its business health, market position, and leadership structure? Well, you might call it Stability AI. Stability AI’s latest leadership shakeup is no joke: CEO Emad Mostaque is departing to work on AI products that are less centralized, which is to say, not owned and built by a single company like, say, Stability AI. The startup’s fundraising journey is well known to tech folks, while its best-known product, Stable Diffusion, is known even more broadly. We dig into all that and more in today’s TechCrunch Minute:

Safety Measures Strengthened: OpenAI Grants Board Final Authority over Risky AI

OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. In-production models are governed by a “safety systems” team; this covers, say, systematic abuses of ChatGPT that can be mitigated with API restrictions or tuning. Frontier models in development get the “preparedness” team, which tries to identify and quantify risks before a model is released. Under the framework, only models rated medium risk or below can be deployed, and only those rated high or below can be developed further. For that reason OpenAI is creating a “cross-functional Safety Advisory Group” that will sit on top of the technical side, reviewing the boffins’ reports and making recommendations from a higher vantage point.