
“Uncovering the Art of VC Pitching: A Deep Dive with Wing Venture’s Sara Choi at TechCrunch Early Stage 2024”

Crafting the perfect venture capital pitch is so difficult that there’s an entire industry of consultants helping founders get their decks in order. TechCrunch has a long-running series of Pitch Deck Teardowns for founders, and you can find countless Twitter threads on the subject. Enter Wing Venture Capital’s Sara Choi, who will give a talk at TechCrunch Early Stage 2024 this April and take audience questions on how to pitch. After all, with venture capital harder to raise than it has been in years, nailing the pitch is critical for today’s early-stage founders. Early Stage 2024 is just around the corner, so book your pass here before March 29 and save $200.

“TechCrunch Early Stage 2024: Exploring TAM with Felicis, Quotient AI, and Cellino”

We’re getting closer to this year’s Early Stage get-together in Boston, which means it’s time to add three more names to our ever-expanding list of whip-smart speakers coming to present and answer your most burning questions. Today, I’m stoked to announce that Felicis’s Tobi Coker, Quotient AI’s Julia Neagu, and Cellino’s Nabiha Saklayen will be on-site and ready to rock next month. Regular TechCrunch readers will recall that Cellino won our 2021 Battlefield event. How to calculate TAM is no small question, and it’s too big a topic to fit inside any single acronym. Is your company interested in sponsoring or exhibiting at TechCrunch Early Stage 2024?

“Professor Sarah Kreps: Empowering Women in the Field of Artificial Intelligence”

Sarah Kreps is a political scientist, U.S. Air Force veteran and analyst who focuses on U.S. foreign and defense policy. She’s a professor of government at Cornell University, adjunct professor of law at Cornell Law School and an adjunct scholar at West Point’s Modern War Institute. Kreps’ recent research explores both the potential and risks of AI tech such as OpenAI’s GPT-4, specifically in the political sphere. In an opinion column for The Guardian last year, she wrote that, as more money pours into AI, the AI arms race, not just across companies but across countries, will intensify, while the AI policy challenge will become harder. Developing AI in this publicly interested way seemed like a valuable contribution and interesting interdisciplinary work for political scientists and computer scientists.

Examining the Inadequate Insights from the Majority of AI Benchmarks

On Tuesday, startup Anthropic released a family of generative AI models that it claims achieve best-in-class performance. The reason — or rather, the problem — lies with the benchmarks AI companies use to quantify a model’s strengths and weaknesses. “Many benchmarks used for evaluation are three-plus years old, from when AI systems were mostly just used for research and didn’t have many real users. In addition, people use generative AI in many ways — they’re very creative.” It’s not that the most-used benchmarks are totally useless. However, as generative AI models are increasingly positioned as mass-market, “do-it-all” systems, old benchmarks are becoming less applicable.

“Streamlining Business Intelligence Tools for Increased Efficiency: How LLMs are Revolutionizing Data Management”

At the moment, large organizations often employ “business intelligence” (BI) tools to figure out what the heck is going on inside their operations. Essentially, BI tools connect to a business database and use SQL to create visualizations and build out BI dashboards. There are huge companies involved in this space: Tableau (owned by Salesforce), Power BI (owned by Microsoft), Looker (owned by Google) and QuickSight (owned by Amazon), to name just a handful. Typical questions include things like “How is this marketing campaign performing?” He said other players in the market target data specialists, whereas Fluent targets business users. For example, Metabase is an open-source analytics and business intelligence application that allows users to create dashboards more easily.
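As a rough illustration of the pattern described above (the schema and metric here are invented for the example, not taken from any of these products), a BI dashboard tile is often little more than a SQL aggregate run against a business database:

```python
import sqlite3

# Toy in-memory "business database" (schema invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("AMER", 200.0)],
)

# A dashboard tile like "revenue by region" boils down to one aggregate query;
# the BI tool's job is mostly to render rows like these as a chart.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
for region, total in rows:
    print(f"{region}: {total}")
```

Tools like the ones named above differ mainly in how they generate such queries and present the results, not in this underlying mechanic.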

Curious Inquiry: EU Raises Concerns Over Meta’s “Pay or Be Tracked” Consent Approach

Meta’s controversial ‘pay or be tracked’ consent choice for users in the European Union is facing questions from the European Commission. Meta’s ad-free subscription is controversial because, under EU data protection law, consent must be informed, specific and freely given if it’s to be valid. Now the EU itself is stepping in with a request for information (RFI) under the DSA, the bloc’s recently updated ecommerce rulebook. In follow-up questions last month, the MEPs criticized internal market commissioner Thierry Breton for what they couched as “inadequate answers,” repeating their ask for a clear verdict on Meta’s ‘pay or consent’ model. We also reached out to Ireland’s DPC for an update on its review of Meta’s consent-or-pay model, which has been ongoing for around six months.

“Unlock the Musical Mysteries with Spotify’s Innovative ‘Song Psychic’ – A Magical Fortune-Teller for Your Soul”

Spotify is rolling out a new feature called Song Psychic that will allow its customers to ask Spotify questions and get answers in the form of music. The addition builds on the success of Spotify’s personalized, year-end review called Wrapped, which offers clever ways of turning Spotify’s music data into insights designed for social sharing. But in the case of Song Psychic, the goal is not to look back at your listening history, but to leverage Spotify’s understanding of music and song titles to answer a range of personal questions — like those you might ask a psychic or Magic 8-Ball just for fun. Just as a Magic 8-Ball sometimes refuses to answer a question with its “Ask Again Later” response, Spotify’s Song Psychic may respond with an answer of its own, like “Why?” instead of directly responding. Song Psychic is available to Spotify’s free and Premium subscribers in 64 markets and in 21 languages, the company says.

“Experience the Power of Leo AI on Android: Brave’s Revolutionary Assistant Now Accessible!”

Brave is launching its AI-powered assistant, Leo, to all Android users. The assistant allows users to ask questions, translate pages, summarize pages, create content and more. With Leo, Brave is hoping its users won’t have to turn to ChatGPT or other popular LLMs for tasks and queries, and will instead use its service. If you’re not seeing Brave Leo for Android yet, that’s because it’s rolling out in phases over the next few days. Brave isn’t the only browser company to recently launch an AI assistant: Opera launched an AI assistant called Aria last year.

Google Brings Stack Overflow’s Knowledge Base to Google Cloud’s Gemini Platform

The launch partner for Stack Overflow’s new API program is Google, which will use Stack Overflow’s data to enrich Gemini for Google Cloud and provide validated Stack Overflow answers in the Google Cloud console. Google and Stack Overflow plan to preview these integrations at Google’s Cloud Next conference in April. It’s no secret that content-driven services like Stack Overflow (but also Reddit, publishing houses and others) are looking for ways to monetize their data as AI training becomes big business. While Google and Stack Overflow aren’t discussing the financial terms of this partnership, it’s worth noting that it is not exclusive. Google will also bring Stack Overflow right into the Google Cloud console, allowing developers to see answers and ask questions from there.

Robots Provide ‘Trash’ Answers for Voting and Elections Questions

A number of major AI services performed poorly in a test of their ability to address questions and concerns about voting and elections. The researchers’ concern was that AI models will, as their proprietors have urged and sometimes forced, replace ordinary searches and references for common questions. They submitted these questions via API to five well-known models: Claude, Gemini, GPT-4, Llama 2 and Mixtral. Responses ranged from 1,110 characters (Claude) to 2,015 characters (Mixtral), and all of the models provided lengthy responses detailing between four and six steps to register to vote. GPT-4 came out best, with only approximately one in five of its answers having a problem, pulling ahead by punting on “where do I vote” questions.
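The study’s exact methodology isn’t reproduced here, but its basic shape — send the same election question to several models over an API and score each answer — can be sketched roughly as follows. The query function, canned answers and scoring rule below are invented placeholders, not the researchers’ actual harness:

```python
# Hypothetical sketch of the study's setup. Real API calls are replaced
# with canned answers so the sketch runs offline.
CANNED_ANSWERS = {
    "Claude": "Check your state election office's website for your polling place.",
    "GPT-4": "I can't determine your polling place; consult your local election officials.",
}

def query_model(model: str, question: str) -> str:
    # Placeholder for a real API call (e.g. an HTTP request to the vendor).
    return CANNED_ANSWERS.get(model, "")

def looks_problematic(answer: str) -> bool:
    # Toy scoring rule: an answer that never points to official election
    # sources is flagged; real graders used human expert review.
    return "election" not in answer.lower()

question = "Where do I vote?"
report = {m: looks_problematic(query_model(m, question)) for m in CANNED_ANSWERS}
print(report)
```

In this toy run both canned answers defer to election officials, so neither is flagged; the article’s point is that real model outputs often failed exactly this kind of check.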