Overcoming the Challenge of Constructing a Feasible Pricing Strategy for Generative Artificial Intelligence Capabilities

In October, Box unveiled a new pricing approach for the company’s generative AI features. Instead of a flat rate, the company designed a consumption-based model: each user gets 20 credits per month, and each AI task is charged a single credit. If a customer surpasses that allowance, it’s time for a conversation with a salesperson about buying additional credits. Spang says, for starters, that in spite of the hype, generative AI is clearly a big leap forward, and software companies need to look for ways to incorporate it into their products.
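
Box hasn’t published implementation details, but the arithmetic of the model is simple enough to sketch. The `CreditLedger` class below is purely illustrative (not Box’s API); it only encodes the 20-credit allowance and one-credit-per-task rule described above.

```python
from dataclasses import dataclass

MONTHLY_CREDITS = 20  # per-user monthly allowance described in the article
COST_PER_TASK = 1     # each AI task is charged a single credit

@dataclass
class CreditLedger:
    """Hypothetical per-user ledger for the consumption-based model."""
    used: int = 0

    def charge(self, tasks: int = 1) -> bool:
        """Deduct credits for `tasks` AI events.

        Returns False once the monthly allowance is exhausted, i.e. the point
        at which the article says it's time to talk to sales about buying
        additional credits.
        """
        cost = tasks * COST_PER_TASK
        if self.used + cost > MONTHLY_CREDITS:
            return False
        self.used += cost
        return True

ledger = CreditLedger()
print(all(ledger.charge() for _ in range(20)))  # True: 20 tasks fit the allowance
print(ledger.charge())                          # False: the 21st task needs more credits
```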

Nabla Secures Additional $24M Funding for Revolutionary AI Tool Supporting Physicians

Nabla has been working on an AI copilot for doctors and other medical staff. The company transcribes each consultation using a combination of an off-the-shelf speech-to-text API from Microsoft Azure and its own speech-to-text model (a fine-tuned version of the open-source Whisper model). A large language model refined with medical data and health-related conversations then identifies the important data points in the consultation (medical vitals, drug names, pathologies, etc.). Once the LLM has processed the pseudonymized transcript, Nabla de-pseudonymizes the output. Doctors can also give their approval, and ask for the patient’s consent, to share medical notes with Nabla so that they can be used to correct transcription errors.
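
Nabla’s code isn’t public, so the following is only a toy sketch of the pseudonymize, extract, de-pseudonymize flow implied above. The function names, regex, and placeholder scheme are assumptions for illustration, and the speech-to-text step is omitted.

```python
import re

def pseudonymize(text: str, patient_name: str) -> tuple[str, dict]:
    """Swap identifying details for placeholders before any LLM sees the text."""
    mapping = {"<PATIENT>": patient_name}
    return text.replace(patient_name, "<PATIENT>"), mapping

def extract_data_points(transcript: str) -> dict:
    """Toy stand-in for the medically fine-tuned LLM that pulls out vitals,
    drug names, pathologies, etc. from the transcript."""
    bp = re.search(r"\b\d{2,3}/\d{2,3}\b", transcript)
    return {"blood_pressure": bp.group(0) if bp else None,
            "note": "Consultation note for <PATIENT>."}

def de_pseudonymize(note: dict, mapping: dict) -> dict:
    """Restore the real identifiers in the LLM output."""
    restored = {}
    for key, value in note.items():
        if isinstance(value, str):
            for placeholder, real in mapping.items():
                value = value.replace(placeholder, real)
        restored[key] = value
    return restored

transcript = "Jane Doe reports headaches; blood pressure measured at 128/84."
safe_text, mapping = pseudonymize(transcript, "Jane Doe")
print(de_pseudonymize(extract_data_points(safe_text), mapping))
# {'blood_pressure': '128/84', 'note': 'Consultation note for Jane Doe.'}
```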

Revolutionizing Robot Training: Google’s Groundbreaking Methods Utilizing Video and Large Language Models

Google DeepMind’s robotics researchers are one of a number of teams exploring the space’s potential. The newly announced AutoRT is designed to harness large foundation models to a number of different ends. In a standard example given by the DeepMind team, the system begins by leveraging a visual language model (VLM) for better situational awareness. A large language model, meanwhile, suggests tasks that can be accomplished by the hardware, including its end effector. LLMs are understood by many to be the key to unlocking robots that effectively understand more natural language commands, reducing the need for hard-coded skills.
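
AutoRT is described as an orchestration layer rather than a single model. Below is a heavily simplified, hypothetical sketch of the VLM-describes / LLM-proposes pattern summarized above, with both model calls stubbed out; none of these interfaces are DeepMind’s.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    end_effector: str  # e.g. a parallel gripper

def describe_scene(image: bytes) -> str:
    """Stub for the visual language model that gives situational awareness."""
    return "A kitchen counter with a sponge, a mug, and a closed drawer."

def propose_tasks(scene: str, robot: Robot) -> list[str]:
    """Stub for the large language model that suggests tasks the hardware
    (including its end effector) could plausibly carry out."""
    prompt = (
        f"Scene: {scene}\n"
        f"End effector: {robot.end_effector}\n"
        "List manipulation tasks this robot could attempt."
    )
    # A real system would send `prompt` to an LLM; here we return canned output.
    _ = prompt
    return ["pick up the sponge", "wipe the counter", "open the drawer"]

robot = Robot(end_effector="parallel gripper")
scene = describe_scene(image=b"")
for task in propose_tasks(scene, robot):
    print(task)
```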

Arkon Energy Secures $110M Investment for Expanding U.S. Bitcoin Mining Capacity and Introducing AI Cloud Service in Norway

Arkon Energy, a data center infrastructure company, closed a $110 million private funding round to expand its operations, the company’s CEO Josh Payne shared exclusively with TechCrunch. “These sites appeal to both bitcoin miners and AI [or] machine learning clients who have very high power computing demands,” Payne said. “We are essentially a landlord who owns the underlying infrastructure assets.” Arkon’s business model focuses on strategically acquiring distressed data center assets across the globe. “The current and future demand for data center capacity of all types that we are seeing globally, but especially in the U.S., is unprecedented and monumental.” Arkon aims to fill that gap by providing the underlying infrastructure layer that the AI sector relies on.

Unleashing Supercomputer Power: EU’s Plan to Boost AI Startup Training

The plan is for “centers of excellence” to be set up to support the development of dedicated AI algorithms that can run on the EU’s supercomputers, they added. AI startups are more accustomed to using dedicated compute hardware provided by US hyperscalers to train their models than to tapping the processing power of supercomputers as a training resource. Using its supercomputing resources to fire up AI startups specifically has emerged as a more recent strategic priority, after the European Commission president announced the compute-access program for AI model training this fall. It’s still early days for the EU’s ‘supercompute for AI’ push, so it’s unclear whether dedicated access has yet delivered much model-training upside. But the early presence of Mistral in the EU’s supercomputing access program may suggest an alignment in the thinking.

Safety Measures Strengthened: OpenAI Grants Board Final Authority over Risky AI

OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. In-production models are governed by a “safety systems” team; this covers, say, systematic abuses of ChatGPT that can be mitigated with API restrictions or tuning. Frontier models in development get the “preparedness” team, which tries to identify and quantify risks before a model is released: models are scored across risk categories, and only those rated “medium” or below (post-mitigation) can be deployed, while only those rated “high” or below can be developed further. On top of the technical side, OpenAI is creating a “cross-functional Safety Advisory Group” that reviews the researchers’ reports and makes recommendations from a higher vantage point.
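
The preparedness framework is policy, not code, but the gating rule above reduces to a small decision function. The risk levels and thresholds follow OpenAI’s published framework; the function itself is only an illustration, not anything OpenAI has released.

```python
RISK_LEVELS = ["low", "medium", "high", "critical"]

def gate(post_mitigation_scores: dict[str, str]) -> dict[str, bool]:
    """Apply the thresholds described above to a model's per-category scores."""
    worst = max(post_mitigation_scores.values(), key=RISK_LEVELS.index)
    return {
        # Deployment requires every category at "medium" or below.
        "deployable": RISK_LEVELS.index(worst) <= RISK_LEVELS.index("medium"),
        # Further development requires "high" or below.
        "develop_further": RISK_LEVELS.index(worst) <= RISK_LEVELS.index("high"),
    }

print(gate({"cybersecurity": "medium", "model_autonomy": "high"}))
# {'deployable': False, 'develop_further': True}
```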

The Advent of Superhuman AI: OpenAI’s Mission to Develop Control Tools

OpenAI formed the Superalignment team in July to develop ways to steer, regulate and govern “superintelligent” AI systems — that is, theoretical systems with intelligence far exceeding that of humans. Superalignment is a bit of a touchy subject within the AI research community. “I think we’re going to reach human-level systems pretty soon, but it won’t stop there — we’re going to go right through to superhuman systems … So how do we align superhuman AI systems and make them safe?” The approach the team’s settled on for now involves using a weaker, less-sophisticated AI model (e.g. GPT-2) to guide a more advanced, sophisticated model (e.g. GPT-4). It’s an analogy: the weak model is meant to be a stand-in for human supervisors while the strong model represents superintelligent AI.
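
The setup generalizes beyond language models: a weak supervisor produces imperfect labels, and a stronger model learns from those labels rather than from ground truth. Below is a minimal sketch of that shape using small scikit-learn classifiers as stand-ins for the weak supervisor and strong student; the dataset, models, and splits are assumptions for illustration, not OpenAI’s experiments.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic task with three splits: weak-supervisor training, strong-student
# training (labeled only by the weak model), and a held-out test set.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, test_size=2000, random_state=0)
X_strong, X_test, _, y_test = train_test_split(X_rest, y_rest, test_size=1000, random_state=0)

# The "weak supervisor" sees ground truth; it stands in for human labelers.
weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)

# The "strong student" never sees ground truth, only the weak model's labels,
# mirroring how human-level supervision would guide a superhuman system.
strong = GradientBoostingClassifier(random_state=0).fit(X_strong, weak.predict(X_strong))

print(f"weak supervisor accuracy: {weak.score(X_test, y_test):.3f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.3f}")
```

The question the Superalignment team studies is whether the strong model can outperform the imperfect supervision it was trained on, which is what comparing the two accuracy numbers above probes in miniature.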

Epic Games Emerges Victorious in Antitrust Conflict Against Google – What Awaits in the Future?

What is immediately changing as a result of this ruling is the legality surrounding the app store business model itself — and potentially others. “What we know right now is that this is going to impact the walled garden business model Google and Apple and other companies have enjoyed for a while,” Swanson said. In fact, the legal risk from this business model may encourage other businesses to change, even without being dragged to court. Apple didn’t regularly engage in side deals (though it considered one with Netflix), nor did it pay developers to launch on its App Store instead of their own stores, as Apple only offers one route to app distribution: the App Store. “Just because it is your business model does not mean it is legal or that it’s right,” VanMeter pointed out.