Sameer Brij Verma, a high-profile investor at the Indian venture firm Nexus, will be leaving the fund later this year, he confirmed to TechCrunch.
Verma plans to launch his own venture firm, with the inaugural fund expected to have a corpus of at least $150 million, a source familiar with the matter said.
The timing of his departure is also peculiar, given that Nexus raised a $700 million fund, its largest, just last year.
Verma has been working with his portfolio startups to hand over board seats to other partners at Nexus for several months, the source said, requesting anonymity.
Verma plans to launch his own fund by the end of the year and intends to adopt a strategy that sets him apart from other investment firms in India.
Nala set out to offer remittance services; now it is building a B2B payments platform too, which it says will guarantee reliability for its app users and for businesses making payments into and out of Africa.
Payments company Nala pivoted to offering remittance services in 2021, tapping the growing money transfer market in Africa and the demand for reliable and affordable services.
In markets like Kenya, Nala has integrated with the mobile money service M-Pesa, enabling users living in the diaspora to pay local bills directly.
However, building the service on the payment rails of other providers meant that the fintech could not guarantee dependability.
This drove the decision to develop its own platform that directly integrates with banks and mobile money providers.
The remittance business's growth coincides with reports that remittance flows to sub-Saharan Africa will continue to rise.
At its GTC conference, Nvidia today announced Nvidia NIM, a new software platform designed to streamline the deployment of custom and pre-trained AI models into production environments.
NIM takes the software work Nvidia has done around model inferencing and optimization and makes it easily accessible: a given model is combined with an optimized inference engine, packaged into a container, and exposed as a microservice.
Nvidia is already working with Amazon, Google and Microsoft to make these NIM microservices available on SageMaker, Kubernetes Engine and Azure AI, respectively.
Some of the Nvidia microservices available through NIM will include Riva for customizing speech and translation models, cuOpt for routing optimizations and the Earth-2 model for weather and climate simulations.
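The article doesn't spell out how a NIM container is consumed, but conceptually the packaged model is just a service you call over HTTP. Below is a minimal, hypothetical Python sketch that assumes such a container is already running locally and exposes an OpenAI-style chat completions endpoint; the URL, port and model name are illustrative placeholders, not documented NIM values.

```python
# Illustrative only: assumes a NIM-style container is already running locally and
# exposes an OpenAI-compatible chat completions endpoint. The host, port, path and
# model name below are placeholders, not documented NIM values.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical local microservice

payload = {
    "model": "example-llm",  # placeholder model identifier
    "messages": [{"role": "user", "content": "Summarize today's GTC announcements."}],
    "max_tokens": 128,
}

response = requests.post(NIM_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The point of the container-plus-microservice packaging is that application code like this stays the same regardless of which model or inference engine sits inside the container.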
“Created with our partner ecosystem, these containerized AI microservices are the building blocks for enterprises in every industry to become AI companies,” Nvidia CEO Jensen Huang said.
While the rest of you are out there touching the proverbial and literal grass, the world’s developers are jamming into conference halls to find out what the next year holds for AI and OSes.
Things kick off next week with NVIDIA’s GTC, with the next few months holding Microsoft Build, Apple’s WWDC and, of course, Google I/O.
Invites just dropped for the latter, which is set for May 14 and 15 at Shoreline Amphitheater in Mountain View, California — the usual spot.
We’ve still got two months to book travel, but we’ll be there (I might pack a hat this time).
While the show is aimed specifically at developers for Google’s various operating systems, things customarily kick off with a Sundar-led keynote.
Apple released visionOS 1.1 on Thursday, with improved user Personas as the most notable feature.
Users can now also set up their Personas without holding the device, by going to Settings > Persona and selecting “Hands-free Capture” mode after the initial setup steps.
The new version of visionOS also introduces Mobile Device Management (MDM), letting enterprises manage their devices.
This enables admins to set up devices for custom configuration, install apps at scale, and perform a remote erase of the device.
“We know that in order to unlock all of [the power of the Apple Vision Pro], businesses are going to want to manage these devices at scale.
Turnitin laid off staff earlier this year, after its CEO forecast that AI would allow it to cut headcount.
People worry that advances in AI will lead to job losses, but rarely does a company’s CEO openly admit that AI will help to reduce their headcount.
TechCrunch learned that Turnitin laid off around 15 people earlier this year, as part of broader organizational changes.
Klarna recently announced that its AI Assistant can do the job of 700 workers, shocking the industry.
(Klarna later clarified that the customer service workers the AI was replacing were hired from outsourcing firms, not direct employees.)
Turnitin confirmed its layoffs in a statement to TechCrunch, but not the headcount:
India has approved allocating up to $15.2 billion (1.26 trillion Indian rupees) to build three new semiconductor plants, including its first semiconductor fab facility — part of the country’s big bid to take on China, Taiwan and other countries in the chip race.
On Thursday, the Indian cabinet approved the country’s first semiconductor fab facility, to be set up by the salt-to-software conglomerate Tata Group and Taiwan’s Powerchip in the Dholera region of Gujarat.
The Indian IT minister Ashwini Vaishnaw told reporters at a media briefing in New Delhi that the construction work for the semiconductor fab will start within 100 days.
“A typical semiconductor fab, construction is a three-four-year time frame.
This will be the country’s third semiconductor unit and will be able to produce 48 million chips per day.
Saining Xie, a computer science professor at NYU, began the research project that spawned the diffusion transformer in June 2022.
Diffusion models typically have a “backbone,” or engine of sorts, called a U-Net.
The diffusion transformer replaces that U-Net with a transformer, an architecture known to scale well: larger and larger transformer models can be trained with significant but not unattainable increases in compute.
The current process of training diffusion transformers potentially introduces some inefficiencies and performance loss, but Xie believes this can be addressed over the long horizon.
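For readers unfamiliar with the architecture, here is a minimal, illustrative sketch (not Xie's or the DiT paper's actual code) of the core idea: the denoiser is a plain transformer operating on a sequence of image patches plus a timestep embedding, rather than a U-Net. All layer sizes are arbitrary placeholders.

```python
# Minimal, illustrative sketch only — not the DiT authors' code. It shows the core
# architectural swap: instead of a U-Net, the denoiser is a plain transformer that
# operates on a sequence of image patches plus a timestep embedding.
import torch
import torch.nn as nn

class TinyDiffusionTransformer(nn.Module):
    def __init__(self, patch_dim=48, d_model=256, n_heads=4, n_layers=6, n_patches=64):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, d_model)          # embed flattened patches
        self.pos_emb = nn.Parameter(torch.zeros(1, n_patches, d_model))
        self.time_emb = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)   # replaces the U-Net backbone
        self.out = nn.Linear(d_model, patch_dim)                 # predict noise per patch

    def forward(self, patches, t):
        # patches: (batch, n_patches, patch_dim); t: (batch, 1) diffusion timestep
        x = self.patch_proj(patches) + self.pos_emb + self.time_emb(t).unsqueeze(1)
        return self.out(self.backbone(x))

noisy = torch.randn(2, 64, 48)                    # a batch of noisy, patchified images
t = torch.rand(2, 1)                              # normalized timesteps
predicted_noise = TinyDiffusionTransformer()(noisy, t)
print(predicted_noise.shape)                      # torch.Size([2, 64, 48])
```

Because the backbone is a stock transformer, it inherits the same scaling behavior as transformers in language modeling, which is the property the article is pointing to.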
“I’m interested in integrating the domains of content understanding and creation within the framework of diffusion transformers.
Massive training data sets are the gateway to powerful AI models — but often, they are also those models’ downfall.
Morcos’ company, DatologyAI, builds tooling to automatically curate data sets like those used to train OpenAI’s ChatGPT, Google’s Gemini and other similar GenAI models.
“However, not all data are created equal, and some training data are vastly more useful than others.
History has shown automated data curation doesn’t always work as intended, however sophisticated the method — or diverse the data.
The largest vendors today, from AWS to Google to OpenAI, rely on teams of human experts and (sometimes underpaid) annotators to shape and refine their training data sets.
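DatologyAI has not published its methods, but a toy sketch illustrates the kind of automated curation the article describes: removing exact duplicates and applying a crude heuristic quality filter to a text corpus. The thresholds and heuristics below are invented for illustration only.

```python
# Illustrative sketch only — not DatologyAI's actual pipeline. It removes exact
# duplicates and applies a crude word-count filter to a corpus of text samples.
import hashlib

def curate(samples, min_words=5, max_words=2000):
    seen = set()
    curated = []
    for text in samples:
        digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
        if digest in seen:
            continue                                # drop exact duplicates
        seen.add(digest)
        n_words = len(text.split())
        if min_words <= n_words <= max_words:       # drop fragments and extreme outliers
            curated.append(text)
    return curated

corpus = [
    "The cat sat on the mat and looked around the room.",
    "The cat sat on the mat and looked around the room.",   # duplicate, will be dropped
    "ok",                                                   # too short, will be dropped
]
print(curate(corpus))   # keeps only the first sample
```

Real curation systems go far beyond heuristics like these — near-duplicate detection, model-based quality scoring and rebalancing — and that added sophistication is exactly where automated curation has historically gone wrong, as the article notes.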
The deal’s latest hurdle is the European Commission, which has set a February 14 deadline to reach a final decision.
According to a new report, the EU regulatory body is set to vote against the acquisition, citing the perceived anti-competitive nature of the deal.
In July, Amazon announced that it was lowering its offer from $61 to $51.75 per share.
The day the initial deal was announced, iRobot cut its headcount by 10% (around 140 people) as part of a restructure.
As of this writing, share prices have dipped below $20, around one-third of where they were when the deal was announced.