With the AI infrastructure push reaching staggering proportions, there's more pressure than ever to squeeze as much inference as possible out of the GPUs companies already have. And for researchers with expertise in a particular technique, it's a good time to raise funding.
That's part of the driving force behind Tensormesh, launching out of stealth this week with $4.5 million in seed funding. The round was led by Laude Ventures, with additional angel funding from database pioneer Michael Franklin.
Tensormesh is using the money to build a commercial version of the open-source LMCache utility, launched and maintained by Tensormesh co-founder Yihua Cheng. Used properly, LMCache can reduce inference costs by as much as 10x, an impact that has made it a staple in open-source deployments and drawn integrations from heavy hitters like Google and Nvidia. Now Tensormesh is planning to parlay that academic reputation into a viable business.
The core of the product is the key-value cache (or KV cache), a memory system used to process complex inputs more efficiently by condensing them down to their key values. In traditional architectures, the KV cache is discarded at the end of each query, but Tensormesh co-founder and CEO Junchen Jiang argues that this is an enormous source of inefficiency.
"It's like having a really smart analyst reading all the data, but they forget what they've learned after each question," says Jiang.
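To make the mechanism concrete, here is a toy sketch of what a KV cache does during decoding: attention keys and values for earlier tokens are stored once and reused, so each new token only computes its own projection instead of reprocessing the whole sequence. This is a minimal illustration of the general technique, not LMCache's or any specific framework's implementation; all names here are invented for the example.

```python
import numpy as np

def attend(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    scores = K @ q / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

class KVCache:
    """Toy decoder state: past keys/values are appended once and reused,
    so each decoding step avoids recomputing them for earlier tokens."""
    def __init__(self):
        self.K, self.V = [], []

    def step(self, k_new, v_new, q):
        # Cache only this token's key and value; earlier ones are reused.
        self.K.append(k_new)
        self.V.append(v_new)
        return attend(q, np.array(self.K), np.array(self.V))
```

Discarding this state after a query means the next query rebuilds it from scratch, which is exactly the waste Jiang describes.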
Instead of discarding that cache, Tensormesh's systems hold on to it, allowing it to be redeployed when the model executes a similar process in a separate query. Because GPU memory is so precious, this can mean spreading the data across several different storage layers, but the reward is significantly more inference power for the same server load.
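The spreading-across-storage-layers idea can be sketched as a tiered cache: entries evicted from a small fast tier (standing in for GPU memory) spill into a larger slow tier (standing in for CPU memory or disk) rather than being thrown away, so a repeated prefix can still skip recomputation. This is a simplified sketch under assumed semantics; the class and tier names are hypothetical and do not reflect Tensormesh's actual design.

```python
from collections import OrderedDict

class TieredKVStore:
    """Toy two-tier KV-cache store: a capacity-limited 'GPU' tier backed
    by a larger 'CPU' tier. Evictions spill down instead of discarding."""
    def __init__(self, gpu_slots=2):
        self.gpu = OrderedDict()   # fast tier, LRU-evicted
        self.cpu = {}              # slow tier, effectively unbounded here
        self.gpu_slots = gpu_slots

    def put(self, key, kv):
        self.gpu[key] = kv
        self.gpu.move_to_end(key)
        if len(self.gpu) > self.gpu_slots:
            old_key, old_kv = self.gpu.popitem(last=False)
            self.cpu[old_key] = old_kv   # spill, don't discard

    def get(self, key):
        if key in self.gpu:
            self.gpu.move_to_end(key)    # mark as recently used
            return self.gpu[key]
        if key in self.cpu:
            kv = self.cpu.pop(key)
            self.put(key, kv)            # promote back to the fast tier
            return kv
        return None                      # miss: caller must recompute
```

The hard engineering problem Jiang alludes to is doing this promotion and spilling fast enough that fetching a cached entry stays cheaper than recomputing it.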
The change is particularly powerful for chat interfaces, since models need to continually refer back to the growing chat log as the conversation progresses. Agentic systems have a similar issue, with a growing log of actions and goals.
In theory, these are changes AI companies can make on their own, but the technical complexity makes it a daunting task. Given the Tensormesh team's research on the technique and the intricacy of the problem itself, the company is betting there will be plenty of demand for an out-of-the-box product.
"Keeping the KV cache in a secondary storage system and reusing it efficiently without slowing the whole system down is a very challenging problem," says Jiang. "We've seen people hire 20 engineers and spend three or four months to build such a system. Or they can use our product and do it very efficiently."
