spencer9714 21 hours ago [-]
Interesting concept. One thing I'm curious about: if I'm in a cohort for something like DeepSeek V3 and another user spins up a heavy 24/7 job, how do you keep TTFT from degrading? vLLM's continuous batching helps, but there's still a physical limit with shared VRAM/compute. I've been grappling with this exact 'noisy neighbor' issue while building Runfra. We actually ended up moving toward a credit-per-task model on idle GPUs specifically to avoid that resource contention entirely.
Curious how you’re thinking about isolation here. Is there any hard guarantee on a 'slice' of the GPU, or is it mostly just handled by the vLLM scheduler?
freedomben 1 days ago [-]
This is an excellent idea, but I worry about fairness during resource contention. I don't run queries often, but when I do they're often big and long. I wouldn't want to eat up the whole system when other users need it, but I would also want to have the cluster when I need it. How do you address a case like this?
jrandolf 1 days ago [-]
We implement rate limiting and queuing to ensure fairness, but if a massive number of people submit huge, long queries, then there will be waits. The question is whether people will actually do this; more often than not, users are idle.
mogili1 1 days ago [-]
A rate limit is essentially a token limit
ibejoeb 1 days ago [-]
It depends on how it's implemented. If it's a fixed window, then your absolute ceiling is tokens per window times the number of windows in a month. If it's a function of other usage, like a timeshare, you're still paying some price for the month and you get what you get, without paying more per token. Either way there's an intrinsic limit based on how many tokens the model can process on that GPU in a month, even if it's only you.
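A quick sketch of that intrinsic ceiling, using the ~3,000 tok/s node-peak figure quoted elsewhere in the thread (the throughput number is an assumption for illustration):

```python
# Sketch: the intrinsic monthly token ceiling for one GPU node.
# The throughput figure is hypothetical, taken from the thread's
# ~3,000 tok/s node-peak estimate.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def monthly_token_ceiling(throughput_tok_s: float) -> int:
    """Upper bound on tokens one node can emit in a month,
    even with a single user and zero contention."""
    return int(throughput_tok_s * SECONDS_PER_MONTH)

print(monthly_token_ceiling(3000))  # 7776000000 -- ~7.8B tokens/month at best
```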
delusional 21 hours ago [-]
Time x capacity is also a limit. There's always a limit.
freedomben 1 days ago [-]
Is there any way to buy into a pool of people with similar usage patterns? Maybe I'm overthinking it, but just wondering
ssl-3 1 days ago [-]
I think it'd be best to pool with people with different patterns, not the same patterns. Perhaps it would be best to pool with people in different timezones, and/or with different work/sleep schedules.
If everyone in a pool uses it during the ~same periods and sleeps during the ~same periods, then the node would oscillate between contention and idle -- every day. This seems largely avoidable.
(Or, darker: Maybe the contention/idle dichotomy is a feature, not a bug. After all, when one has control of $14k/month of hardware that is sitting idle reliably-enough for significant periods every day, then one becomes incentivized to devise a way to sell that idle time for other purposes.)
vineyardmike 21 hours ago [-]
This is basically why the big companies can sell subscriptions for cheaper than API costs. First priority can go to API users, lower priority subscription users get slotted in as space/SLO allows, and then sell the remaining idle GPU to batch users and spare training. Oh and geography shift as necessary for different nations working hours.
petterroea 1 days ago [-]
To be fair, this is the price you pay for sharing a GPU. Probably good for stuff that doesn't need to be done "now" but that you can just launch and run in the background. I bet some graphs showing when the GPU is busiest would be useful as well
pokstad 1 days ago [-]
This problem sounds like an excellent opportunity. We need a race to the bottom for hosting LLMs to democratize the tech and lower costs. I cheer on anyone who figures this out.
mememememememo 18 hours ago [-]
This is classic queuing theory, rate limits etc. I don't have an answer but I would look there.
taraindara 13 hours ago [-]
What if you could group multiple cohorts? Long queries run on the group that commonly does those; shorter queries queue faster because they'll execute faster.
zozbot234 23 hours ago [-]
Ultimately the most sensible way of handling this is you end up with "surge pricing" for the highest-priority tokens whenever the inference platform is congested, over and above the base subscription (but perhaps ultimately making the subscription a bit cheaper).
cyanydeez 23 hours ago [-]
Also, cache ejection during contention will degrade everyone's service.
I question whether they actually understand LLMs at scale.
zozbot234 23 hours ago [-]
I suppose it's meant to be a "minimum viable" third-party inference platform: you're literally selling subscription-based access (i.e. fixed price, not PAYGO by token) to a single GPU cluster, which only launches once enough users subscribe to make it viable. That's very nice of them; it works like a Kickstarter/group-coupon model and creates a guaranteed win-win for the users. But they could easily expand beyond the minimum cluster size, which would somewhat improve efficiency. (DeepSeek themselves scale their model out over huge numbers of GPUs, which is how they manage to price their tokens quite cheap.)
QuantumNomad_ 1 days ago [-]
> How does billing work?
> When you join a cohort, your card is saved but not charged until the cohort fills. Stripe holds your card information — we never store it. Once the cohort fills, you are charged and receive an API key for the duration of the cohort.
Have any cohorts filled yet?
I’m interested in joining one, but only if it’s reasonable to assume that the cohort will be full within the next 7 days or so. (Especially because in a little over a week I’m attending an LLM-centered hackathon where we can either use AWS LLM credits provided by the organizer, or we can use providers of our own choosing, and I’d rather use either yours or my own hardware running vLLM than the LLM offerings and APIs from AWS.)
I’d be pretty annoyed if I join a cohort and then it takes like 3 months before the cohort has filled and I can begin to use it. By then I will probably have forgotten all about it and not have time to make use of the API key I am paying you for.
jrandolf 1 days ago [-]
No cohorts have been filled yet. We're still early. We are seeing reservations pick up quickly, but I'd be able to give you a more concrete estimate of fill velocity after about a week.
That said, we're planning to add a 7-day window: if a cohort doesn't fill within 7 days of your reservation, it cancels automatically and your card is released. We don't want anyone's payment method sitting in limbo indefinitely.
tcdent 19 hours ago [-]
This is a fantastic idea.
On a nonzero number of occasions I have priced out running an inference server with a model that is actually usable, and the annual cost is astronomical.
mmargenot 1 days ago [-]
This is a great idea! I saw a similar (inverse) idea the other day for pooling compute (https://github.com/michaelneale/mesh-llm). What are you doing for compute in the backend? Are you locked into a cohort from month to month?
kaoD 1 days ago [-]
How is the time sharing handled? I assume if I submit a unit of work it will load to VRAM and then run (sharing time? how many work units can run in parallel?)
How large is a full context window in MiB and how long does it take to load the buffer? I.e. how many seconds should I expect my worst case wait time to take until I get my first token?
jrandolf 1 days ago [-]
vLLM handles GPU scheduling, not sllm. The model weights stay resident in VRAM permanently so there's no loading/unloading per request. vLLM uses continuous batching, so incoming requests are dynamically added to the running batch every decode step and the GPU is always working on multiple requests simultaneously. There is no "load to VRAM and run" per request; it's more like joining an already-running batch.
TTFT is under 2 seconds average. Worst case is 10-30s.
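A toy sketch of the continuous-batching idea described above (an illustration only, not vLLM's actual scheduler): between decode steps, newly arrived requests join the running batch and finished ones leave, so nobody waits for a whole batch to drain.

```python
from collections import deque

def run(requests, max_batch=4):
    """Toy continuous batching. requests: list of (name, tokens_to_generate).
    Returns the order in which requests finish."""
    waiting = deque(requests)
    running = {}   # name -> tokens remaining
    finished = []
    while waiting or running:
        # admit new requests up to the batch limit (the "continuous" part)
        while waiting and len(running) < max_batch:
            name, n = waiting.popleft()
            running[name] = n
        # one decode step: every running request emits one token
        for name in list(running):
            running[name] -= 1
            if running[name] == 0:
                del running[name]
                finished.append(name)
    return finished

print(run([("a", 3), ("b", 1), ("c", 2)]))  # ['b', 'c', 'a'] -- short requests finish first
```

Note how request "b" joins the already-running batch and finishes first rather than waiting behind "a", which is the behavior the reply describes.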
kaoD 1 days ago [-]
> The model weights stay resident in VRAM permanently so there's no loading/unloading per request.
Yes, I was thinking about context buffers, which I assume are not small in large models. That has to be loaded into VRAM, right?
If I keep sending large context buffers, will that hog the batches?
1 days ago [-]
jrandolf 1 days ago [-]
Not if you are the only one. We have rate limits to prevent this in case, idk, you share your key with 1000 people lol.
ninjha 1 days ago [-]
> how many work units can run in parallel
Not the original author, but batching is one very important trick for making inference efficient; you can reasonably do tens to low hundreds of requests in parallel (depending on model size and GPU size) with very little performance overhead
OJFord 10 hours ago [-]
Especially with only 1mo commitment, what happens if there's a lot of churn after the first month – more people leave a cohort than are waiting for one? The whole cohort is then waiting for it to fill again before it restarts? And will people waiting for the next cohort to fill automatically be reassigned to the last (now not full) one anyway, or would there then be multiple partially filled cohorts for a single spec?
I like the idea, I just wouldn't want my subscription to suddenly be on hold because a peer decided to stop theirs.
varunr89 1 days ago [-]
$40/mo for DeepSeek R1 seems steep compared to a pro sub on OpenAI/Claude unless you run 24x7. I'm not sure how sharing is making this affordable.
lelanthran 1 days ago [-]
> $40/mo for DeepSeek R1 seems steep compared to a pro sub on OpenAI/Claude unless you run 24x7.
"Running 24x7" is what people want to do with openclaw.
mrklol 12 hours ago [-]
Seems like they have a rate limit so it is kinda the same as normal subs - don’t really see the advantage yet
lelanthran 5 hours ago [-]
> Seems like they have a rate limit so it is kinda the same as normal subs - don’t really see the advantage yet
It's not really the same "limit", AIUI.
SLLM: Being capped to the rate would make your openclaw run slowly, but still be able to work 24x7.
Normal subs: Hitting the limit means your openclaw doesn't run at all for hours.
wongarsu 6 hours ago [-]
Presumably the rate limit is much higher
mememememememo 18 hours ago [-]
Yes, you don't choose this for the price, but because you want to control your dependencies.
tensor-fusion 1 days ago [-]
Interesting direction. One adjacent pattern we've been working on is a bit less about partitioning a shared node for more tokens, and more about letting developers keep a local workflow while attaching to an existing remote GPU via a share link / CLI / VS Code path. In labs and small teams we've found the pain is often not just allocation, but getting access into the everyday workflow without moving code + environment into a full remote VM flow. Curious whether your users mostly want higher GPU utilization, or whether they also want workflow portability from laptops and homelabs. I'm involved with GPUGo / TensorFusion, so that's the lens I'm looking through.
vova_hn2 1 days ago [-]
1. Is the given tok/s estimate for the total node throughput, or is it what you can realistically expect to get? Or is it the worst case scenario throughput if everyone starts to use it simultaneously?
2. What if I try to hog all resources of a node by running some large data processing and making multiple queries in parallel? What if I try to resell the access by charging per token?
Edit: sorry if this comment sounds overly critical. I think that pooling money with other developers to collectively rent a server for LLM inference is a really cool idea. I also thought about it, but haven't found a satisfactory answer to my question number 2, so I decided that it is infeasible in practice.
jrandolf 1 days ago [-]
1. It's an average.
2. We have a sophisticated rate limiter.
poly2it 1 days ago [-]
Does it take user time zones into account?
jrandolf 1 days ago [-]
Yes
artificialprint 24 hours ago [-]
It didn't make sense to launch multiple $10 and $40 subscriptions right at the start, because now they are competing with each other.
Also mobile version is a bit broken, but good idea and good luck!
jrandolf 24 hours ago [-]
I'm feeling it, Mr. Krabs.
p_m_c 1 days ago [-]
Do you own the GPUs or are you multiplexing on a 3rd party GPU cloud?
jrandolf 24 hours ago [-]
Multiplexing on a GPU cloud.
singpolyma3 1 days ago [-]
25 t/s is barely usable. Maybe for a background runner
lelanthran 1 days ago [-]
> 25 t/s is barely usable. Maybe for a background runner
That's over 1,000 words per minute if you were typing. If 1,000 words per minute is too slow for your use-case, then perhaps $5/m is just not for you.
I kinda like the idea of paying $5/m for unlimited usage at the specified speed.
It beats a 10x higher speed that hits daily restrictions in about 2 hours, and weekly restrictions in 3 days.
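The typing comparison above can be sanity-checked with the common heuristic that one token is roughly 0.75 English words (that ratio is an assumption, not a spec):

```python
# Sanity-checking the reading-speed comparison. The 0.75 words/token
# figure is a rough heuristic for English text, not an exact constant.
TOK_PER_S = 25
WORDS_PER_TOKEN = 0.75

words_per_minute = TOK_PER_S * WORDS_PER_TOKEN * 60
print(words_per_minute)  # 1125.0 -- i.e. over 1,000 words per minute
```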
singpolyma3 1 days ago [-]
Sure, if it were just a matter of typing. But in practise it means sitting and staring for minutes at nothing happening but a "thinking" indicator until something finally happens.
I mean my local 122b is only 20t/s so for background stuff it can be used for that. But not for anything interactive IME.
lelanthran 23 hours ago [-]
> I mean my local 122b is only 20t/s so for background stuff it can be used for that. But not for anything interactive IME.
What are you running that local 122b on? I mean, this looks attractive to me for $5/m running unlimited at 20t/s-25t/s, but if I could buy hardware to get that running locally, I don't mind doing so.
singpolyma3 21 hours ago [-]
Framework desktop
dreamdayin9 12 hours ago [-]
what is the main moat of your idea? privacy?
otherwise it looks like a less flexible API compared to what chutes.ai or openrouter.ai provide. And they have TEE instances, which are more private.
also, why did you decide on launching V3 instead of some much more exciting models revealed recently, like MiMo-V2-Pro or Arcee's Trinity Large?
Lalabadie 1 days ago [-]
This is the most "Prompted ourselves a Shadcn UI" page I've seen in a while lol
I dig the idea! I'm curious where the costs will land with actual use.
jrandolf 1 days ago [-]
Thanks lol. I actually like Shadcn's style. It's sad that people view it as AI now.
rendaw 14 hours ago [-]
Once you're in a cohort how do you actually use it?
jrandolf 3 hours ago [-]
You get an API key
yoavm 13 hours ago [-]
> Prices start at $5/mo for smaller models.
Is there actually any $5/mo offering? It seems like the cheapest models start at $10.
avereveard 22 hours ago [-]
Interesting. There's always a trickle of low-intensity jobs one can keep running. But GLM's own plan is $30/mo at something like 300 tok/s; I know that one is subsidized, but still.
wavemode 15 hours ago [-]
> nobody is charged until the cohort fills
So then what happens if some people's payment method fails once you do charge?
lelanthran 15 hours ago [-]
> So then what happens if some people's payment method fails once you do charge?
I expect it's a pre-auth, like car rental companies do; a pre-auth gives you a code from the card issuer and an expiry. The issuer will reserve the amount on the cardholder's account, and only perform the transaction to the merchant once the merchant sends a second message with the pre-auth code.
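A toy model of the authorize/hold/capture flow described here (illustrative only; not Stripe's or any issuer's actual API):

```python
class Card:
    """Toy card issuer: funds are held on authorize, charged only on capture."""
    def __init__(self, balance):
        self.balance = balance
        self.holds = {}      # pre-auth code -> amount reserved
        self._next = 1

    def authorize(self, amount):
        """Reserve funds; returns a pre-auth code, or None if insufficient."""
        available = self.balance - sum(self.holds.values())
        if amount > available:
            return None
        code = self._next
        self._next += 1
        self.holds[code] = amount
        return code

    def capture(self, code):
        """Merchant completes the transaction using the pre-auth code."""
        amount = self.holds.pop(code)
        self.balance -= amount
        return amount

    def release(self, code):
        """Hold expires or is cancelled; funds are freed, nothing is charged."""
        self.holds.pop(code)

card = Card(100)
code = card.authorize(40)   # cohort reservation: hold, no charge yet
card.capture(code)          # cohort fills: the charge actually happens
print(card.balance)  # 60
```

If the cohort never fills, the merchant calls `release` (or the hold simply expires) and the cardholder is never charged, which matches the "card saved but not charged" behavior the FAQ describes.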
spuz 1 days ago [-]
It seems crazy to me that the "Join" button does not have a price on it and yet clicking it simply forwards you to a Stripe page again with no price information on it. How am I supposed to know how much I'm about to be charged?
jrandolf 1 days ago [-]
That was an error on our part lol. We'll update with the price.
trick-or-treat 12 hours ago [-]
I see a (poorly) vibe-coded dashboard. I should be seeing a splash page with some marketing copy that draws me in. It's an interesting idea, but a less than half-baked (5% baked?) implementation, and you wasted goodwill by posting it before it was ready.
tapvt 6 hours ago [-]
For my part, the code quality of the next.js dashboard isn't even something I'd evaluate.
I instantly get a quick, functional-appearing view of the offering. I can picture how I might interact with it and what mental gymnastics stand between right now and me pulling out a credit card.
I don't see marketing fluff that brings up more questions than it provides answers. I can also be pretty certain I won't wind up in a sales funnel from hell.
scottcha 1 days ago [-]
Pretty cool idea, but what's the stack behind this? 15-25 tok/s seems a bit low, as the expected state of the art for most providers is around 60 tok/s, and quality of life dramatically improves above that.
IanCal 1 days ago [-]
Can you explain the benefits over something like openrouter?
jrandolf 1 days ago [-]
24/7 LLM for $10/month.
johndough 23 hours ago [-]
Isn't this a bad deal? Or is there an error in my math?
For $40, I'd get 20 tok/s * 2.6M seconds per month = 52M tokens of DeepSeek v3.2 per month if I run it 24/7, which is not realistic for most workloads.
On OpenRouter [1], $40 buys 105M tokens from the same model, which is more than 52M tokens, and I can freely choose when to use them.
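The back-of-envelope comparison in this comment is easy to reproduce (the 20 tok/s and $40/mo figures come from the thread; the 105M-token OpenRouter figure is the commenter's number, taken at face value):

```python
# Reproducing the comment's comparison. All input figures are quoted
# from the thread, not independently verified.
SECONDS_PER_MONTH = 30 * 24 * 3600   # ~2.6M seconds

subscription_tokens = 20 * SECONDS_PER_MONTH   # 24/7 at the average rate
openrouter_tokens = 105_000_000                # commenter's figure for $40

print(subscription_tokens)                     # 51840000 -- ~52M, matching the comment
print(openrouter_tokens > subscription_tokens) # True
```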
20 tok/s is an average. It can be more, it can be less. If you are running off-peak I'm sure you'd get some crazy number.
KMnO4 6 hours ago [-]
That doesn’t matter when you have the average. Even if you are somehow able to get 10000tok/s during off peak times, by virtue of how averages work, you’re still only getting 52M tokens per month (as calculated above).
gravypod 17 hours ago [-]
Why wouldn't developers just do llm arbitrage against openrouter if it is a better deal?
victorbjorklund 13 hours ago [-]
For the same reason people don’t do server arbitrage because Hetzner is cheaper than AWS.
jrandolf 16 hours ago [-]
The problem is different. OpenRouter is a router to LLMs. It doesn't solve GPU underutilization.
gravypod 15 hours ago [-]
What I am saying is if your system lets me pay $x/token and open router lets me pay $y/token if x<y then someone could make money just by providing those tokens through the open router API. That would either drive up demand for your systems increasing costs or drive up supply on open router decreasing costs. Eventually the costs would converge, no?
23 hours ago [-]
spuz 1 days ago [-]
Is this not a more restricted version of OpenRouter? With OpenRouter you pay for credits that can be used to run any commercial or open-source model and you only pay for what you use.
jrandolf 1 days ago [-]
OpenRouter is a little different. We are trying to experiment with maximizing a single GPU cluster.
MuffinFlavored 24 hours ago [-]
> Running DeepSeek V3 (685B) requires 8×H100 GPUs which is about $14k/month. Most developers only need 15-25 tok/s.
> deepseek-v3.2-685b, $40/mo/slot for ~20 tok/s, 465 slots total
> 465 users × 20 tok/s = 9,300 tok/s needed
> The node peaks at ~3,000 tok/s total. So at full capacity they can really only serve:
> 3,000 ÷ 20 = 150 concurrent users at 20 tok/s
> That's only 32% of the cohort being active simultaneously.
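The oversubscription arithmetic quoted above checks out, under the quoted assumptions (465 slots, 20 tok/s per user, ~3,000 tok/s node peak):

```python
# Verifying the quoted oversubscription math; all inputs are the
# figures quoted in the comment above.
slots = 465
per_user_tok_s = 20
node_peak_tok_s = 3000

demand = slots * per_user_tok_s                               # tok/s if all are active
concurrent_at_full_speed = node_peak_tok_s // per_user_tok_s  # users served at 20 tok/s
active_fraction = concurrent_at_full_speed / slots

print(demand, concurrent_at_full_speed, round(active_fraction, 2))
# 9300 150 0.32
```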
artificialprint 24 hours ago [-]
People work 8 hours a day presumably, I guess they are banking on this idea
ycui1986 17 hours ago [-]
Only works if the users are evenly distributed around the globe (which is likely more or less the case). If the users are concentrated in one country, the token rate will be terrible.
RIMR 1 days ago [-]
I read the FAQ, and I can't imagine this is going to work the way you want it to. It fundamentally doesn't make sense as a business model.
I can sign up for a cohort today, but there's not even a hint of how long it will take the cohort to fill up. The most subscribed cohort is only at 42% (and dropping), so maybe days to weeks? That's a long time to wait if you have a use case to satisfy.
And then the cohort expires, and I have to sign up for another one and play the waiting game again? Nobody wants that level of unreliability.
Also, don't say "15-25 tok/s". That is a min-max figure, but your FAQ says that this is actually a maximum. It makes no sense to state a maximum as a range, and you state no minimum, so I can only assume that it is 0 tok/s. If all users in the cohort use it simultaneously, the best they're getting is something like 1.5 tok/s (probably less), which is abysmal.
You mention "optimization", but I have no idea what that means. It certainly doesn't mean imposing token limits, because your FAQ says that won't happen. If more than 25 users are using the cohort simultaneously, it is a physical impossibility to improve performance to the levels you advertise without sacrificing something else, like switching to a smaller model, which would essentially be fraud, or adding more GPUs which will bankrupt you at these margins. With 465 users per cohort, a large chunk of whom will be using tools like OpenClaw, nobody will ever see the performance you are offering.
The issue here is you are trying to offer affordable AI GPU nodes without operating at a loss. The entire AI industry is operating at a loss right now because of how expensive this all is. This strategy literally won't work right now unless you start courting VCs to invest tens to hundreds of millions of dollars so you can get this off the ground by operating at a loss until hopefully you turn a profit at some point in the future, but at that point developers will probably be able to run these models at home without your help.
jrandolf 23 hours ago [-]
Going on ChatGPT.com and using their AI for 24 hours doesn't mean you are actually using their LLM for 24 hours. It's only live for as long as the output is being generated. You reading, waiting for tool calls, etc. don't count toward concurrency. Factor in time-zones, lunch times, etc...it's more likely that we'd have an underutilization problem.
For filling up the cohorts, I agree and we're launching for a week to gather feedback.
mogili1 1 days ago [-]
Can you show a comparison of costs if we went with per-token pricing?
bluerooibos 21 hours ago [-]
So shared hosting for LLMs?
moralestapia 1 days ago [-]
This is great, thanks!
I personally would like something like this but with "regular" GPU access. Some people still use them for something other than LLMs ^^.
latchkey 3 hours ago [-]
hotaisle.xyz has amd mi300x VMs for $1.99/gpu/hr. on-demand, billed by the minute.
(i'm the ceo)
jrandolf 24 hours ago [-]
There is vast.ai!
moralestapia 23 hours ago [-]
Wow!
I recall hearing about them years ago.
Good to see they're thriving!
peter_d_sherman 1 days ago [-]
What a brilliant idea!
Split a "it needs to run in a datacenter because its hardware requirements are so large" AI/LLM across multiple people who each want shared access to that particular model.
Sort of like the Real Estate equivalent of subletting, or splitting a larger space into smaller spaces and subletting each one...
Or, like the Web Host equivalent of splitting a single server into multiple virtual machines for shared hosting by multiple other parties, or what-have-you...
I could definitely see marketplaces similar to this, popping up in the future!
It seems like it should make AI cheaper for everyone... that is, "democratize AI"... in a "more/better/faster/cheaper" way than AI has been democratized to date...
Anyway, it's a brilliant idea!
Wishing you a lot of luck with this endeavor!
esafak 1 days ago [-]
Like vast.ai and TensorDock, and presumably others.
adamsilvacons 6 hours ago [-]
[dead]
aplomb1026 22 hours ago [-]
[dead]
maxbeech 23 hours ago [-]
[dead]
sacrelege 1 days ago [-]
[dead]
aritzdf 1 days ago [-]
[flagged]
trvz 23 hours ago [-]
[flagged]
copperx 23 hours ago [-]
There's a big difference between non-compliant, illegal, and criminal.
calvinsun1102 8 hours ago [-]
“Unlimited tokens” is doing a lot of heavy lifting here.
This feels less like a pricing breakthrough and more like shifting the abstraction down to GPU sharing — which most developers probably don’t want to think about.
Curious how usable this actually feels under contention.
aerhardt 8 hours ago [-]
This comment reeks of AI.
Out of the other three comments in your entire account's history, two are pretty structurally identical: quote hook + tangentially related question.
What is the ultimate play for all these AI accounts? Warming them up for future astroturfing and marketing? Manipulating upvotes?
[1]: https://openrouter.ai/deepseek/deepseek-v3.2