
No, it's not. Otherwise this part doesn't make sense:

> in fact, they actually compound the problem by encouraging significantly more usage

because if, setting training costs aside, running the model earns more than it costs, then significantly more usage helps the problem rather than compounding it.

More usage compounds the problem only if inference is unprofitable.

(the article briefly mentions training but that's later).
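To make the arithmetic concrete, here is a minimal sketch with made-up numbers (the training cost, price, and inference cost are all hypothetical), assuming each request is priced above its marginal inference cost. Because training is a one-time fixed cost, every additional request narrows the gap rather than widening it:

    # All figures are hypothetical, purely to illustrate the fixed-vs-variable split.
    training_cost = 100_000_000        # one-time cost; does not grow with usage
    price_per_request = 0.02           # revenue per request
    inference_cost_per_request = 0.01  # marginal cost to serve one request

    def total_profit(requests: int) -> float:
        # Training is a fixed cost; each request contributes a positive margin.
        return requests * (price_per_request - inference_cost_per_request) - training_cost

    print(total_profit(1_000_000_000))   # -90,000,000: still short of break-even
    print(total_profit(20_000_000_000))  # +100,000,000: more usage eventually covers training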


It made sense to me on the understanding that you can have a unit-profitable API but still lose money on loss-leading offerings like Code subscriptions. Those losses are amplified by encouraging usage. Perhaps I'm mistaken.

Again, that is a statement about inference time costs, not training costs.

> More usage compounds the problem only if inference is unprofitable.

No... only if you're charging full boat for that inference. As I said above, loss-leading caps are in play here. Obviously, encouraging people to use more of basically anything that is an all-you-can-eat subscription leads to less profitability. Not sure if we're talking past each other or what.
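A minimal sketch of that point, again with hypothetical numbers (the flat fee and per-request cost are made up): under an all-you-can-eat subscription, revenue per user is capped while inference cost scales with usage, so heavier usage directly erodes the margin:

    # Hypothetical figures for a flat-rate, all-you-can-eat subscription.
    subscription_price = 20.0          # flat monthly fee per user
    inference_cost_per_request = 0.01  # marginal cost to serve one request

    def monthly_profit_per_user(requests: int) -> float:
        # Revenue per user is fixed; serving cost grows with how much they use.
        return subscription_price - requests * inference_cost_per_request

    print(monthly_profit_per_user(500))    # 15.0: light user is profitable
    print(monthly_profit_per_user(5_000))  # -30.0: heavy user is a loss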


We are kind of talking past each other. I'm saying something simpler. This all goes back to the original point I made in reference to your reply to johnfn:

>> The post is factoring in training costs, not just inference.

It is not, because training costs are irrelevant here. Training costs do not cause your costs to go up as you accumulate more users.

None of the calculations we're talking about include training costs. You're saying that inference is unprofitable (at least given the subscription plans). I'm simply pointing out that we are talking about inference, not training, as you stated earlier. You are (very accurately) not talking at all about training costs.



