Hacker Times | new | past | comments | ask | show | jobs | submit | esafak's comments

I assume it was to make up for the token overconsumption bug.


Please tell me you're a Microsoft customer.

You can declare tools and tasks with http://mise.jdx.dev/
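As a sketch of what that looks like (tool names, versions, and the task are all illustrative, not from the original comment), a `mise.toml` declaring both tools and a task might be:

```toml
# mise.toml — hypothetical example; tool names/versions are illustrative
[tools]
node = "22"
python = "3.12"

[tasks.test]
description = "Run the test suite"
run = "pytest"
```

Running `mise install` then provisions the declared tools, and `mise run test` executes the task.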

I'm surprised this is so far down; it has changed so much of my setup. I've swapped from pretty much all other managers to this and it's been a life-changer.

He's saying it's bulky junk that's best torched.


link?


I just want the simple feature of PRs updating when the target branch changes. For example, say I have two tickets: T-100 and T-101. Both are targeting main, but T-101 builds on top of T-100. I put up a PR for the T-100 branch against main, and put up a PR for the T-101 branch against main.

The T-101 PR can't really be reviewed yet, since you are looking at changes from both T-100 and T-101 (because T-101 was based on T-100).

Ideally, after T-100 is reviewed and merged, the T-101 PR would automatically update to show only the T-101 changes. But it doesn't. You have to manually rebase or merge main and push changes to the branch to get it to update. It would be great if GitHub handled this automatically.
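For concreteness, here is a sketch of the manual step described above, reproduced in a throwaway repo (branch and file names `t100`/`t101` are illustrative stand-ins for the T-100/T-101 branches):

```shell
# Reproduce the stacked-PR situation in a scratch repo, then do the
# manual rebase that GitHub won't do automatically.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git config user.email you@example.com
git config user.name you

echo base > base.txt && git add . && git commit -qm "initial"

git checkout -qb t100                 # branch for T-100
echo one > t100.txt && git add . && git commit -qm "T-100 work"

git checkout -qb t101                 # T-101, stacked on top of T-100
echo two > t101.txt && git add . && git commit -qm "T-101 work"

# T-100 gets reviewed and merged into main...
git checkout -q main
git merge -q --no-ff t100 -m "Merge T-100"

# ...but the T-101 PR still diffs both commits against main until you
# manually replay only the T-101 commits onto the updated main:
git rebase -q --onto main t100 t101
git log --oneline main..t101          # only the T-101 commit remains
```

After the `rebase --onto`, the branch's diff against `main` contains only the T-101 work, which is exactly the update one wishes GitHub performed automatically when the base branch moves.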


Locked... I'm like 40% sure they'll mess this up in some way that makes it completely useless.

https://x.com/jaredpalmer/status/2019817235163074881

(this was a follow-up to the initial announcement months ago, also made via X)


Ah nice! I hope this actually works across repos rather than just being nicer UI for the existing functionality.

That's only because current models don't saturate people's needs. Once they are fast and smart enough, people will pick cheaper ones.

Does anyone have experience with Alibaba's coding plan? Not that I'm very tempted at $50/month...

A bit off-topic but I’m on the legacy Lite plan (now discontinued), and it’s more than enough for hobby projects. The main draw is the generous request-based quota (18k requests/month) rather than a token-based one.

This means a 100k token request counts the same as a 100-token one. I’ve made about 8000 requests in the last two weeks, averaging around 80k tokens per request. It feels like they’re subsidizing this just to gather data on agentic workflows.

On the downside, the generation speed is mediocre (15–30 tokens/s for GLM-5), and I've seen the model glitch or produce broken output about 10 times out of those 8k requests.
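Back-of-the-envelope, the figures above imply a large token volume for a request-based quota (values are the approximations stated, not exact billing data):

```shell
# Rough arithmetic from the figures in the comment (all approximate).
requests=8000        # requests made in the last two weeks
avg_tokens=80000     # average tokens per request
echo $(( requests * avg_tokens ))   # ~640 million tokens in two weeks
```

Under a token-based quota that volume would be substantial, which is what makes the flat per-request accounting the main draw here.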


I don't get the value proposition either; your landing page is underdeveloped. Tracking the query history is trivial. Offloading computation could be done with Polars Cloud or MotherDuck. Can you expand on the "manage datasets" part?
