Hacker News | efromvt's comments

Hmm, yeah, it depends on your definition of insider. If you assume all raw information is public-ish, a market can reward those who can synthesize/operate on that knowledge to predict better. (Counting the cars in the lot, etc.: there is friction to this discovery, and after discovery the knowledge can be communicated to others through the market to profit off the initial cost of discovering it.) There is symmetric competition to some degree. If you have truly non-public knowledge ("I'm going to say something to tank the stock on this date"), then you are purely extracting value from others because you will always win; they will never have that info, and the incentive for anyone else to participate in price discovery goes away.

Still working on my urban tree visualization! Spent some time polishing the ingest pipeline to make it easier to add new cities, added a genus/species-level view to aggregate across cities, and added some basic imagery so I can see what the species actually look like. Thinking about adding an end-user-facing ingest pipeline so I can add trees I like when I see them on my walks. Probably need a performance pass too, since I'm scaling up the volume quite a bit. (Rough sketch of the species roll-up below.)

https://greenmtnboy.github.io/sf_tree_reporting

Posted in last thread when it was SF only: https://qht.co/item?id=47303111#47304199
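For the curious, here's roughly what the genus/species roll-up across cities looks like in spirit. This is just a sketch under my own assumptions: the file layout, column names, and use of pandas are illustrative, not the actual pipeline.

  import glob

  import pandas as pd

  # Assumed: each city's ingest step exports a normalized CSV
  # with columns like city, genus, species, lat, lon.
  frames = [pd.read_csv(path) for path in glob.glob("data/normalized/*.csv")]
  trees = pd.concat(frames, ignore_index=True)

  # Roll up to genus/species across all cities: total tree count plus
  # the number of distinct cities each taxon shows up in.
  by_species = (
      trees.groupby(["genus", "species"])
      .agg(tree_count=("city", "size"), city_count=("city", "nunique"))
      .reset_index()
      .sort_values("tree_count", ascending=False)
  )

  print(by_species.head(20))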


I wonder why this doesn't make us frustrated with the grid rather than with data centers. Delays on interconnects for renewables and offshore wind both seem pretty self-inflicted.

I keep wondering this too. It feels like such a self-fulfilling prophecy: don't build new power plants, don't build nuclear, then get mad when the grid can't keep up. It's defeatist and anti-growth-of-any-sort viewed through a different lens.

To be fair, for decades electricity consumption has been mostly flat. There has not been a need to massively ramp up new generation or distribution. It is only in the last few years that mega-consumers have come online, requiring new development at a frantic pace.

Not true. Electric vehicles have been threatening to collapse residential grids for quite a few years now. The US hasn't been making the necessary infrastructure investments for a long time. See PG&E for example.

For something the size of the electrical grid, you can find regional variations, but the national trend is quite clear. One report from a quick search [0]:

  Consumption Growth Acceleration: After 14 years of near-stagnant growth (0.1% annually from 2008-2021), US electricity consumption surged 3.0% in 2024, driven by data centers, electric vehicles, and economic recovery, signaling a new era of demand growth.
[0] https://solartechonline.com/blog/how-much-electricity-does-u...

TBF, multiple things can be true: a period of stagnation, a failure to perform sufficient upkeep, and a failure to keep up with new demand.

I mean, one also has to consider the current political _and_ geopolitical landscape when it comes to energy needs. Given the current outlook, and the environment even states are now operating in, with federal overreach shutting down offshore wind farm efforts and more, it's not hard to do the calculus that lands you squarely in this reality:

- most grids can't sustain the AI energy demands at the moment

- literally no one could tell you whether scaling up with clean/renewable energy sources to meet demand is even going to get greenlit right now. It is straight-up gambling to try to give a black-and-white answer to it.

So to a large degree I absolutely understand why a state might pump the brakes. This is increased pressure on a limited resource that is squeezing _the people's_ economic circumstances. Pump the brakes, because no one is talking about how to greenlight it and scale up the right way so it doesn't result in even more financial uncertainty for people who are already financially uncertain. It's absolutely not something I would want to give the go-ahead on without guarantees that renewable energy is going to be the backbone of the increased energy demand.


Also, power is not at all the limited resource that many top-voted posts on HN think it to be. In the long term, increased demand decreases the price of power rather than increasing it.

And in any case a ban doesn't make sense. Instead, they could charge data centers differently for grid electricity usage and make them pay for grid expansion when they start building.


Because we have decided that electrical generation tech ended once China became better at it.

Instead of dealing with that like adults, we are throwing a fit.


I've been curious about this too - there's an obvious performance overhead to having an internal/external channel, but it might make training away this class of problems easier.

Yeah, the marginal cost of discovery going towards 0 (I mean, not there yet, but directionally) is the problem; it doesn't really matter that the agent isn't equivalent to a human's artisanal, hand-crafted bug discovery if it can make it up on volume. Mass production of exploits!

Just like OpenAI's original moat, I don't think that's particularly durable. I've already seen plenty of people swing back to preferring codex, and it'll probably swap again with the next model drop. Openclaw is potentially better integrated with ChatGPT at this point because of the explicit subscription support.

It's pretty easy to get determinism with a simple harness for a well-defined set of tasks with the recent models that are post-trained for tool use. CC probably gets some bloat because it tries to do a LOT more; and some bloat because it's grown organically.

>It's pretty easy to get determinism with a simple harness for a well-defined set of tasks with the recent models that are post-trained for tool use.

Do you have a source? Claude Code is the only genetic system that seems to really work well enough to be useful, and it’s equipped with an absolutely absurd amount of testing and redundancy to make it useful.


Should I read that as 'generic system'? Most hard data is with company-internal evals, but for the well-defined tasks externally it's been pretty easy to spin up a basic tool loop and validate. Did you have something in mind? [I don't necessarily count 'coding' as well-defined in the generic sense, so I suspect we're coming at this from different scopes re: the definition of 'LLMs somewhat deterministic and useful as tools']
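To make "basic tool loop" concrete, here's the shape of harness I have in mind. This is only a sketch: call_model() and the tool registry are placeholders I've made up, not any specific vendor API, and a real harness would add error handling, logging, and output validation against the task definition.

  import json

  # Stub tool registry; real tools would be whatever the task needs.
  TOOLS = {
      "list_files": lambda args: json.dumps(["a.txt", "b.txt"]),
  }

  def call_model(messages):
      """Placeholder: call an LLM post-trained for tool use and return
      either {'final': str} or {'tool': name, 'args': dict}."""
      raise NotImplementedError

  def run_task(task, max_steps=10):
      messages = [{"role": "user", "content": task}]
      for _ in range(max_steps):
          reply = call_model(messages)
          if "final" in reply:
              return reply["final"]
          # Execute the requested tool and feed the observation back so
          # the next model call sees the result.
          result = TOOLS[reply["tool"]](reply.get("args", {}))
          messages.append({"role": "tool", "name": reply["tool"], "content": result})
      raise RuntimeError("task did not converge within max_steps")

The near-determinism then comes from constraining the tool set and validating outputs for a well-defined task, not from the model itself.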

I haven't heard this benefit for mentors clearly articulated before (probably just missed it), but definitely felt it - I guess it's a deeper version of how writing/other communication forces clarity/organization of thoughts because mentorship conversations are so focused on extracting the why as well as the what.


I think this is exactly the point though (maybe more the point of the link than of this comment) - a sufficiently good product by all external quality metrics is fine even if the code is written on one line in a giant file or some other monstrosity. As long as one black box behaves the same way as another in all dimensions, they are competitive. You can argue that internal details often point to an external deficiency, but if they don't, then there is no competitive pressure.


I've chosen to embrace the silver lining: there is now business backing to prioritize all the devx/documentation work, because the "value" is easier to quantify now that LLM sessions provide a much larger sample size than inconsistent new-hire onboarding (which was also a one-time process, instead of per session).

I do think people are going way overboard with markdown though, and that'll be the new documentation debt. It needs to be relatively high-level with pointers, not duplicated details; agents can parse code at scale much faster than humans can.

