Hacker News | new | past | comments | ask | show | jobs | submit | realfun's comments

I think it would take quite a long while to achieve human-level anti-entropy in agentic systems.

Complex systems require tons of iterations, and the confidence level of each iteration drops unless there is a good recalibration system between iterations. Compound a trivial degradation over enough iterations and it quickly turns into chaos.

A typical collaboration across a group of people on a meaningfully complex project requires tons of anti-entropy to course-correct when it goes off the rails. Those corrections are not in docs: some are experience (been there, done that), some are common sense, some are collective intelligence.
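The compounding claim can be sketched numerically. Assuming, for illustration, that each iteration preserves only 99% fidelity, quality decays geometrically with the number of iterations:

```python
# If each iteration preserves only 99% fidelity, the surviving
# fidelity after n iterations is 0.99 ** n.
per_step = 0.99
for n in (10, 100, 500):
    print(n, round(per_step ** n, 4))
```

After 100 iterations only about 37% of the original fidelity survives, and after 500 essentially none; hence the need for recalibration between iterations.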


Agreed. It does need memorization. Anyone who doesn't think so can try to add and multiply in hexadecimal: what is 0xA + 0xC, and what is 0x8 * 0x6? It becomes quite obvious that memorization is required when you try any radix other than decimal (which is memorized already).
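For reference, the two examples worked out in Python's hex notation:

```python
# 0xA + 0xC: 10 + 12 = 22, which is 0x16 in hexadecimal
print(hex(0xA + 0xC))   # 0x16

# 0x8 * 0x6: 8 * 6 = 48, which is 0x30 in hexadecimal
print(hex(0x8 * 0x6))   # 0x30
```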


It might be quite obvious that memorization is convenient, but 'required' means "you can't do it any other way" and that's obviously incorrect.


IDK why we'd assume that there's a different cognitive process for learning mathematics with radix 10 than with radix 16?

Mathematics_education#Methods https://en.wikipedia.org/wiki/Mathematics_education#Methods :

> [...] Rote learning: the teaching of mathematical results, definitions and concepts by repetition and memorisation typically without meaning or supported by mathematical reasoning. A derisory term is drill and kill. In traditional education, rote learning is used to teach multiplication tables, definitions, formulas, and other aspects of mathematics.


I think they mean that it is very useful to “cache” these primitive results, but it is absolutely possible to regenerate them if one is missing or the like.
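A minimal sketch of that cache-plus-regenerate idea, with Python's lru_cache standing in for the memorized table and repeated addition as the always-available (but slow) fallback; the function name is just for illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def times(a, b):
    # Regenerate the product from repeated addition: slow, but it
    # works even when the "memorized" result is missing.
    return sum(a for _ in range(b))

print(times(7, 8))   # computed once via repeated addition: 56
print(times(7, 8))   # second call is served from the cache
```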


It should be understood first, and memorized later.

If you do not memorize small things that you frequently need, it will slow you down.

But if you keep memorizing things without understanding, at some point your memory will return an incorrect result, and you will be unable to notice it.


I don't plan to ever need to do math in hexadecimal without a calculator. As far as decimal goes, I actually did get away with solving the multiplication problems on the fly... so no, memorization isn't necessary.


Roaring bitmaps work surprisingly better than Bloom filters in Elasticsearch as a skip-list plugin, and completely beat the built-in skip-list function.
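For readers unfamiliar with the format, here is a toy sketch of the roaring layout (not the actual ES plugin or the real RoaringBitmap library): a 32-bit value is split into a high-16-bit container key and a low-16-bit payload, and each container stores its payloads as a sorted array while sparse, switching to a bitmap once dense:

```python
from array import array
from bisect import bisect_left

ARRAY_LIMIT = 4096  # roaring promotes array -> bitmap past this size

class RoaringLite:
    """Toy roaring bitmap: containers keyed by the high 16 bits,
    holding either a sorted uint16 array or a 2**16-bit bitmap."""

    def __init__(self):
        self.containers = {}   # high16 -> array('H') or bytearray(8192)

    def add(self, v):
        hi, lo = v >> 16, v & 0xFFFF
        c = self.containers.setdefault(hi, array("H"))
        if isinstance(c, array):
            i = bisect_left(c, lo)
            if i == len(c) or c[i] != lo:
                c.insert(i, lo)
                if len(c) > ARRAY_LIMIT:           # promote to bitmap
                    bits = bytearray(8192)
                    for x in c:
                        bits[x >> 3] |= 1 << (x & 7)
                    self.containers[hi] = bits
        else:
            c[lo >> 3] |= 1 << (lo & 7)

    def __contains__(self, v):
        hi, lo = v >> 16, v & 0xFFFF
        c = self.containers.get(hi)
        if c is None:
            return False
        if isinstance(c, array):
            i = bisect_left(c, lo)
            return i < len(c) and c[i] == lo
        return bool(c[lo >> 3] & (1 << (lo & 7)))
```

The hybrid layout is what makes roaring both compact and fast to intersect, which is exactly what a skip-list-style filter needs.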


Is there a reliable way to tell the airplane type before booking?



Thanks for sharing, EGreg.

Elasticsearch has geo-indexing as well (based on geohash internally), and by default it does id hashing similar to what you said (murmurhash3); we actually leverage that for location-based searches.

The challenge addressed in the blog is not how to search or address documents (as said, Elasticsearch already handles that); it is how to distribute the load so computation happens only on a limited set of nodes, and how to reduce the index size so it can be more performant.
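For concreteness, the geohash scheme that underlies this kind of geo-indexing is just interleaved binary search over longitude and latitude, base32-encoded. A textbook encoder (not Elasticsearch's internals), checked against the classic test point (57.64911, 10.40744) -> "u4pru...":

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet

def geohash(lat, lon, precision=11):
    """Encode lat/lon by alternately bisecting the longitude and
    latitude ranges, packing 5 bits per base32 character."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits, nbits, even, out = 0, 0, True, []
    while len(out) < precision:
        if even:   # longitude bit
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits, lon_lo = (bits << 1) | 1, mid
            else:
                bits, lon_hi = bits << 1, mid
        else:      # latitude bit
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits, lat_lo = (bits << 1) | 1, mid
            else:
                bits, lat_hi = bits << 1, mid
        even = not even
        nbits += 1
        if nbits == 5:
            out.append(BASE32[bits])
            bits, nbits = 0, 0
    return "".join(out)

print(geohash(57.64911, 10.40744, 5))   # u4pru
```

Nearby points share long geohash prefixes, which is what makes it usable as an index key.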


Ah, the goal makes sense. I would suggest that it’s not so bad to have a controller node fan out and fan in queries, as long as the database can handle many concurrent queries. Essentially you’re distributing the work evenly across nodes, but you don’t have affinity for a particular node. Yes, there is more latency (it is as slow as the slowest connection), but it is endlessly scalable. But I am sure I missed some benefits of localizing calculations to only a node or two.

In the scheme above, by the way, it DOES localize searches to one shard. Essentially all relations to a stream are on the same shard as the stream, and each center+radius has one associated stream, so the search takes place on one shard.
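The fan-out/fan-in pattern can be sketched with asyncio; `query_shard` here is a hypothetical stand-in for a network call to one node, with artificial delays to show that overall latency is the max, not the sum, of per-shard latencies:

```python
import asyncio

async def query_shard(shard_id, query):
    # Hypothetical shard query; a real one would hit a database node.
    await asyncio.sleep(0.01 * shard_id)   # simulate uneven latency
    return [f"shard{shard_id}:{query}"]

async def fan_out(query, num_shards):
    # Fan out: issue all shard queries concurrently.
    # gather() returns when the slowest shard answers.
    results = await asyncio.gather(
        *(query_shard(s, query) for s in range(num_shards)))
    # Fan in: merge the per-shard hit lists.
    return [hit for shard_hits in results for hit in shard_hits]

hits = asyncio.run(fan_out("nearby", 4))
print(hits)
```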


Not just near the poles. Geohash distortion is already very obvious when you compare the UK with countries in South Asia.
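The distortion is easy to quantify: geohash cells are fixed in degrees, so their east-west extent shrinks with the cosine of latitude. A back-of-the-envelope sketch for 5-character cells on a spherical Earth (place names illustrative):

```python
import math

# A 5-character geohash encodes 25 bits: 13 longitude, 12 latitude,
# so each cell spans 360 / 2**13 degrees of longitude.
cell_lon_deg = 360 / 2**13   # ~0.0439 degrees
km_per_deg = 111.32          # km per degree of longitude at the equator

for place, lat in [("Singapore", 1.35), ("London", 51.5)]:
    width_km = cell_lon_deg * km_per_deg * math.cos(math.radians(lat))
    print(place, round(width_km, 2), "km wide")
```

The same cell is roughly 4.9 km wide near the equator but only about 3 km wide at London's latitude, so equal-looking geohash cells carry very different physical areas.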


The same strategy can be applied regardless of ES version. ES6 does have index sorting and other optimizations, but they are not as good as pre-sharding for location-based services, for cases like Tinder.


Geo-indexing is different from geo-sharding. Search indexes normally have geo-indexing, but it is still one big index handling every single search. The alternative (geohash) is mentioned in the article; it has high distortion around the Earth's poles.


Thanks for the suggestion, and there is definitely room for optimization.

In the original design we did consider an approach similar to the "heatmap approach" you mentioned; it would reduce shard movement for people who commute within the city.

Later we figured that simply sharding along the Hilbert curve already gives us what we need, and that shard moves are unavoidable; we will explain more details about shard moves in a future blog post.
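Sharding along a Hilbert curve can be sketched with the classic xy2d mapping (adapted from the well-known pseudocode); the grid size, shard count, and `shard_for` helper are illustrative, not the blog's actual implementation:

```python
def rot(n, x, y, rx, ry):
    # Rotate/flip a quadrant so the sub-curve has the right orientation.
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Distance along the Hilbert curve of cell (x, y) on an
    n x n grid (n must be a power of two)."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = rot(n, x, y, rx, ry)
        s //= 2
    return d

def shard_for(lat, lon, n=1024, num_shards=16):
    # Quantize lat/lon onto the grid, then cut the 1-D Hilbert index
    # into contiguous ranges; nearby points usually land on one shard.
    x = int((lon + 180.0) / 360.0 * (n - 1))
    y = int((lat + 90.0) / 180.0 * (n - 1))
    return xy2d(n, x, y) * num_shards // (n * n)
```

Because the Hilbert curve preserves locality far better than row-major or geohash Z-order, cutting its 1-D index into ranges gives geo-shards with low distortion everywhere, including near the poles.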

