
> and that's why heavily regulated industries like healthcare, education, and transportation have seen basically no innovation in 50 years.

Not to get distracted, but aren't these three all incredible examples of innovation over time? Healthcare alone is significantly better than it was 50 years ago and it's not really close. 50 years ago, this hip new treatment called electroshock therapy was being used to "treat" being gay. You were also within touching distance of getting a lobotomy for depression or anything else your husband thought was a problem.


The rates of depression in the US are at an all-time high [1]. The primary theory behind the cause of depression and the mechanism of most antidepressants has been abandoned [2]. Not treating homosexuality as a disease isn't an innovation, it's a cultural change.

You could maybe argue mRNA vaccines or semaglutide are big innovations, I think we've made a ton of progress against HIV, and it seems like we've made progress against cancer, but when you factor in how much government money goes into this research and compare it against the advancements we've seen in computational technology it's a lot less impressive. You can buy a Raspberry Pi for like $50 today that outperforms every computer made 50 years ago, whereas the cost of most medical imaging has actually increased [3]. Likewise, the inflation-adjusted cost of college degrees and of building new rail lines or really any infrastructure has increased precipitously since 1970.

1. https://www.cdc.gov/nchs/pressroom/releases/20250416.html

2. https://www.ucl.ac.uk/news/2022/jul/no-evidence-depression-c...

3. https://www.jacr.org/article/S1546-1440%2822%2900710-4/fullt...


Returning `impl Trait` is useful when you can't name the type you're trying to return (e.g. a closure) or when the type is annoyingly long (e.g. a long iterator chain), and it avoids the heap allocation of returning a `Box<dyn Trait>`.
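A minimal sketch of that first case (function names are made up for illustration): a closure's type can't be written out by the programmer, so `impl Trait` is the only way to return it unboxed.

```rust
// The closure's concrete type is compiler-generated and unnameable,
// so `impl Fn` is the only way to return it without boxing.
fn make_adder(n: i32) -> impl Fn(i32) -> i32 {
    move |x| x + n
}

// The heap-allocating alternative: type-erased behind a fat pointer.
fn make_adder_boxed(n: i32) -> Box<dyn Fn(i32) -> i32> {
    Box::new(move |x| x + n)
}

fn main() {
    let add5 = make_adder(5);
    assert_eq!(add5(2), 7);
    let add5_boxed = make_adder_boxed(5);
    assert_eq!(add5_boxed(2), 7);
}
```

The `impl Trait` version is statically dispatched and allocation-free; the boxed version trades that for a nameable return type.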

Async/await is just fundamental to making efficient programs, I'm not sure what else to mention here. Reading a file from disk, waiting for network I/O, etc. are all catastrophically slow relative to CPU time, and having a mechanism to keep a thread doing other useful work is important.

Actively writing code for the others you mentioned generally isn't required in the average program (e.g. you don't need to create your own proc macros, but it can help cut down boilerplate). To be fair though, I'm not sure how someone would know that if they weren't already used to the features. I imagine it must be what I feel like when I see probably average modern C++ and go "wtf is going on here"


> Reading a file from disk, waiting for network I/O, etc. are all catastrophically slow relative to CPU time, and having a mechanism to keep a thread doing other useful work is important.

Curious if you have benchmarks of "catastrophically slow".

Also, on Linux, the mainstream implementation translates async calls into blocking logic on a thread pool at the kernel level anyway.


`impl Trait` is just an enabler for bad code that explodes compile times, imo. I've never seen a piece of code that really needs it.

I exclusively wrote Rust for many years, so I do understand most of the features fairly deeply. But I don't think it's worth it in hindsight.


Could you elaborate on that error handling part? To me, Rust is the only sane language I've worked with when it comes to error propagation, in that functions must explicitly state what they can return, so you don't get some bizarre runtime error thrown because the data was invalid 15 layers deeper.

I don't know what quotemstr was specifically talking about, but here's my own take.

The ideal error handling is inferred algebraic effects like in Koka[1]. This allows you to add a call to an error-throwing function 15 layers down the stack and it's automatically propagated into the type signatures of all functions up the stack (and you can see the inferred effects with a language server or other tooling, similar to Rust's inferred types).

Consider the following Rust functions:

    fn f1() -> Result<(), E1> {...}
    fn f2() -> Result<(), E2> {...}
    fn f3() -> Result<(), E3> {...}
    fn f4() -> Result<(), E4> { f1()?; f2()?; Ok(()) }
    fn f5() -> Result<(), E5> { f1()?; f3()?; Ok(()) }
    fn f6() -> Result<(), E6> { f4()?; f5()?; Ok(()) }
Now, how do you define E4, E5 and E6? The "correct" way is to use sum types, i.e., `enum E4 {E1(E1), E2(E2)}`, `enum E5 {E1(E1), E3(E3)}` and `enum E6 {E1(E1), E2(E2), E3(E3)}` with the appropriate From traits. The problem is that this involves a ton of boilerplate even with thiserror handling some stuff like the From traits.
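Spelled out by hand for just E4 (with E1/E2 reduced to unit structs for brevity), the boilerplate looks something like this:

```rust
#[derive(Debug)]
struct E1;
#[derive(Debug)]
struct E2;

// The "correct" sum type for a function that can fail with E1 or E2,
// plus the From impls that the `?` operator relies on.
#[derive(Debug)]
enum E4 {
    E1(E1),
    E2(E2),
}

impl From<E1> for E4 {
    fn from(e: E1) -> Self { E4::E1(e) }
}
impl From<E2> for E4 {
    fn from(e: E2) -> Self { E4::E2(e) }
}

fn f1() -> Result<(), E1> { Err(E1) }
fn f2() -> Result<(), E2> { Ok(()) }

fn f4() -> Result<(), E4> {
    f1()?; // `?` invokes E4::from(E1) automatically
    f2()?;
    Ok(())
}

fn main() {
    assert!(matches!(f4(), Err(E4::E1(_))));
}
```

Multiply this by every error type in the call graph and the boilerplate problem becomes obvious; thiserror's `#[from]` attribute generates the From impls but not the enums themselves.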

Since this is such a massive pain, Rust programs tend to instead either define a single error enum type that has all possible errors in the crate, or just use opaque errors like the anyhow crate. The downside is that these approaches lose type information: you no longer know that a function can't return some specific error (unless it returns no errors at all, which is rare), which is ultimately not so different from those languages where you have to guard against bizarre runtime errors.
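A minimal std-only sketch of the opaque style (anyhow's `anyhow::Error` plays a similar role with nicer ergonomics; the function names here are made up):

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct E1;

impl fmt::Display for E1 {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "E1 occurred")
    }
}
impl Error for E1 {}

fn f1() -> Result<(), E1> { Err(E1) }

// Opaque-error style: every error is erased to Box<dyn Error>, so `?`
// works everywhere without From boilerplate -- but the signature no
// longer tells callers which concrete errors are possible.
fn f4() -> Result<(), Box<dyn Error>> {
    f1()?;
    Ok(())
}

fn main() {
    assert!(f4().is_err());
}
```

This is exactly the type-information loss described above: `f4`'s signature is now indistinguishable from that of a function with entirely different failure modes.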

Worse yet, if f1 has to be changed such that it returns 2 new errors, then you need to go through all error types in the call stack and flatten the new errors manually into E4, E5 and E6. If you don't flatten errors, then you end up rebuilding the call stack in error types, which is a whole different can of worms.
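For illustration, the non-flattened variant of the earlier sketch looks like this: the error types mirror the call graph, so the same leaf error E1 can arrive wrapped in two different ways, and matching on it means unwinding every layer of wrappers.

```rust
// Non-flattened: each function's error enum just wraps its callees'
// error enums, rebuilding the call stack in the type system.
struct E1;
enum E4 { E1(E1) }
enum E5 { E1(E1) }
enum E6 { E4(E4), E5(E5) }

// Checking for the leaf error requires knowing every path it can
// take through the call graph.
fn is_e1(e: &E6) -> bool {
    matches!(e, E6::E4(E4::E1(_)) | E6::E5(E5::E1(_)))
}

fn main() {
    assert!(is_e1(&E6::E4(E4::E1(E1))));
    assert!(is_e1(&E6::E5(E5::E1(E1))));
}
```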

Algebraic effects just handle all of this more conveniently. That said, an effect system like Koka's isn't viable in a systems programming language like Rust, because optimizing user-defined effects is difficult. But you could have a special compiler-blessed effect for exceptions; algebraic checked exceptions, so to speak. Rust already does this with async.

[1] https://koka-lang.github.io


Serde is maintained by dtolnay, who is a very influential figure in Rust mainly through his library development. Serde, syn, anyhow etc end up being pulled in as dependencies to nearly every Rust crate. If his account was compromised, the attack surface is essentially every single other Rust crate... not ideal

That locks users into an ecosystem that may never evolve, which can be fine but doesn't really solve one of the core issues the author was describing. It forces the ecosystem to depend on the oldest and most incumbent crates, rather than newer ones which might be better in some ways.

Perhaps that's fine in the particular case of serialization, but that line of thinking breaks down at more fundamental operations like `PartialEq` or `Hash`. Having a different definition for equality fundamentally breaks the program if the two versions ever mix. On the flip side, it's important that the author is allowed to declare the "correct" way to do something, e.g. in a smart pointer crate where the safety of the program relies on a correct implementation. If traits weren't program-wide, you'd just be kicking the can down the road: instead of the library author having to define every trait impl, every user would have to define every trait impl, which is even worse imo.

I can't guarantee a great experience, but anecdotally my brother and I have had no issues in the last ~12 months playing all types of games on Linux. Only games which require kernel-level anti-cheat are unavailable. Otherwise, Proton and native clients (when available) have been rock solid, and I've been surprised that some games (like Minecraft) actually run much better than in Windows.

Although I haven't touched Windows in a few years now, my understanding is that the OS has been having a very rough few months with unstable updates, bricked devices, etc. And yet the first thing they mention is moving around the task bar? Is that really what they want to lead with? It's just baffling. It's also a bit disturbing to see "reduced flicker for file explorer" as a main focus. Just how bad is the Windows experience?

Their vulnerable introduction leading into the 4 task bar screenshots made me laugh out loud.

Yeah, Firefox uses a different CSS engine that doesn't automatically have this same use-after-free.


I'm not sure how hash chains would resolve the fundamental issue of needing to send your ID or similar to some random third-party company that does god-knows-what with it (probably stores it in a publicly accessible path with big "steal me" signs pointing at it). That they need to attest to your age means that they need to trust what your age is, which has really just moved the problem one layer deeper (as far as I can tell).


I assume by third party you mean the authority, and yes, the authority would need to know your personal information. At least enough of it to verify your age. So the ideal is that the authority is the entity that already knows your personal information, like the entity that issued your passport or your driver's license.

But even if the authority were a private company, I think it would be an improvement over the current situation. Your personal information would be held by this one company, and not by whatever provider needs to verify your age. Also, you would be able to use the commitments that this private authority gave you without any coordination afterwards; the authority would not know about your transactions.

