Hacker News | new | past | comments | ask | show | jobs | submit | esailija's comments | login

It actually makes sense. For any task it is completely trivial for anyone to become better than 80% of humans, and still easy to be better than 95%. The only problem is motivation, not intelligence.

Not being able to call a function inside an if block or a loop is not just a JavaScript thing.

> When it comes to deciding where to place which code, I found a particular mantra very helpful: minimize code and maximize use cases. The idea is to have as many reusable software components as possible and to minimize the overall code within an organization.

You'll want to do exactly the opposite. This just leads to maximizing dependencies and grinding everything to a halt, as everything you could fix in 2 minutes will instead take 6 months of waiting. Also, literally anything can be "reused" with enough configuration and parameters, but those become the real source of complexity and a programming exercise in themselves, and at the end of the day it was a huge waste to effectively create a new programming language.


I actually lean towards copy and pasting the same code into many files just so that they are self-contained and have no dependencies.


We don't know what humans are because they are a black box; we use some imperfect models that have limited usability in specific contexts.

An LLM is a white box that we know for sure is just a statistical next-token predictor and nothing more. It's not a model of some black box we are trying to understand but the whole actual thing. That people think it's something more, or could be something more, is on them. If you understand that, then you understand the flaws, limitations and vulnerabilities, which is very useful.


There is no such thing that you can always keep adding more of and have it automatically be effective.

I tend to automate too much because it's fun, but if I'm being objective, in many cases it has been more work than doing the stuff manually. Because of laziness I tend to way overestimate how much time and effort it would take to do something manually if I just rolled up my sleeves and simply did it.

Whether automating something actually produces more with less labor depends on the nuances of each specific case; it's definitely not a given. People tend to be very biased when judging actual productivity. E.g. is someone who quickly closes tickets but causes a disproportionate amount of production issues, money-losing bugs, or review work for others really that productive in the end?


There is more to it than molecules, neurons and synapses. Those are made from lower-level stuff that we have no idea about (well, we do in this instance, but you get the point). They are just higher-level concepts that are useful for explaining and understanding some things but don't describe or capture the whole thing. For that you would need to go to lower and lower levels, and so far it seems they go on infinitely. Currently we are stuck at the quantum level; that doesn't mean it's the final level.

OTOH, an LLM is just a token prediction engine. That description fully and completely covers it. There are no lower-level secrets hidden in the design that nobody understands, because it could not have been created if there were. The fact that the output can be surprising is not evidence of anything; we have always had surprising outputs, like funny bugs or unexpected features. Using the word "emergence" for this is just deceitful.

This algorithm has fundamental limitations, and they have not been getting better if you look closely. For instance, you could vibe code a C compiler now, but it's 80% there: a cute trick, but not usable in the real world. Just like anything else, it cannot be economically vibe coded to 100%. They are not going back and vibe coding the previous, simpler projects to 100% with "improved" models. Instead they are just vibe coding something bigger to 80%. This is not an improvement in the limitations; it is actually communicating between the lines that the limitations cannot be overcome.

Also, enshittification has not even started yet.


I can bake a cake while having 0 understanding of the chemistry that powers the transformation. One is a pile of wet flour, the other is delicious.

A dog can create a snack by doing a trick. Doesn't mean that there isn't some mechanism going on there that neither of them understand.


Whose argument is this supposed to be furthering? You didn't specify what the wet flour is. What is the point of this contribution?


I don't think so. Imagine it was vice versa, someone saying they knew JS and were weak at C/C++/Java.


This doesn't sound right to me. If someone who was an expert in JS looked at a relatively simple C++ program, I think they could tell reasonably well whether the quality of the code was good or not. They wouldn't be able to, e.g., detect bugs from default value initialization, memory leaks, etc. But as long as the code didn't do any crazy templating stuff, they'd be able to analyze it at a rough "this algorithm seems sensible" level.

Analogously, I'm quite proficient at C++, and I can easily look at a small JS program and tell if it's sensible. But if you gave me even a simple React app, I wouldn't be able to understand it without a lot of effort (I've had this experience...).

I agree with your broad point: C/C++/Java are certainly much more complex than JS, and I would expect someone expert in them to have a much easier time picking up JS than the reverse. But given the very high overlap in syntax between the four, I think anyone who's proficient in one can grok the basics of the others.


Tests knowing about implementation details and testing those implementation details (which is the case 99.999% of the time if you use mocks) is more common than not, even though the main value of automated testing is being able to change those very implementation details, which you now cannot do.

A whole bunch of work spent for no benefit or negative benefit is pretty common.


There is a difference between extrapolating from just a few examples vs. interpolating between a trillion examples.


The problem is of course that there is no useful default behavior you can define when the trait is so isolated and generic.


It doesn't have to be "so isolated". The trait can still have required methods that don't have a default implementation. Eg:

    trait Robot {
        fn send_command(&mut self, command: Command);

        fn stop(&mut self) {
            self.send_command(Command::Stop);
        }
    }

    struct BenderRobot;
    
    impl Robot for BenderRobot {
        // Required.
        fn send_command(&mut self, command: Command) { todo!(); } 
    }
This is starting to look a lot like C++ class inheritance. Especially because traits can also inherit from one another. However, there are two important differences: First, traits don't define any fields. And second, BenderRobot is free to implement lots of other traits if it wants, too.

If you want a real-world example of this, take a look at std::io::Write[1]. The Write trait requires implementors to define two methods (write(data) and flush()). It then provides default implementations of a bunch more methods built on write and flush, for example write_all(). Implementers can use the default implementations, or override them as needed.

Docs: https://doc.rust-lang.org/std/io/trait.Write.html

Source: https://doc.rust-lang.org/src/std/io/mod.rs.html#1596-1935
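As a concrete sketch of that pattern, here's a writer that implements only the two required methods and gets write_all() from the trait's defaults (CountingWriter is an invented example type):

```rust
use std::io::{self, Write};

// A writer that just counts bytes. Only the two required methods are
// implemented; write_all() below comes from the trait's default methods.
struct CountingWriter {
    bytes: usize,
}

impl Write for CountingWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.bytes += buf.len();
        Ok(buf.len()) // report the whole buffer as consumed
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(()) // nothing is buffered, so nothing to do
    }
}

fn main() -> io::Result<()> {
    let mut w = CountingWriter { bytes: 0 };
    w.write_all(b"hello world")?; // default method, built on write()
    assert_eq!(w.bytes, 11);
    Ok(())
}
```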


> First, traits don't define any fields.

How does one handle cases where fields are useful? For example, imagine you have functionality to go fetch a value and then cache it, so that future fetches are not required (it's resource-heavy, etc.).

    // in Java because it's easier for me
    public interface HasMetadata {
        default Metadata getMetadata() {
            // this doesn't work because interfaces don't have fields
            if (this.cachedMetadata == null) {
                this.cachedMetadata = fetchMetadata();
            }
            return this.cachedMetadata;
        }
        // relies on implementing class to provide
        Metadata fetchMetadata();
    }


Getters and setters that get specified by the implementing type.
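In Rust terms, that approach might look like the following sketch (the Metadata alias and all names here are assumed for illustration): the trait can't hold the cache field itself, so it requires an accessor for it and builds the caching logic into a default method.

```rust
type Metadata = String; // stand-in for a real metadata type

trait MetadataSource {
    // Cache accessor, supplied by each implementing type.
    fn cache_mut(&mut self) -> &mut Option<Metadata>;
    // The potentially expensive call, also supplied by the implementor.
    fn fetch_metadata(&self) -> Metadata;

    // Default implementation built on top of the accessors.
    fn get_metadata(&mut self) -> &Metadata {
        if self.cache_mut().is_none() {
            let m = self.fetch_metadata();
            *self.cache_mut() = Some(m);
        }
        self.cache_mut().as_ref().unwrap()
    }
}

struct Image {
    cached: Option<Metadata>, // the field lives in the struct, not the trait
}

impl MetadataSource for Image {
    fn cache_mut(&mut self) -> &mut Option<Metadata> {
        &mut self.cached
    }
    fn fetch_metadata(&self) -> Metadata {
        "title: cat.png".to_string()
    }
}

fn main() {
    let mut img = Image { cached: None };
    assert_eq!(img.get_metadata(), "title: cat.png"); // fetches and caches
    assert!(img.cached.is_some());
}
```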


But then you have the getters, setters, and field on every class that implements the functionality. It works, sure, it just feels off to me. This is code that will be the same everywhere, and you're pulling it out of the common class and implementing it everywhere.


Yep. Or ... don't put that in the interface at all. It looks like an implementation concern to me.


But if there's a lot of classes that implement the same thing, then not duplicating code makes sense. And saying "it's an implementation detail" leads to having the same code in a bunch of different classes. It feels very similar to the idea of default implementations to me; when the implementation will be the same everywhere, it makes sense to have it in one place.


So to be clear about your example: You have a whole lot of different - totally distinct - types of things, which all need to have the same logic to cache HTTP requests? Can you give some examples of these different types you're creating? Why do you have lots of distinct types that need exactly the same caching logic?

It sounds like you could solve that problem in a lot of different ways. For example, you could make an HTTP client wrapper which internally cached responses. Or make a LazyResource struct which does the caching - and use that in all those different types you're making. Or make a generic struct which has the caching logic. The type parameter names the special individual behaviour. Or something else - I don't have enough information to know how I'd approach your problem.
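One of those options, a generic LazyResource-style struct, might be sketched like this (all names invented; it caches the result of an arbitrary fetch closure on first access):

```rust
// A generic value that is computed at most once, on first access.
struct LazyResource<T> {
    cached: Option<T>,
    fetch: Box<dyn Fn() -> T>, // the expensive operation
}

impl<T> LazyResource<T> {
    fn new(fetch: impl Fn() -> T + 'static) -> Self {
        LazyResource { cached: None, fetch: Box::new(fetch) }
    }

    fn get(&mut self) -> &T {
        if self.cached.is_none() {
            self.cached = Some((self.fetch)()); // fetch once, then reuse
        }
        self.cached.as_ref().unwrap()
    }
}

fn main() {
    let mut metadata = LazyResource::new(|| String::from("title: cat.png"));
    assert_eq!(metadata.get(), "title: cat.png"); // first call fetches
    assert_eq!(metadata.get(), "title: cat.png"); // second call hits the cache
}
```

Each of the distinct types could then hold a LazyResource<Metadata> instead of reimplementing the caching.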

Can you describe a more detailed example of the problem you're imagining? As it is, your requirements sound random and kind of arbitrary.


This is a heavily modified version of something I was working on recently, but with the stuff I couldn't do actually done here (and non-functional code because of that, but it shows the idea):

    public interface MetadataSource {
        Metadata metadata = null;

        default Metadata getMetadata() {
            if (metadata == null) {
                metadata = fetchMetadata();
            }
            return metadata;
        }
        
        // This can be relatively costly
        Metadata fetchMetadata();
    }

    public class Image implements MetadataSource {
        public Metadata fetchMetadata() {
            // goes to externally hosted image to fetch metadata
        }
    }

    public class Video implements MetadataSource {
        public Metadata fetchMetadata() {
            // goes to video hosting service to get metadata
        }
    }

    public class Document implements MetadataSource {
        public Metadata fetchMetadata() {
            // goes to database to fetch metadata
        }
    }
Each of the above has a completely different way to fetch its metadata (e.g., Title and Creator), and each of them has different characteristics related to the cost of getting that data. So, by default, we want the interface to cache the result so that:

1. The thing that _has_ the metadata only needs to know how to fetch it when it's asked for (implementation of fetchMetadata), and it doesn't need to worry about the cost of doing so (within limits of course)

2. The things that _use_ the metadata only need to know how to ask for it (getMetadata) and can assume it has minimal cost.

3. Neither one of those needs to know anything about it being cached.

I had a case recently where I needed to check "does this have metadata available" separate from "what is the metadata". And fetching it twice would add load.


Here's my take on implementing this in rust. I made a trait for fetching metadata, that can be implemented by Image, Video, Document, etc:

    trait MetadataSource {
        fn fetch_metadata(&self) -> Metadata;
    }
    impl MetadataSource for Image { ... } 
    impl MetadataSource for Video { ... } 
    impl MetadataSource for Document { ... }
And a separate object which stores an image / video / document alongside its cached metadata:

    struct ThingWithMetadata<T> {
        obj: T, // Assuming you need to store this too?
        metadata: Option<Metadata>
    }

    impl<T: MetadataSource> ThingWithMetadata<T> {
        fn get_metadata(&mut self) -> &Metadata {
            if self.metadata.is_none() {
                self.metadata = Some(self.obj.fetch_metadata());
            }
            self.metadata.as_ref().unwrap()
        }
    }
It's not the most beautiful thing in the world, but it works. And it'd be easy enough to add more methods, behaviour and state to those metadata sources if you want. (E.g. if you want Image to actually load / store an image or something.)

In this case, it might be even simpler if you made Image / Video / Document into an enum. Then fetch_metadata could be a regular function with a match expression (switch statement).
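That enum version might be sketched like this (Metadata simplified to a String, the method bodies purely illustrative):

```rust
// One type for all sources; one match replaces the trait dispatch.
enum Source {
    Image,
    Video,
    Document,
}

fn fetch_metadata(source: &Source) -> String {
    match source {
        Source::Image => "from the image host".to_string(),
        Source::Video => "from the video service".to_string(),
        Source::Document => "from the database".to_string(),
    }
}

fn main() {
    assert_eq!(fetch_metadata(&Source::Image), "from the image host");
    assert_eq!(fetch_metadata(&Source::Document), "from the database");
}
```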

If you want to be tricky, you could even make struct ThingWithMetadata also implement MetadataSource. If you do that, you can mix and match cached and uncached metadata sources without the consumer needing to know the difference.

https://play.rust-lang.org/?version=stable&mode=debug&editio...


Isn't this essentially the generic typestate pattern in Rust? In my view there is a pretty obvious connection between that particular pattern and how other languages implement OO inheritance, though in all fairness I don't think that connection is generally acknowledged.

(For one thing, it's quite obvious to see that the pattern itself is rather anti-modular, and the ways generic typestate is used are also quite divergent from the usual style of inheritance-heavy OO design.)


When you call myImageInstance.fetchMetadata, what does it do? I don't know rust, so it's not clear to me how the value gets cached.


In this example, ThingWithMetadata does the caching. image.fetch_metadata fetches the metadata and returns it. It's up to the caller (ThingWithMetadata) to cache the returned value.


But part of the goal is to not need the caller to cache it. Nor have the class that knows how to fetch it need to know how to cache it either. The responsibility of knowing how to cache the value is (desired to be) in the MetadataSource interface.


The rule is that you can't cache a value in an interface, because interfaces don't store data. You need to cache the value in a struct somewhere. This implementation wraps items (like images) in another struct which stores the image and also caches the metadata. That's the point of ThingWithMetadata. Maybe it should instead be called WithCachedMetadata, e.g. WithCachedMetadata<Image>.

You can pass WithCachedMetadata around, and consumers don't need to understand any of the implementation details. They just ask for the metadata and it'll fetch it lazily. But it is definitely more awkward than inheritance, because the image struct is wrapped.

As I said, there's other ways to approach it - but I suspect in this case, using inheritance as a stand-in for a class extension / mixin is probably always going to be your favorite option. A better approach might be for each item to simply know the URL to its metadata, and then have your networking code handle caching on behalf of the whole program.

It sounds like you really want to use mixins for this - and you're proposing inheritance as a way to do it. The part of me which knows ruby, obj-c and swift agrees with you. I like this weird hacky use of inheritance to actually do class mixins / extensions.

The javascript / typescript programmer in me would do it using closures instead:

    function lazyResource(url) {
      let cached = null
      return async () => {
        if (cached == null) cached = await fetch(url)
        return cached
      }
    }

    // ...
    const image = {
      metadata: lazyResource(url)
    }
Of all the answers, I think this is actually my favorite solution. It's probably the most clear, simple and expressive way to solve the problem.


> The rule is that you can't cache a value in an interface, because interfaces don't store data.

Right, but the start of where I jumped into this thread was about the fact that there are places where fields would make things better (specifically in relation to traits, but interfaces, too). And then proceeding to discuss a specific use case for that.

> A better approach might be for each item to simply know the URL to their metadata.

Not everything is coming from a URL and, even when it is, it's not always a GET/REST fetch.

> but I suspect in this case, using inheritance as a stand-in for a class extension / mixin is probably going to always be your most favorite option

Honestly, I'd like to see Java implement something like a mixin that allows adding functionality to a class, so the class can say "I am a type of HasAuthor" and everything else just happens automatically.


One way you could fix this with composition is:

    class CachedMetadataSource implements MetadataSource {
      private final MetadataSource uncachedSource;
      private Metadata metadata;

      CachedMetadataSource(MetadataSource uncachedSource) {
        this.uncachedSource = uncachedSource;
      }

      public Metadata getMetadata() {
        if (metadata == null) {
          metadata = uncachedSource.getMetadata();
        }
        return metadata;
      }
    }


I don't see how that solves the problem. It seems like Video will need to keep its own copy of CachedMetadataSource, which points back to itself, and go through that to access its metadata in the getMetadata implementation it makes available to its users. At that point, it might as well just cache the value itself without the extra hoops. The difficult part isn't caching the value, it's preventing every class that implements MetadataSource from having to do so.


It would be the other way around. You wouldn't pass around the underlying suppliers directly, you'd wrap them. But if you must have state _and_ behavior, then `abstract class` is your friend in Java (while in Scala traits can have fields and constructors, so there is no problem).


Don't mix implementation and interface inheritance.

