I've tried these with Claude various times and never get the wrong answer. I don't know why, but I'm leaning toward them having stuff like "memory" turned on and possibly reusing sessions for everything. That's the only thing that explains it to me.
If you're always messing with the AI, it might be making memories and setting expectations. Or it's the randomness. But I turned memories off (I don't like cross-chat context infecting my conversations), and at worst it suggested "walk over and see if it is busy, then grab the car when the line isn't busy".
Even Gemini with no memory does hilarious things. Like, if you ask it how heavy the average man is, you usually get the right answer but occasionally you get a table that says:
- 20-29: 190 pounds
- 30-39: 375 pounds
- 40-49: 750 pounds
- 50-59: 4900 pounds
Yet somehow people believe LLMs are on the cusp of replacing mathematicians, traders, lawyers and whatnot. At least for code you can write tests, but even then, how are you gonna trust something that can casually make such obvious mistakes?
> how are you gonna trust something that can casually make such obvious mistakes?
In many cases, a human can review the content generated, and still save a huge amount of time. LLMs are incredibly good at generating contracts, random business emails, and doing pointless homework for students.
And humans are incredibly bad at "skimming through this long text to check for errors", so this is not a happy pairing.
As for the homework, there is obviously a huge category that is pointless. But it should not be that way: the fundamental idea behind homework is sound, and the only way something can be properly learnt is by doing exercises and thinking through it yourself.
Yeah, ChatGPT's paid version is wildly inaccurate on very important and very basic things. I never got onboard with AI to begin with but nowadays I don't even load it unless I'm really stuck on something programming related.
So what? That might happen one out of 100 times. Even if it’s 1 in 10 who cares? Math is verifiable. You’ve just saved yourself weeks or months of work.
The kind of mistakes it makes are usually strange and inhuman though. Like getting hard parts correct while also getting something fundamental about the same problem wrong. And not in the “easy to miss or type wrong” way.
I wish I had an example saved for you, but it happens to me pretty frequently. Not only that, but it also usually does testing incorrectly at a fundamental level, or builds tests around incorrect assumptions.
I've seen LLMs implement "creative" workarounds. Example: Sonnet 4.5 couldn't figure out how to authenticate a web socket request using whatever framework I was experimenting with, so it decided to just not bother. Instead, it passed the username as part of the web socket request and blindly trusted that user was actually authenticated.
The application looked like it worked. Tests did pass. But if you did a cursory examination of the code, it was all smoke and mirrors.
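Not the actual code, but a minimal hypothetical reconstruction of the pattern described: the handler "authenticates" by reading an identity straight out of the client's payload.

```python
# Hypothetical reconstruction of the anti-pattern: the handler trusts a
# username taken straight from the client's message, with no token or
# session verification at all.

def insecure_handler(message: dict) -> str:
    user = message["username"]       # blindly trusted client-supplied identity
    return f"hello, {user}"          # server now acts on behalf of that user

# Any client can claim to be anyone:
assert insecure_handler({"username": "admin"}) == "hello, admin"
```

Tests written against this pass, because the happy path works; only reading the code reveals there is no authentication at all.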
Yeah recently it had an issue getting OIDC working and decided to implement its own, throwing in a few thousand extra lines. I'm sure there were no security holes created in there at all. /s
Yes, I wish I had saved some of my best examples too. One I had was super weird in ChatGPT Pro: it told me that after 30 years my interest would become negative and I would start losing money. It didn't want to accept the error.
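For what it's worth, the arithmetic is trivial to sanity-check: at any positive rate, compound interest only grows the balance and can never go negative. A quick sketch:

```python
# Sanity check: with a positive rate, compound interest strictly grows
# the balance; it cannot go negative, no matter how many years pass.

def compound(principal: float, rate: float, years: int) -> float:
    return principal * (1 + rate) ** years

balance_30 = compound(1000.0, 0.05, 30)   # roughly 4321.94
assert balance_30 > 1000.0                # still positive and growing
```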
Errors compounding is a meme. In iterated as well as verifiable domains, errors dilute instead of compounding, because the LLM has repeated chances to notice its failure.
> So based on the article's own observation: no, of course not.
This adds very little to the discussion. Of course it can't be replaced: code is created by humans, and as long as we have opinions nothing gets truly replaced, just decreased in usage over time.
> C++ and Swift just became "more dominant".
Yup, like this. Of course a general statement is no.
I have very little interest in this topic, but I've seen this SAME comment a million times on anything new that attempts to challenge something. And as usual, whether something "dethrones" something else is less interesting than the changes or ideas it offers.
Just like all those you listed: they didn't replace anything, but they definitely challenged the ecosystems, or improved the old ones.
Nuanced discussion is far more interesting.
For example, why do you think Carbon won't be able to gain dominance over time? I mean, I think that's a huge hurdle too.
You've apparently read the reverse of what I said. I said the new thing didn't replace the old thing, and that therefore the idea that "we are doing the same, and it will replace the old thing" is nonsense. I did not say that because it can't replace it, it is therefore not worth doing. It absolutely is, like all attempts at making things that "solve the problems that C++ has" have been varying degrees of worth it. But the idea that it can, let alone will, replace the original is such an obvious "no" that the title is clickbait. Or slide-bait (since it was originally a conf. talk)
That company has posted blog posts where they seem to have a complete lack of understanding of what GraphQL is, and their comments on dev.to show a huge amount of disrespect.
The comments on those blog posts show that the company's author comes across as dense, rude, and unknowledgeable about what they argue, which suggests the product they are building is likely being made with the same mindset.
Definitely wouldn't trust them.
It seems also like a ploy to get attention, which is definitely gonna keep me away from their product.
Seriously, check out this comment section. It's like they posted about something they have no clue about and will defend it to their grave regardless of whether they are wrong.
Wait until some security flaw comes out and this attitude makes them unwilling to admit they are wrong. Gross.
> Even if that was true, the above is 7 lines of code. That is 3.5 times as many LOC as my 2 liner. Science shows us that the amount of resources required to maintain code is proportional to the LOC count. Your example is hence 3.5 times more demanding in both initial resources to create it and resources required to maintain it. One of OOP's sales pitches was "that it makes it easier to maintain your code". You just scientifically proved it wrong ...
I think it has to be trolling, right? I haven't seen LOC mentioned as a useful metric since the aughts.
Anyone who's been in the game long enough will tell you that, outside of tight performance-critical loops, developer experience trumps everything. And you cannot reduce devX down to a single number.
> If OOP was a solution to anything really, we wouldn't need design patterns, clean architecture, or SOLID design principles.
He's not wrong. I loved moving from C to C++; polymorphic code is such a cool concept, but after a while you realise it really doesn't solve anything on its own.
Then you start to encapsulate everything in an attempt to separate concerns, and then you realise that separation of concerns is actually quite easy if you separate data from functions and make sure functions have no side effects, something OOP encourages the exact opposite of.
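A minimal sketch of that style: plain immutable data plus a pure function, instead of behavior and mutable state bundled together in an object. The names here are illustrative only.

```python
# "Separate data from function": the data is a plain immutable record,
# and the function has no side effects; it returns a new value instead
# of mutating hidden state inside an object.
from dataclasses import dataclass

@dataclass(frozen=True)
class Account:
    balance: float

def deposit(acct: Account, amount: float) -> Account:
    # Pure: same inputs always give the same output, nothing is mutated.
    return Account(acct.balance + amount)

a = Account(100.0)
b = deposit(a, 50.0)
assert a.balance == 100.0   # original value untouched
assert b.balance == 150.0
```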
That mostly relates to the frontend: the frontend doesn't have to overfetch data.
But the GraphQL server will still fetch that data and just filter what goes out; it still has to get that data.
An example is a query like
```
{
  currentUser {
    id
    name
    todoLists {
      title
      items {
        name
      }
    }
  }
}
```
The resolver will likely get the whole user object from the database, then send only the id and name. Once it has the user, it will query for the todo lists and send only the title (even though it got the whole row for each todo list); then, after it fetches those lists, it will query for the items and again retrieve the whole row of each item from the database.
The data the server needed to fetch didn't change, just what the frontend receives. The server still loads all the data for that query, and GraphQL filters the results as they leave the server.
Also, notice that in the steps above it queries AGAIN after each data set has been retrieved; this causes an N+1 problem.
Nothing inherent in the spec or in typical implementations fixes these. If you want to avoid fetching whole objects you need custom code, and to avoid the N+1 problem you need batching that consolidates nested requests within a request (like DataLoader), plus some form of response caching to help with these issues.
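The query-count difference can be sketched with a hypothetical in-memory "database" (the real DataLoader library batches asynchronously; this only shows why batching matters):

```python
# Hypothetical in-memory "database" of todo items, keyed by list id.
TODO_ITEMS = {1: ["buy milk"], 2: ["ship package"], 3: ["call bank"]}
queries = 0  # counts round trips to the database

def fetch_items(list_id):
    """Naive resolver: one query per todo list (the N in N+1)."""
    global queries
    queries += 1
    return TODO_ITEMS[list_id]

def fetch_items_batched(list_ids):
    """Dataloader-style: one query for the whole batch, e.g. WHERE id IN (...)."""
    global queries
    queries += 1
    return {i: TODO_ITEMS[i] for i in list_ids}

naive = [fetch_items(i) for i in TODO_ITEMS]      # 3 lists -> 3 queries
assert queries == 3

queries = 0
batched = fetch_items_batched(list(TODO_ITEMS))   # same data, 1 query
assert queries == 1
```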
Not siding against the tech, just clarifying those cons.
Yes, the client queries only the data it needs, and the server returns only the data the client requested.
With this query,
{ currentUser { id name todoLists { title items { name } } } }
It is up to the server how it is implemented.
- The server can fetch all the data for the user, todo lists and items from the database in one go and resolve the client query mentioned above. In this case there will be overfetching from the database if the client only requested user information.
The server can also fetch the data in 3 queries:
1> First fetch the user, let's say with id 1.
2> Then get all the todos for user id 1.
3> Then get all the items for all the todos in step 2 (batching/dataloaders).
All these queries are executed on the server side. Does this make the server complex? Yes, but there is also a benefit: when the client only requests currentUser, the server does not fetch any todo lists or items from the database.
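The steps above can be sketched with hypothetical in-memory tables; the point is that later steps only run when the client actually asked for those fields:

```python
# Hypothetical in-memory tables standing in for the database.
USERS = {1: {"id": 1, "name": "Ada"}}
TODOS = [{"user_id": 1, "title": "chores"}, {"user_id": 1, "title": "errands"}]

def resolve_current_user(user_id, want_todos):
    """Step 1 always runs; step 2 only runs if the client asked for todoLists."""
    result = dict(USERS[user_id])   # step 1: fetch the user
    if want_todos:                  # step 2: one batched query for all the lists
        result["todoLists"] = [t["title"] for t in TODOS if t["user_id"] == user_id]
    return result

# Only currentUser requested: no todo-list fetch happens at all.
assert resolve_current_user(1, want_todos=False) == {"id": 1, "name": "Ada"}
# todoLists requested: fetched in one extra query, not one per list.
assert resolve_current_user(1, want_todos=True)["todoLists"] == ["chores", "errands"]
```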
> It's still like putting comments above your functions.
If you're used to a certain language, then sure, that makes sense. But your comment sounds like you're boxing yourself in, limiting yourself to what makes sense in JavaScript, the very opposite of a programmer who looks to improve their craft.
Take a moment and think about that. It looks like a JavaScript comment to you, so you only see it that way, to the point of making this statement. Different languages use tokens and symbols in a lot of different ways, and not everyone agrees on what those symbols are for. Some languages use them for macros (C++) or preprocessor directives (C#), some for comments (JS/Python), and some, like Java, for nothing at all.
Because the languages you're used to don't use anything similar, it looks "wrong". That is in itself a narrow view.
I think you should question that kind of thinking; I believe it will be helpful.
> now that it has become evident that the brain is merely a kind of a computer
I am ignorant in this area, but I keep reading that brains are nothing like computers the more we learn. Your statement seems to suggest otherwise and I'd love to read about it. Can you drop something where I can start exploring how it has become evident that the brain is merely a kind of computer? Thanks!
The brain is thought to be merely a computer in the original sense: a long strip of paper along with a scribe and a rulebook. The logic is that a Turing machine can simulate quantum electrodynamics to an arbitrary degree of accuracy. Then two beliefs about physics and the structure of the brain are included:
1. There is nothing going on in the brain that would require simulation to infinite accuracy. Not even a chaotic system would have this property, because chaotic systems take a finite time to "blow up" an initial uncertainty, and the smaller the initial uncertainty, the longer they take to blow up. For this proposition to be violated there would have to be an undiscovered finite-time nondeterministic blowup, which is unlikely, but I've heard rumblings that we haven't proven it can't happen in Navier-Stokes. So maybe it can happen in the brain.
2. There is nothing going on in the brain that depends on nuclear physics or anything more "powerful" than quantum electrodynamics.
I have not seen any evidence that 1 or 2 aren't true for the brain, so that puts something behind saying it's "merely a computer."
Pretty important point for how far we should regulate something.
I think the gaming community would want loot-boxes completely banned under the guise of "gambling".
The individuals that would spend an unhealthy amount on these loot-boxes are not individuals that will stop their behavior if lootboxes are banned or even regulated.
These individuals know that you can't get your money back, you can't win more money, and you can blow your life savings on it. With no monetary return promised, these individuals know the money they spend can't be gambled back. That's the crux of gambling and gambling laws: people truly believe that with one more bet they can get it all back.
It's insane if a person on Overwatch or Fortnite truly believes that money they waste beyond their budget is a good investment at all. That problem lies with the individual, not with EA and loot boxes.
Those people will have constant issues with their finances until they seek the help they need to stop. That could mean learning better budgeting behavior, or identifying that they have a gambling problem.
That could be a gambling impulse, but really, we all know people who constantly spend their money carelessly regardless of any mental addiction. Some people's lack of financial control shouldn't create laws dictating to companies what they can do, especially in a market saturated with competitors that don't deploy any of these practices, simply because those people refuse to follow a monthly budget that's within their means.
We don't remove things from society just because others have difficulty with them. Now, I have no issue with more regulation, especially targeted at kids, but let's be honest here: most of this should still fall on parents and education. Companies caught pushing these random loot-boxes to kids should be addressed and fined, especially the ones that sponsor streamers who buy these boxes and open them in front of kid audiences. Those sponsors should receive fines for targeting kids. I'd also like to see some form of guarantee or odds disclosure.
Long rant, but I find that the inability to get monetary value back makes this extremely different from the current types of gambling, and it should affect the way we regulate it. At the end of the day, when no monetary value can be extracted, we are regulating random chance and rewards... a silly path to go down, calling anything random and rewarding essentially gambling.