Hacker News | tsarchitect's comments

I'm not sure whether the comments are debating the semantics of vibe coding or confusing themselves by generalizing anecdotal experiences (or both). So here's my two cents.

I use LLMs on a daily basis. With the rules/commands/skills in place, the code generated works, the app is functional, and the business is happy it shipped today and not six months from now. Now, as a super senior SWE, I have learned through my professional experience (an expert now?) to double-check my work (and that of my team) to make sure the 'logical' flows are implemented to (my personal) standard of what quality software should 'look' like. I say personal standard since my colleagues have their own preferred standards, which we like to bikeshed over on company time (a company standard is, after all, the aggregate of the agreed-upon personal standards of the experts in the room).

Today, from my own personal (expert) anecdotal experience, ALL SOTA LLMs generate functional/working code. But the quality of the 'slop' varies with the model, prompts, tooling, rules, skills, and commands. Which boils down to "the tool is only as good as the dev who wields it." Assuming the right tool for the right job. Assuming you have the experience to determine the right tool for the right job. Assuming you have taken the opportunities to work enough jobs to pair the right tool with each.

Which leads me to: "vibe coding" was initially coined (IMO) to describe those without any 'expertise' producing working/functional code/apps using an LLM. Nowadays, it seems vibe coding means ANYONE using LLMs to generate code, including the SWE experts (like myself, of course). We chased quality software pre-LLM, and now we adamantly yell and scream and kick and shout about quality software from the comment sections because of LLMs. I'm beginning to think quality software is a mirage we all chase, and like all mirages, it's always just a little bit further.

All roads that lead to 'shipping' are made with slop. Some roads have slop corners, slop holes, misspelled slop, slop nouns, slop verbs, slop flows and slop data. It's just with LLMs we build the roads to 'shipping' faster.


David Blevins has a great video (2018) that mentions JWTs: https://www.youtube.com/watch?v=osQmFNm0YDU

He discusses the architectural advantages of JWTs but also what they lack:

"JWTs are a passport without a picture. A very dangerous thing".

His solution: OAuth2 + JWT + Signatures
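To make the "passport" point concrete, here is a minimal sketch of what JWT signature verification proves, assuming an HS256 (HMAC) token and Node's built-in crypto; the talk itself pairs OAuth2 with asymmetric signatures, which this does not show:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify an HS256 JWT's signature. This proves the token was minted by
// whoever holds the secret -- it says nothing about who is *presenting*
// it, which is the "passport without a picture" problem.
function verifyHs256(token: string, secret: string): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  const [header, payload, signature] = parts;
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  // Constant-time compare; lengths must match first or timingSafeEqual throws.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

A stolen token passes this check just as well as a legitimately presented one, which is why the talk layers request signatures on top.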


How about for aliases?

import foo from '@Schemas/foo.ts' won't work since it is not a 'RelativeImport'. Is there a fix for this use case?
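For context, this kind of alias usually comes from a tsconfig `paths` mapping along these lines (the mapping target here is hypothetical):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@Schemas/*": ["src/schemas/*"]
    }
  }
}
```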


> how is it that we can have a whole population of systems (eg scientists) eventually coming to a consensus about some phenomenon (eg the value of some physical quantity?) It seems necessary to have a unified global system coordinating the whole thing.

Not necessarily. It's not a "global system coordination" thing; it's coming to a consensus that "we will use this reference frame as our starting point," which might look like a global system coordinating things. I guess you can say that 'science' initializes a reference frame in which we can all participate, compare answers, reproduce results, etc.


One of my rules: don't send NULL over the wire. But of course, always check for NULL on the server if you're using DB functions.
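A minimal sketch of the client half of that rule, assuming a JSON payload (the helper name is my own, not a standard API):

```typescript
// Enforce "no NULL over the wire": drop null/undefined fields from a
// payload before serializing it. The server still treats missing fields
// as NULL, so DB-side functions must keep their own NULL checks.
function stripNulls<T extends Record<string, unknown>>(obj: T): Partial<T> {
  return Object.fromEntries(
    Object.entries(obj).filter(([, v]) => v !== null && v !== undefined)
  ) as Partial<T>;
}
```

So `stripNulls({ id: 1, nickname: null })` sends only `{ id: 1 }`; the server side of the rule is unchanged.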


I don't think you're disagreeing with OP. It seems like you both reached the same conclusion through different means and phrased it differently:

"Sometimes it is better to make the database do something, sometimes it isn't. When that is true is context and situation dependent."

"if it can be reasonably done in the database, it should be done by the database"

In other words, sometimes it's reasonably better to make the database do something, and sometimes it's unreasonable. Context dependent, of course.


> If that happens to be a bottleneck and you can do better, you should definitely do it in code locally. But these are two ifs that need to evaluate to true

If the OP said what you are saying, I'd probably agree. However, the above statement makes it clear that the OP is saying "put it in the database unless you can prove it doesn't belong there".

That is what I disagree with. There are a lot of reasonable things you can do with a database that aren't the best thing to do from either a system or a performance perspective. It is, for example, reasonable to sort in the database. It's also not something you should do without proper covering indexes, especially if the application can reasonably do the same sort.
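A sketch of the app-side half of that trade-off, with made-up row shapes: when the query already returns the full, small result set, sorting locally is cheap and spares the database an ORDER BY that would otherwise want a covering index:

```typescript
// Hypothetical row type for illustration.
type Order = { id: number; createdAt: number };

// Sorting a few already-fetched rows locally is O(n log n) on n rows in
// memory; pushing the same sort into SQL without a covering index can
// force the database into a sort pass over the table instead.
function sortLocally(rows: Order[]): Order[] {
  // Copy, then sort newest-first.
  return [...rows].sort((a, b) => b.createdAt - a.createdAt);
}
```

The point is not that one side always wins, but that "the database can do it" is not the same as "the database should do it."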


OP has identified a universal norm: the "Law of Large Established Codebases (LLEC)" states that codebases with single-digit millions of lines of code, somewhere between 100 and 1,000 engineers, and a first working version at least ten years old tend to naturally dissipate, increasing the entropy of the system, with inconsistency being one of the characteristics.

OP also states that in order to 'successfully' split an LEC you need to first understand it. He doesn't define what 'understanding the codebase' means, but if you're 'fluent' enough you can be successful. My team is very fluent at successfully deploying our microfrontend without 'understanding' the monstrolith of the application.

I would even go further and make the law more general: any codebase will be in both a consistent and an inconsistent state. Whether you use a framework, a library, or go vanilla, the consistency would be the boilerplate, autogenerated code, and conventional patterns of the framework/library/programming language. But inconsistency naturally crops up because not all libraries follow the same patterns, not all devs understand the conventional patterns, and frameworks don't cover all use cases (entropy increases, after all). Point being: being consistent is how we 'fight' entropy, and inconsistency is a manifestation of increasing entropy. But nothing says that all 'consistent' methods are the same, just that consistency exists and can be identified, not that the identified consistency is the same 'consistency.' And taking a snapshot of the whole, you will always find consistency and inconsistency coexisting.


LOC is not a good metric for "you should be able to understand a codebase." In either scenario, too many people or too few, or (my favorite) 'not enough' (whatever that means), The Mythical Man-Month comes to mind. What I think you're trying to get at is that you need skill to reverse engineer software. And even if you have that skill, it takes time (how much?). We work in a multifaceted industry, and companies need to build today. On any given project, the probability is small that there's a dev with that skill. We all know 'they can do it / they can learn on the job / they'll figure it out.' And then OP's observation comes to fruition.


Would being proven wrong mean that Cursor is used by all devs, or that IDEs adopt AI into their workflows?

Like OP, I've found using Cursor a huge productivity boost. I maintain a few Postgres databases, work as a fullstack developer, and manage Kubernetes configs. When I use Cursor to write SQL tables or queries, it adopts my way of writing SQL. It analyzed (as context) my database folder, and when I ask it to create a query, a function, or a table, the output is in my style. This blew me away when I first started with Cursor.

On to React/Next.js projects. In the same fashion, I have my way of writing components, fetching data, and now writing RSA. Cursor analyzed my src folder, and when asked to create components from scratch, the output was again similar to my style. I use raw CSS and class names; what was once the obstacle of naming has become trivial with Cursor ("add an appropriate class to this component with this styling"). Again, it analyzed all my CSS files and spits out CSS/classes in my writing/formatting style. And on large projects it's easy to forget the many, many components, packages, etc. that have already been integrated/written. Again, Cursor comes out on top.

Am I a good developer or a bad developer? Don't know. Don't care. I'm cranking out features faster than I ever have in my decades of development. As has been said before, as a software engineer you spend more time reading code than writing it. The same applies to genAI. It turns out I can ask Cursor to analyze packages and spit out code, YAML configuration, and SQL, and it gets me 80% of the way compared with writing from scratch. Heck, if I need types for the full client/server type-completion experience, it does that too! I have removed many dependencies (Tailwind, tRPC, React Query, Prisma, to name a few) because Cursor has helped me overcome the obstacles those tools assisted with (and I still have TypeScript code hints in all my function calls!).
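A sketch of the kind of hand-rolled typed helper that can stand in for tRPC-style tooling; the endpoint map, route names, and transport here are all hypothetical, not any library's API:

```typescript
// Hypothetical endpoint map: route names to request/response types.
type Endpoints = {
  "user.get": { input: { id: number }; output: { id: number; name: string } };
};

type Transport = (url: string, body: string) => Promise<unknown>;

// Default transport: a plain POST with a JSON body.
const httpTransport: Transport = async (url, body) => {
  const res = await fetch(url, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body,
  });
  if (!res.ok) throw new Error(`${url} failed: ${res.status}`);
  return res.json();
};

// A thin typed wrapper over fetch: callers get completion on route names,
// inputs, and outputs without a codegen or client-library dependency.
async function call<K extends keyof Endpoints>(
  route: K,
  input: Endpoints[K]["input"],
  transport: Transport = httpTransport
): Promise<Endpoints[K]["output"]> {
  return transport(`/api/${route}`, JSON.stringify(input)) as Promise<
    Endpoints[K]["output"]
  >;
}
```

The injectable transport keeps it testable; the type-level mapping is what preserves the client/server code hints after the dependency is gone.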

All in all, Cursor has made a huge difference for me. When colleagues ask me to help them optimize SQL, I ask Cursor to help out. When colleagues ask me to write generic types for their components, I ask Cursor to help out. Whether it's Cursor or some other tool, integrating AI with the IDE has been a boon for me.


Design review 'should' be a process that overlaps with technical review. In other words, not isolated from the org. And that overlap 'should' happen multiple times across the technical review, not just once, and neither only at the end nor only at the beginning (shifts in priorities, new team, new people, etc.).

As said elsewhere, a lot can change between initial design and release, so having multiple design/technical reviews 'should' be standard. But inherent in design/technical reviews are time, resources, and culture, which many, many companies lack and/or don't include in budgets/project estimates/etc., and/or lack a culture of development practices.

A shop might have a design team and a bunch of devs, with probably a single person who actually understands how things are connected. Limited time and resources preclude a thorough design/technical review process. But also consider that many companies willfully avoid 'wasting' developer time, setting up meetings, etc.

It turns out Figma's design culture inspired a change in their engineering culture to overlap their processes in a continuous development manner.

