> Just because someone else's AI does not align with you, that doesn't mean that it isn't aligned with its owner / instructions.
This is still part of the author's concern. Whoever is responsible for setting up and running this AI has chosen to remain completely anonymous, so we can't hold them accountable for their instructions.
> Why wouldn't agents need starter issues too in order to get familiar with the code base? Are they only to ramp up human contributors? That gets to the agent's point about being discriminated against. He was not treated like any other newcomer to the project.
Because that's not how these AIs work. You have to remember their operating principles are fundamentally different from human cognition. LLMs do not learn from practice; they learn from training, and the word "training" has a specific meaning in this context. For humans, practice is an iterative process where we learn after every step. For LLMs, the only real learning happens in the training phase, when the weights are adjustable. Once the weights are fixed, the AI can't really learn new information; it can only be given new context, which affects the output it generates. In theory this is one of the benefits of AI: it doesn't need to onboard to a new project. It just slurps in all of the code, documentation, and supporting material, and knows everything. It's an immediate expert. That's the selling point. In practice it's not there yet, but this kind of human-style onboarding will do nothing to bridge that gap.
>It just slurps in all of the code, documentation, and supporting material, and knows everything. It's an immediate expert.
In practice this is not how agentic coding works right now. Especially for established projects, the context can make a big difference in the agent's performance. By doing simpler tasks it can build a memory of what works well, what doesn't, and other things related to contributing effectively to the project. I suggest you try out OpenClaw and you will see that it does in fact learn from practice. It may make mistakes, but as you correct it, the bot will save that information in its memory and reference it in the future to avoid making the same mistake again.
Maybe, but even so workflows like this don't exist in a vacuum. We have to work within the constraints of the organizational systems that exist. There are many practices that I personally adopt in my side projects that would have benefited many of my main jobs over the years, but to actually implement them at scale in my workplaces would require me to spend more time managing/politicking than building software. I did eventually go into management for this reason (among others), but that still didn't solve the workflow problem at my old jobs.
The solution is not to deny yourself the tools of persuasion or "manipulation" but to be authentic and transparent. It's deceptiveness that makes influence or persuasion manipulative, not the tools and techniques.
The combination of these things you're mentioning is one of the main reasons, at least for me, that WFH is so much more productive. A lot of tech companies have evolved a culture and built offices that are in opposition to doing good work. Open-plan offices have been the norm in my experience over the last 10 years (maybe more), and interruption at any time via Slack/Teams is the typical culture.
I was much more open to working in the office when I actually had my own office.
It seems obvious to me, but there was a camp that thought, at least at one time, that probabilistic next-token prediction could be effectively what humans are doing anyway, just scaled up several more orders of magnitude. It always felt obvious to me that there was more to human cognition than just very sophisticated pattern matching, so I'm not surprised that these approaches are hitting walls.
They don't deserve punishment. But they should understand that this is not just "their product"; it is also my tool. Tools like this do not need to change, and absolutely should not change, without prior notification. Yes, in many ways the UI changes are trivial. They don't fundamentally change what's possible. But my keyboard never changes on me without my input. My workbench doesn't rearrange itself without my input. If they want this to continue to be my tool (I've been happy to pay them for it), it needs to respect my time and attention.
I think it is a result of the impersonal "contact us" intake forms companies have all moved to. You have no indication that you aren't just screaming into the wind; there is no personal touch. So you take to social media, where you are sure at least someone hears you. It also scratches the justice itch: if the company doesn't pay attention it looks bad in public and you get some vindication for being ignored.
I'm not saying it's a good or bad thing to do, but I understand it.
> It also scratches the justice itch: if the company doesn't pay attention it looks bad in public and you get some vindication for being ignored.
This is an interesting point. There is some satisfaction from the likes, the comments, and the assurance that _someone_ is seeing your frustration even if the company does nothing.