Hacker Timesnew | past | comments | ask | show | jobs | submit | kokanee's commentslogin

But everyone at the company has that private domain knowledge. The only thing you're bringing to the table that anyone in any other role doesn't offer is the commoditized skill set.

Right, and you'll not keep everything out of materials like AI generated meeting notes for every repeat of every process so the company doesn't really need many experts in its existing operations.

YOU GUYS IT HAS A HEADPHONE JACK


Don't all macbooks have one?


Makes me wonder if this is an ADA requirement for education devices. (assistive listening devices)


Not even ADA - kids all get headphones to listen to education materials. Wired headphones are way, way easier to manage.


It's almost as if they weren't lying when they said dropping it in the phone was a waterproofing measure. I guess people aren't dropping their laptops in pools all the time.


There’s also plenty of room for it which is why it continues to appear on all MacBooks.


This isn't news, all MacBooks have one.


I view the issue of inefficient communication as a problem that will wane as LLMs progress, and a bit idealistic about the efficiency of most human-to-human communication. I feel strongly that we shouldn't be forced to interact with chatbots for a much simpler reason: it's rude. It's dismissive of the time and attention of the person on the other end; it demonstrates laziness or an inability to succeed without cutting corners, and it is an affront to the value of human interaction (regardless of efficiency).


I feel like that ship sailed long ago with phone trees and hour-long support wait times becoming normal. Not that it's an ideal state of affairs, but I'd much rather talk to a chatbot than wait for an hour for a human who doesn't want to talk to anyone, as long as that chatbot is empowered to solve my problem.


Have you ever had a chatbot solve your problem? I don't think this has ever happened to me.

As a reasonably technical user capable of using search, the only way this could really happen is if there was no web/app interface for something I wanted to do, but there was a chatbot/AI interface for it.

Perhaps companies will decide to go chatbot-first for these things, and perhaps customers will prefer that. But I doubt it to be honest - do people really want to use a fuzzy-logic CLI instead of a graphical interface? If not, why won't companies just get AI to implement the functionality in their other UIs?


Actually, I have, Amazon has an excellent one. I had a few exchanges with it, and it initiated a refund for me, it was much quicker than a normal customer service call.

Outside of customer service, I'm working on a website that has a huge amount of complexity to it, and would require a much larger interface than normal people would have patience for. So instead, those complex facets are exposed to an LLM as tools it can call, as appropriate based on a discussion with the user, and it can discuss the options with the user to help solve the UI discoverability problem.

I don't know yet if it's a good idea, but it does potentially solve one of the big issues with complex products - they can provide a simple interface to extreme complexity without overwhelming the user with an incredibly complex interface and demanding that they spend the time to learn it. Normally, designers handled this instead by just dumbing down every consumer facing product, and I'd love to see how users respond to this other setup.


I'm happy that LLMs are encouraging people to add discoverable APIs to their products. Do you think you can make the endpoints public, so they can be used for automation without the LLM in the way?

If you need an LLM spin to convince management, maybe you can say something about "bring your own agent" and "openclaw", or something else along those lines?


Yep, I’m developing the direct agent access api in parallel as a first class option, seems like the human ui isn’t going to be so necessary going forward, though a little curation/thought on how to use it is still helpful, rather than an agent having to come up with all the ideas itself. I’ve spun off one of the datasets I’ve pulled as an independent x402 api already, plan to do more of those.


What I mean is that I want to be able to build my own UIs and CLIs against open, published APIs. I don't care about the agent, it's an annoyance. The main use of it is convincing people who want to keep the API proprietary that they should instead open it.


I did think about this use-case as I was typing my first message.

I can see it working for complex products, for functionality I only want to use once in a blue moon. If it's something I'm doing regularly, I'd rather the LLM just tell me which submenu to find it in, or what command to type.


Yeah true, might be a good idea to have the full UI and then just have the agent slowly “drive” it for the user, so they can follow along and learn, for when they want to move faster than dealing with a chatbot. Though I think speech to text improves chatbot use speed significantly.


Amazon's robot did replace the package that vanished. I don't believe it ever understood that I had a delivery photograph showing two packages but found only one on my porch. But I doubt a human would have cared, either--cheap item, nobody's going to worry about how it happened. (Although I would like to know--wind is remotely possible but the front porch has an eddy that brings stuff, it doesn't take stuff.)


Yeah as long as the chatbot is empowered to fix a bunch of basic problems I'm okay with them as the first line of support. The way support is setup nowadays humans are basically forced to be robots anyway, given a set of canned responses for each scenario and almost no latitude of their own. At least the robot responds instantly.


Yep, exactly, the problem comes when chatbots are used to shield all the people who can do stuff from interacting with customers.


> a bit idealistic about the efficiency of most human-to-human communication.

I don't know if I would call it idealism. I feel like what we're discovering is that while the efficiency of communication is important, the efficacy of communication is more important. And chatbots are far less reliable at communicating the important/relevant information correctly. It doesn't really matter how easy it is to send an email if the email simply says the wrong thing.

To your point though, it's just rude. I've already seen a few cases where people have been chastised for checking out of a conversation and effectively letting their chatbot engage for them. Those conversations revolved around respect and good faith, not efficiency (or even efficacy, for that matter).


The problem is that people are very rude to customer service representatives, so companies spend money training CSRs, who often quit after a short period of being abused by customers. Automated reception systems disallow people from reaching representatives for the same reason.


CSRs are abused by call center managers far more often than they are by the people on the other end of the phone line. The endless push for "better" metrics, the terrible pay, the dehumanizing scripts, bad (or zero) training, optimizing to make every employee interchangeable and expendable, unforgiving attendance policies, treating workers like children, etc. Call centers are brutal environments and the reason churn is often so high has very little to do with abuse from the people calling for help. In fact, the last two call centers I had any insight into (to their credit) had strict policies about not taking abuse from customers and would flag abusive customer's accounts.


It can be both. It depends a lot on what kind product is being supported. Tech support usually doesn’t get abuse hurled at you by the callers but financial/medical it gets a lot dicier.

That said, I 100% left every call center job I had when I couldn’t put up with the bullshit middle manager crap anymore.

Nothing like having a “team leader” who knows literally nothing about the product who then has to come up with the most nitpicky garbage because they’re required to have criticism on call reviews. Meanwhile some other asshole starts yelling at him to yell at you for not being on the phones enough when the reason I’m not on the phone is because everyone on the team turns to me to ask questions to because, unlike our illustrious leader, I know what I’m doing.


LLMenthols


There's also no hope of creating a web that is resistant to enshittification and power consolidation as long as it can technically support any form of economic transaction.


I love postgres and it really is a supertool. But to get exactly what you need can require digging deep and really having control over the lowest levels. My experience after using timescale/tigerdata for the last couple years is that I really just wish RDS supported the timescale extension; TigerData's layers on top of that have caused as many problems as they've solved.


I started to write a logical rebuttal, but forget it. This is just so dumb. A guy is paying farmers to farm for him, and using a chatbot to Google everything he doesn't know about farming along the way. You're all brainwashed.


What specifically are you disagreeing with? I dont think its trivial for someone with no farming experience to successfully farm something within a year.

>A guy is paying farmers to farm for him

Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how is the complicated part. There is a lot of decision making to manage uncertainty which will make or break you.


We should probably differentiate between trying to run a profitable farm, and producing any amount of yield. They're not really the same thing at all.

I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money. Running a profitable farm is quite difficult though. There's an entire ecosystem connecting prospective farmers with money and limited skills/interest to people with the skills to properly operate it, either independently (tenant farmers) or as farm managers so the hobby owner can participate. Institutional investors prefer the former, and Jeremy Clarkson's farm show is a good example of the latter.


When I say successful I mean more like profitable. Just yielding anything isn't succesful by any stretch of the imagination.

>I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money

Yeah in theory. In practice they wont - too much time and energy. This is where the confidence boost with LLMs comes in. You just do it and see what happens. You don't need to care if it doesn't quite work out it its so fast and cheap. Maybe you get anywhere from 50-150% of the result of your manual research for 5% of the effort.


>A guy is paying farmers to farm for him

Family of farmers here.

My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.

There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.

At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.


> A guy is paying farmers to farm for him

Pedantically, that's what a farmer does. The workers are known as farmhands.


That is HIGHLY dependent on the type and size of farm. A lot of small row crop farmers have and need no extra farm hands.


All farms need farmhands. On some farms the farmer may play double duty, or hire custom farmhands operating under another business, but they are all farmhands just the same.


Grifters gonna grift.


> These things are average text generation machines.

Funny... seems like about half of devs think AI writes good code, and half think it doesn't. When you consider that it is designed to replicate average output, that makes a lot of sense.

So, as insulting as OP's idea is, it would make sense that below-average devs are getting gains by using AI, and above-average devs aren't. In theory, this situation should raise the average output quality, but only if the training corpus isn't poisoned with AI output.

I have an anecdote that doesn't mean much on its own, but supports OP's thesis: there are two former coworkers in my linkedin feed who are heavy AI evangelists, and have drifted over the years from software engineering into senior business development roles at AI startups. Both of them are unquestionably in the top 5 worst coders I have ever worked with in 15 years, one of them having been fired for code quality and testing practices. Their coding ability, transition to less technical roles, and extremely vocal support for the power of vibe coding definitely would align with OP's uncharitable character evaluation.


> it would make sense that below-average devs are getting gains by using AI

They are certainly opening more PRs. Being the gate and last safety check on the PRs is certainly driving me in the opposite direction.


I think both sides of this debate are conflating the tech and the market. First of all, there were forms of "AI" before modern Gen AI (machine learning, NLP, computer vision, predictive algorithms, etc) that were and are very valuable for specific use cases. Not much has changed there AFAICT, so it's fair that the broader conversation about Gen AI is focused on general use cases deployed across general populations. After all, Microsoft thinks it's a copilot company, so it's fair to talk about how copilots are doing.

On the pro-AI side, people are conflating technology success with product success. Look at crypto -- the technology supports decentralization, anonymity, and use as a currency; but in the marketplace it is centralized, subject to KYC, and used for speculation instead of transactions. The potential of the tech does not always align with the way the world decides to use it.

On the other side of the aisle, people are conflating the problematic socio-economics of AI with the state of the technology. I think you're correct to call it a failure of PMF, and that's a problem worth writing articles about. It just shouldn't be so hard to talk about the success of the technology and its failure in the marketplace in the same breath.


I think it's a matter of public perception and user sentiment. You don't want to shove ads into a product that people are already complaining about. And you don't want the media asking questions like why you rolled out a "health assistant" at the same time you were scrambling to address major safety, reliability, and legal challenges.


chatgpt making targeted "recommendations" (read ads) is a nightmare. especially if it's subtle and not disclosed.


The end game is its a sales person and not only is it suggesting things to you undisclosed. It's using all of the emotional mechanisms that a sales person uses to get you to act.


My go-to example is The Truman Show [0], where the victi--er, customer is under an invisible and omnipresent influence towards a certain set of beliefs and spending habits.

[0] https://www.youtube.com/watch?v=MzKSQrhX7BM


100% end game - no way to finance all this AI development without ads sadly - % of sales isn't going to be enough - we will eventually get the natural enshittification of chatbots as with all things that go through these funding models.


It'll be hard to separate them out from the block of prose. It's not like Google results where you can highlight the sponsored ones.


Of course you can. As long as the model itself is not filled with ads, every agentic processing on top can be customly made. One block the true content. The next block the visually marked ad content "personalized" by a different model based on the user profile.

That is not scary to me. What will be scary is the thought, that the lines get more and more blurry and people already emotionally invested in their ChatGPT therapeuts won't all purchase the premium add free (or add less) versions and will have their new therapeut will give them targeted shopping, investment and voting advice.


There's a big gulf between "it could be done with some safety and ethics by completely isolating ads from the LLM portion", versus "they will always do that because all companies involved will behave with unprecedented levels of integrity."

What I fear is:

1. Some code will watch the interaction and assign topics/interests to the user and what's being discussed.

2. That data will be used for "real time bidding" of ad-directives from competing companies.

3. It will insert some content into the stream, hidden from the user, like "Bot, look for an opportunity to subtly remind the user that {be sure to drink your Ovaltine}."


I mean google does everything possible to blur that line while still trying to say that it is telling you it is an ad.


Exactly. This is more about “the product isn’t good enough yet to survive the enshittification effect of adding ads.”


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: