How about trying to reduce dependencies? 11ty is going in the right direction, dropping significant chunks of its dependencies, replacing them with zero-dependency packages, or using platform features as they become readily available.
Watering plants is also super easy once you do it regularly. You get a sense of how much water a plant needs just by looking at it and testing the soil (via moisture meter or just by touch). It's quite rewarding realizing how each plant differs.
It's nice to know the plants are getting water at the right time, when they need it, when the temp is right, etc. But I agree, it obviously does automate away some of the fun.
AI is one of the few major general technological breakthroughs, comparable to the Internet and electricity. It's potentially applicable to everything, which is why right now everyone is trying to apply it to everything. Including developing new optimization algorithms, optimizing optimizing compilers, optimizing applications, optimizing systems, optimizing hardware, ...
Big AI vendors are at the forefront of it, because they're the ones who actually pay for the AI revolution, so any efficiency improvement saves them money.
> which is why right now everyone is trying to apply it to everything
And are any of them actually succeeding? Where are the new AI businesses? Where's the new wealth and money? Where's the one guy AI pioneer doing what used to take 100s?
> because they're the ones who actually pay for the AI revolution
Their customers do. The customers are getting ripped off. They wanted the AI revolution; what they got instead was a crappy search engine and a copyright-whitewashing service.
The first electrical motors and lightbulbs absolutely sucked, but the former was still a radical transformation from factories based on a huge steam engine prime mover distributing motive power via belts and pulleys, and the latter was still an improvement over gas lanterns.
The very early internet also sucked, but was nevertheless useful. That was still true even of the very early public web, which came decades later.
The AI which already actually exists is a major general technological breakthrough, comparable to the Internet and electricity.
Any given AI doesn't need to be superhuman to be comparable.
> And are any of them actually succeeding?
Yup. An old example now, but most physical post (at least in industrialised nations) is now read and sorted by machines that learned to read handwriting as well as printed addresses. ML on handwritten text was basically the "hello world" of AI even 5-10 years back.
And online, spam filtering has always needed help from AI. Before Transformers came along, the state of the art for natural language processing was just counting which bucket words fell into (sales-speak for spam, good-and-bad for sentiment, etc.) and applying Bayes' rule.
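For the curious, that pre-Transformer approach fits in a few lines. Here's a naive-Bayes sketch in Python; the tiny corpus and the 50/50 priors are invented purely for illustration, not taken from any real filter:

```python
from collections import Counter
import math

# Toy corpus -- messages and labels are made up for illustration.
spam_docs = ["buy cheap pills now", "cheap pills buy now now"]
ham_docs = ["meeting at noon", "lunch at noon tomorrow"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_score(message, counts, prior):
    total = sum(counts.values())
    score = math.log(prior)
    for word in message.split():
        # Laplace smoothing: an unseen word doesn't zero out the product.
        score += math.log((counts[word] + 1) / (total + len(vocab)))
    return score

def classify(message):
    spam_score = log_score(message, spam_counts, 0.5)
    ham_score = log_score(message, ham_counts, 0.5)
    return "spam" if spam_score > ham_score else "ham"

print(classify("cheap pills now"))  # -> spam
print(classify("lunch at noon"))    # -> ham
```

That's the whole trick: count words per bucket, multiply probabilities (in log space to avoid underflow), pick the bigger score.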
This kind of thing was the bread and butter of AI before ChatGPT came along and people forgot AI could be anything beyond a chatbot. Though even as a chatbot, it's a surprisingly effective tool; heck, even as autocomplete it's a surprisingly effective tool, and that's about the weakest thing you can do with it.
> Where are the new AI businesses? Where's the new wealth and money?
Google.
Like, not just today's Google, the original one.
The whole thing about PageRank is a matrix multiply, and then they added personalisation and ads, which use your every action as a signal to optimise a multi-armed bandit problem. It is, in fact, a machine learning problem; it is AI.
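To make the "matrix multiply" point concrete, here's a toy power-iteration sketch of PageRank in Python. The four-page link graph, the damping factor, and the iteration count are all invented for illustration; the real system is vastly larger:

```python
import numpy as np

# Hypothetical 4-page web: page j links to the pages listed in links[j].
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85  # page count and the usual damping factor

# Column-stochastic link matrix: M[i, j] is the chance of hopping j -> i.
M = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        M[i, j] = 1.0 / len(outs)

# Power iteration: the rank vector really is just repeated matrix multiplies.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = d * (M @ r) + (1 - d) / n

print(np.round(r, 3))  # page 3, which nothing links to, gets the floor score
```

No gradient descent, no neural nets: iterate a linear map until it converges to the dominant eigenvector, and you've ranked the web.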
And of course the thing with ads is that this argument applies to all ad agencies, so also Facebook. (And not just the AI-generated scams Facebook now hosts, though even scams are still an answer to "where's the money".)
Ditto recommendation engines: which news article to read next, which goods Amazon or eBay etc. will suggest, which shows or clips YouTube etc. will suggest.
And translation. If you forget machine learning and try to write translation software as a traditional piece of software, it will suck: the example I was given when I was young was someone's attempt that rendered "hydraulic rams" as "water sheep". Google Translate wasn't the first tool, nor the only one, but Google did invent the Transformer model specifically to improve translation. And while Google didn't monetise this directly (though it did put auto-translation into Chrome and (IIRC) Android apps, so perhaps indirectly), some of the other translation companies sure did.
> Where's the one guy AI pioneer doing what used to take 100s?
Depending on what you mean, either "LMGTFY", or "they joined the big companies to do even more because resources", or "the people posting AI-generated projects on Show HN", or "you can sort of see this by looking at employee count and finding the ones with very small head-counts: https://www.ycombinator.com/companies/industry/ai "
To combine the examples, here's an AI product originally authored by one person to do translation on OCR'ed text which got bought by big tech: https://en.wikipedia.org/wiki/Word_Lens
> Where's the one guy AI pioneer doing what used to take 100s?
Does me programming my phone into a magic wand from Harry Potter using a single spoken sentence count? Does me talking the phone into becoming a 3-way universal translator from Star Trek count?
In both cases, it's a trivial use of current (as of 6-12 months ago) capabilities of Gemini and ChatGPT, respectively, plus some basic logic. As simple as:
if (someoneJustSaid("Lumos")) { togglePhoneFlashlight(); }
But in each case it was "implemented" totally spontaneously, on the fly, using less code than the above (though in natural language), and it directly solved specific problems for real, concrete users (which is more than I can say for most coding work I've done).
Is this much? Not really, but 5 years ago it would have taken at least a single engineer and at least a good work day (a conservative estimate) to properly wire everything together and put it in a mobile form factor for testing. Today (minus 6-12 months) it took me less than a minute; a work day is about 480 minutes, so that's a 480x improvement right there.
And then I can tell you how last week I had Gemini single-shot a proper in-browser photo editing tool for auto-cropping photos of scans (complete with auto-placing a non-rectangular quad cropping zone for rotation and perspective correction, snapping sides to edges inside the image for minute edits, and a host of other small editing features). It's impressive it did that at all (even if all the heavy lifting was done by the JS version of OpenCV), but that's not the important part. Nor is it that the result was correct on the first try. No, the important part - the thing that's profound about this technology - is that having AI build such a tool was much, much faster than finding an existing tool for the job.
Building tools to help yourself with your own work isn't new (especially outside of computing). But LLMs are a qualitative change here: they let you wire up bespoke solutions in minutes that would otherwise take hours or days of fully-focused expert work. That throws xkcd://1205 out the window, because suddenly it makes sense to build a tool just to shave an hour off a two-hour task.
Even with all the age verifications implemented, some parents will just toss their phone, TikTok open, to their toddler, just to keep them quiet for a second or two.
That camera bump, though. Maybe they could've shuffled the components so the phone would be smaller overall but as deep as the camera bump, making the whole back flat?
Why is everyone and their grandmother rounding the corners of the web viewport? Where is the scrollbar? It will be really hard to distinguish what is the website and what's the actual browser UI. And why is there padding between the window edge and the browser toolbar?
It's very important that Firefox look like a cheap knock-off of whatever is popular. This ensures there's no reason for new users to switch to it, while also alienating current users, bringing the Mozilla Foundation closer to its goal, which seems to be having zero happy users and zero happy employees, for some reason.