I understand your feelings. You spent years working hard to learn and master a complex craft, and now seeing that work feel almost irrelevant because of AI can be deeply unsettling.
However, this can also be an opportunity to gain some understanding about our nature and our minds. Through that understanding, we can free ourselves from suffering, find joy, and embrace life and the present moment as it is.
I am just finishing the book The Power of Now by Eckhart Tolle, and your comment made me think about what is explained in it. Tolle talks about how much of our suffering comes from how deeply we (understandably) tie our core identity and self-worth to our external skills, our past achievements, and our status among peers.
He explains that our minds construct an ego, with which we identify. To exist, this ego needs to create and constantly feed an image of itself based on our past experiences and achievements. Normally we do this out of fear, in an attempt to protect ourselves, but the book explains that this never works. We actually build more suffering by identifying with our mind-constructed ego. Instead of living in the present and accepting the world as it is, we live in the past and resist reality in order to constantly feed an ego that feels menaced.
The deep expertise you built is real, but your identity is so much more than just being a 'principal engineer'. Your real self is not the mind-constructed ego or the image you built of yourself, and you don't need to identify with it.
The book also explores the Buddhist concept that all things are impermanent, and by clinging to them we are bound to suffer. We need to accept that things come and go, and live in the present moment without being attached to things that are by their nature impermanent.
I suggest you might take this distress you are feeling right now as an opportunity to look at what is hurting inside you, and disidentify yourself from your ego. It may bring you joy in your life—I am trying to learn this myself!
I'm reading The Compassionate Mind by Paul Gilbert and I find it shares many similar ideas. Also I've been interested by Buddhist concepts like impermanency for a while.
While I rationally think what you said is good and makes sense, at the same time it feels like it's telling me to forget my roots and be this impermanent being existing in the present and only the present. I value everything about my life: the past, my role models when I was a kid, my past and current skills, all my friends from all ages, my whole path essentially. When considering the choices in front of me, I feel more drawn to ask "What has been my path and my values so far, and what makes sense now?" instead of forgetting the past and my ego and just hustling with the $CURRENT technology.
At least that's how I have thought about my ego when approaching topics like these. Maybe I could make more money in the present if I just disidentified from it, but that thought legitimately feels horrifying because it would mean devaluing my roots.
I think you're right when you say: "What has been my path and values previously, and what makes sense now?" That is actually a sensible way to approach the present moment.
Disidentifying from your ego doesn't mean you have to act like a stateless robot with amnesia. Your past experiences, your role models, and your skills are still there for you to recall; they are tools that help guide your decisions. Disidentifying just means you don't let the mind-constructed image of those things define who you are. It means you don't have to constantly mull over the past, and you don't feel threatened when the things you valued in the past end or change.
However, I was really struck by your comment that disidentifying would feel horrifying because it would mean "devaluing your roots" to make more money. I am wondering if this is what you really think.
Imagine if letting go of that specific past identity led you to a truly marvelous opportunity in the present: not just more money, but working with wonderful people, doing engaging things, and being genuinely happy. Would that really be horrifying just because it didn't perfectly align with your roots? Probably not.
I suspect what you actually find horrifying isn't "devaluing your roots," but rather the idea of selling out. The real nightmare is getting a well-paid but completely soulless job where you are unhappy, working on things you don't care about, or being treated like a disposable cog who just takes orders.
I analyzed the text using Pangram, which is apparently reliable; it says "fully human written" without ambiguity.[1]
I personally like the content and the style of the article. I never managed to accept going through the pain of installing and using Visual Studio and all the absurd procedures they impose on their users.
These days I'm always wondering whether what I'm reading is LLM-slop or the actual writing of a person who contracted AI-isms by spending hours a day talking to them.
It's incredible that Google is letting OpenAI eat their lunch by capturing users while Google focuses on ad revenue.
OpenAI offered ChatGPT for free to anyone—even if not their best model—without needing to be logged in. That's crucial for attracting and retaining casual users.
If you compare this to what Google was at the beginning, it was just a simple interface to search the web: no questions asked, no subscription, no login. That was one of the secrets that led people to adopt Google Search when it was new (the other being result quality). It was a refreshing, simple page where you typed something and got results without any friction.
Now, with Gemini, Google finally has an excellent LLM. But a casual user can't use it unless they: 1. have a Google account, and 2. are logged in.
One might ask, "What's the matter? Everyone has a Google account." But the login requirement isn't as harmless as it seems. For example, if you want to quickly show a friend Gemini on their PC, but they use Safari and aren't logged into Google—bummer, you can't show them. Or a colleague asks about Gemini, but you can't log in with a personal account on a work machine. Gemini is immediately excluded from the realm of possibility. In the good old days, anyone could use Google at work instantly.
Right now, the companies capturing users are OpenAI (with the accessible ChatGPT brand) and Microsoft (with Copilot integrated into Microsoft 365). My company, for instance, sent a memo stating we must use Copilot with our corporate accounts for data security.
Google has botched this. They don't seem to understand that they are losing this round. They still have a strong position with Search and Android, but it’s funny to watch them make this huge strategic mistake.
NOTE: Personally, I dislike ads unless they are privacy-friendly and discreet (like early Google). If OpenAI starts using invasive ads, I will stop using ChatGPT immediately, just as I stopped using Google Search in favor of Kagi.
>a casual user can't use [Gemini] unless they: 1. have a Google account, and 2. are logged in.
Is this a regional thing? I can use Google AI Mode without being logged in just fine. AI summaries for certain queries are also auto-generated when logged out for me.
Going to https://gemini.google.com works fine for me when not logged in. It might be doing some sort of reputation check on your browser/IP to decide whether it requires a login or not.
edit: sure enough, while using Tor or a well-known VPN IP, Gemini requires I log in.
That's not inconsistent with what I reported. It seems to require it sometimes, but not others, for mysterious reasons.
Are you and your colleague both trying at work? Probably on the same IP? Google might attribute less trust to an IP shared between many different users than it does to a regular residential internet IP (like mine).
Did some more testing and the behavior is interesting. When connecting through a Mullvad node in the US it doesn't require login, but through any Mullvad node outside the US it does. I might be wrong and it's just a per-country policy.
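The per-exit testing above can be scripted. A minimal sketch: the idea is to classify the redirect URL curl reports for gemini.google.com; the accounts.google.com marker is my assumption about how a login wall would surface, not verified behavior.

```shell
#!/bin/sh
# Sketch: decide whether a redirect URL looks like a Google login wall.
# Assumption (not verified): a login wall redirects to accounts.google.com.
needs_login() {
  case "$1" in
    *accounts.google.com*) echo yes ;;
    *)                     echo no  ;;
  esac
}

# In practice you would feed it the redirect curl reports, e.g.:
#   needs_login "$(curl -s -o /dev/null -w '%{redirect_url}' https://gemini.google.com)"
# and run the same command through each Tor/Mullvad exit you want to test.
```

Running this through each exit node would turn the anecdotes above into a small per-country table.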
It seems that coffee offers some protection against gout, which used to be quite a common health problem.
I agree. In addition to chemical elements like water, as mentioned in the article, the impact with Theia also enabled strong magmatic activity in the planet's core, and that was a critical element for sustaining life as well.
The strong magnetic field generated by the Earth's core was probably key to maintaining the atmosphere, and the magmatic heat also helped keep the planet warm enough to support life when the young Sun provided significantly less radiation.
All these elements suggest that such a collision may be necessary, which adds very strict requirements about where the planet is located and about the size and composition of the colliding body. This makes the probability of life-sustaining planets in the Drake equation extremely low.
Indirect evidence of how tight these conditions are is the fact that Earth, over its history, went through periods of climate extremes hostile to life: the Snowball Earth episodes, when the planet was completely covered by ice and snow, or at the opposite extreme, very hot periods when the greenhouse effect dominated the climate.
I found the questioning of love very interesting. I have myself wondered whether an LLM can have emotions. Based on the book I am reading, Behave: The Biology of Humans at Our Best and Worst by Robert Sapolsky, I think LLMs, as they are now and with the architecture they have, cannot have emotions. They just verbalize as if they sort of had emotions, but these are only verbal patterns and responses they learned.
I have come to think they cannot have emotions because emotions are generated in parts of our brain that are not logical or rational. They emerge in response to environmental stimuli, mediated by hormones and other complex neurophysiological systems, not from reasoning or verbalization. So they don't arise from our logical or reasoning capabilities. However, once raised, these emotions are integrated by the rest of the brain, including rational regions like the dlPFC (the dorsolateral prefrontal cortex, the real center of our rationality). They thus enter our inner reasoning and affect our behavior.
What I have come to understand is that love is one such emotion, generated by our nature to push us to take care of people close to us: our children, our partners, our parents, and so on. More specifically, love seems to be heavily mediated by hormones like oxytocin and vasopressin, so it has a biochemical basis. An LLM cannot have love because it lacks the "hardware" to generate these emotions and integrate them into its verbal inner reasoning. It was just trained with human-feedback reinforcement learning to behave well. That works to some extent, but from its training corpora it also learned to behave badly and can on occasion express those behaviors; still, it has no emotions.
I was also intrigued by the machine's reference to it, especially because it posed the question with full recognition of its machine-ness.
Your comment about the generation of emotions strikes me as quite mechanistic and brain-centric. My understanding, and lived experience, has led me to an appreciation that emotion is a kind of psycho-somatic intelligence that steers both our body and cognition according to a broad set of circumstances. This is rooted in a pluralistic conception of self that is aligned with the idea of embodied cognition. Work by Michael Levin, an experimental biologist, indicates we are made of "agential material": at all scales, from the cell to the person, we are capable of goal-oriented cognition (used in a very broad sense).
As for whether machines can feel, I don't really know. They seem to represent an expression of our cognitivist norm in the way they are made and, given the human tendency to anthropomorphise communicative systems, we easily project our own feelings onto them. My gut feeling is that, once we can give the models an embodied sense of the world, including the ability to physically explore and make spatially motivated decisions, we might get closer to understanding this. However, once this happens, I suspect that our conceptions of embodied cognition will be challenged by the behaviour of the non-human intellect.
As Levin says, we are notoriously bad at recognising other forms of intelligence, despite the fact that global ecology abounds with examples. Fungal networks are a good example.
> My understanding, and lived experience, has led me to an appreciation that emotion is a kind of psycho-somatic intelligence that steers both our body and cognition according to a broad set of circumstances.
Well, from what I understood, it is true that some parts of our brain are more dedicated to processing emotions and to integrating them with the "rational" part of the brain. However, the real source of emotions is biochemical, coming from the body's hormones in response to environmental stimuli. An LLM doesn't have that. It cannot feel the urge to hug someone, or to be in love, or the parental urge to protect and care for children.
Without that, an LLM can only "verbalize" about emotions, as learned from its training corpora; there are no actual emotions, just things it learned and can express in a cold, abstract way.
For example, we recognize that a human can act and speak so as to fake an emotion without actually having it. We know how to behave and speak as if we felt a specific emotion, while in our mind we know we are faking it. An LLM, by contrast, is physically incapable of having emotions, so all it can do is verbalize about them based on what it learned.
> people claiming "AI" can now do SWE tasks which take humans 30 minutes or 2 hours
Yes, people claim that, but anyone with a grain of sense knows it is not true. In some cases an LLM can write a Python or web demo-like application from scratch, and that looks impressive, but it is still far from really replacing a SWE. The real world is messy and requires care. It requires planning, making some modifications, getting feedback, proceeding or going back to the previous step, and thinking it through again. Even when a change works, you still need to go back, double-check, make improvements, remove stuff, fix errors, and treat corner cases.
The LLM doesn't do this; it tries to do everything in one single step. Yes, even in "thinking" mode it thinks ahead and explores a few possibilities, but it doesn't do the several iterations that many cases require. It does a first draft the way a brilliant programmer might in one attempt, but it doesn't review its work. The idea of feeding errors back to the LLM so it will fix them works in simple cases, but in the more common, more complex ones it leads to catastrophes.
Dealing with legacy code is also much harder for an LLM, because it has to cope with the existing code and all its idiosyncrasies. That requires a deep understanding of what the code is doing and some well-thought-out planning to modify it without breaking everything, and the LLM is usually bad at that.
In short, LLMs are a wonderful technology, but they are not yet the silver bullet some pretend them to be. Use one as an assistant on specific tasks where the scope is small and the requirements are well-defined; that is the domain where it excels and is actually useful. You can also use it to get a good starting point in a domain you are not familiar with, or for help when you are stuck on a problem. Attempts to give the LLM a task too big or too complex are doomed to failure, and you will be frustrated and waste your time.
> The other might be more humbling: how significant are we? Or, as a statement instead of a question, we are the only significant thing of which we know.
We may assume that we are the only intelligent life in the universe and that life on our planet is highly significant. Humanity itself faces a great challenge in finding its way. We are currently in a dark period of our evolution—one where we have mastered a great deal of technology to make our lives materially comfortable, yet we have not mastered the "demons" within our minds. We fail to control them as individuals, and even less so as societies. These demons were instilled in us by natural evolution, serving us well until the Neolithic age. But in the modern era, they have become our greatest enemy. At this point, the biggest problem facing humanity is human nature itself. We stand on the brink of destroying our planet in numerous ways. Humans have already caused one of the greatest mass extinctions of large animals in Earth's history.
One argument supporting the theory that Earth is the only planet with advanced life is the growing realization of how many rare conditions must be met for life to emerge. In the past, scientists believed it was enough for a planet to be located within the habitable zone of its star. We are now beginning to understand that this is merely one of the most basic requirements among many others.
Earth itself has come close to losing all its life on multiple occasions—such as during the Snowball Earth period—despite the Sun remaining stable and the planet still being within the habitable zone.
One crucial factor for sustaining life is a planet’s internal magmatic activity, which must be powerful enough to generate a stable magnetic field. This field protects the atmosphere from being stripped away by solar winds. Additionally, it seems that magmatic activity played a key role in warming the planet during its early years when the Sun’s radiation was weaker. In fact, the gradual increase in solar radiation over billions of years appears to have offset the decrease in Earth's internal heat, maintaining the planet’s temperature within a range suitable for life to thrive.
However, Earth's prolonged and vigorous magmatic activity appears exceptional, likely because a colossal collision with a rogue protoplanet—the event known as the Giant Impact Hypothesis—not only formed the Moon but also injected an enormous amount of thermal energy into the young Earth. This impact created a long-lasting magma ocean phase, effectively resetting the planet's internal heat and driving rapid mantle convection and differentiation. Such enhanced magmatic activity contributed to the early formation of a stable geodynamo, which has sustained Earth's magnetic field and, consequently, its atmosphere over geological time.
For all we know, Earth may be unique in the universe, but we are far from certain enough to make such a claim.
The other possibility is that intelligent life exists elsewhere, but the barriers imposed by the speed of light—combined with the unimaginable vastness of the universe—may render it impossible for advanced civilizations to find or communicate with one another. Who knows? Perhaps the universe was created by some form of intelligence that ensured life could develop, but only in such rare and distant pockets that no two civilizations could ever reach each other, or even communicate.
EDIT: expanded the paragraph about the big impact hypothesis.
An airplane is far less energy-efficient than a bird to fly, to such an extent that it is almost pathetic. Nevertheless, the airplane is a highly useful technology, despite its dismal energy efficiency. On the other hand, it would be very difficult to scale a bird-like device to transport heavy weights or hundreds of people.
I think current LLMs may scale the same way and become very powerful, even if not as energy-efficient as an animal's brain.
In practice, we humans, when we have a technology that is good enough to be generally useful, tend to adopt it as it is. We scale it to fit our needs and perfect it while retaining the original architecture.
This is what happened with cars. Once we had the thermal engine, a battery capable of starting the engine, and tires, the whole industry called it "done" and simply kept this technology despite its shortcomings. The industry invested heavily to scale and mass-produce things that work and people want.