Last I checked, humans didn't pop into existence doing that. It happened after billions of years of brute-force, trial-and-error evolution. So well done for falling into the exact same trap the OP cautions against. Intelligence from scratch requires a mind-boggling amount of resources, and humans were no different.
To be fair, it is still pretty remarkable what the human brain does, especially in the early years - there is no text embedded in the brain, just a crazily efficient mechanism for learning hierarchical systems. As far as I know, AI cannot do anything similar to this - it generally relies on giga-scaling, or on finetuning for tasks similar to those it already knows. Regardless of how this arose, or whether it's relevant to AGI, it is still a uniqueness of sorts.
Human babies "train" their brain on literally gigabytes of multi-modal data dumped on them through all their sensory organs every second.
In a very real sense, our magic superpower is that we "giga-scale" with such low resource consumption, especially considering how large (in terms of parameters) the brain is compared to even the most advanced models we have running on those thousands of GPUs today. But that's where all those millions of years of evolution pay off. Don't diss the wetware!
How is that relevant? The relevant comparison is the human brain at the point of birth (or some time before that) versus an LLM doing inference. The training part is irrelevant, the same way the human brain's evolution is.
Do you think evolutionary pressures are the best explanation for why humans were able to posit the Poincaré conjecture and solve it? While our mental architecture evolved over a very long time, we still learn from minuscule amounts of data compared to LLMs.
We were optimized to rapidly adapt to changing environments by solving the problems that arise through tool-making and cooperation in complex multi-stage tasks (like say hunting that mammoth to make clothing out of it). It turns out that the cheapest evolutionary pathway to get there has some interesting emergent phenomena.
20 watts ignores the startup cost: Tens of millions of calories. Hundreds of thousands of gallons of water. Substantial resources from at least one other human for several years.
Just an interesting thought experiment: if you took all the sensory information that a child experiences through their senses (sight, hearing, smell, touch, taste) between, say, birth and age five, how many books' worth of data would that be? I asked Claude, and their estimate was about 200 million books. Maybe that number is off by an order of magnitude in either direction. ...but then again, Claude is only three years old, not five.
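For what it's worth, the back-of-envelope math is easy to sketch. The bandwidth and book-size figures below are pure assumptions (estimates of effective sensory bandwidth vary wildly), so this is just one plausible set of numbers:

```python
# Rough estimate of 5 years of sensory input measured in "books".
# Both constants are assumptions, not measurements.
BYTES_PER_SECOND = 1_000_000   # assumed effective multi-sensory bandwidth
BYTES_PER_BOOK = 1_000_000     # assumed size of a typical book's text

seconds_in_5_years = 5 * 365 * 24 * 3600
total_bytes = BYTES_PER_SECOND * seconds_in_5_years
books = total_bytes / BYTES_PER_BOOK

print(f"{books:,.0f} books")   # ~158 million with these assumptions
```

With 1 MB/s you land in the same ballpark as Claude's 200 million; bump the bandwidth up or down and the answer moves linearly, which is the "off by an order of magnitude" part.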
We have a tremendous amount of raw information flowing through our brains 24/7 from before we are born, from the external world through all our senses and from within our minds as it attempts to make sense of that information, make predictions, generally reason about our existence, hallucinate alternative realities, etc. etc.
If you were able to somehow capture all that information in full detail as you've had access to by the age of say 25, it would likely dwarf the amount of information in millions of books by several orders of magnitude.
When you are 25 years old and are presented with a strange-looking ball and told to throw it into a strange-looking basket for the first time, you are relying on an unfathomable amount of information turned into knowledge, and on countless prior experiments that you've accumulated and exercised up to that point relating to the way your body and the world work.
Humans are "multi-modal". Sure, we get plenty of non-textual information, but LLMs were trained on basically every human-written word ever. They definitely see many orders of magnitude more language than any human ever has. And yet humans become fluent after about three years.
If you treat the human brain as a model, and account for the full complexity of neurons (one neuron != one parameter!) it has several orders of magnitude more parameters than any LLM we've made to date, so it shouldn't come as a surprise.
What is surprising is that our brain, as complex as it is, can train so fast on such a meager energy budget.
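Rough numbers, all of them public ballpark estimates rather than measurements (and "one synapse ~ one parameter" is itself a generous simplification in the other direction):

```python
# Order-of-magnitude comparison of brain synapses vs. LLM parameters.
NEURONS = 8.6e10               # ~86 billion neurons, commonly cited estimate
SYNAPSES_PER_NEURON = 1e4      # ballpark; estimates range ~1e3-1e4
LLM_PARAMS = 1e12              # roughly the scale of today's largest models

brain_params = NEURONS * SYNAPSES_PER_NEURON   # ~8.6e14 "parameters"
print(f"brain/LLM ratio: ~{brain_params / LLM_PARAMS:.0f}x")
```

And that counts each synapse as a single scalar, while real neurons have rich internal dynamics, so the true gap is plausibly larger still.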
You are right, but at the same time the human brain does way more stuff (muscle coordination, smell, touch sensing) and all those others take up at least some budget.
Interesting question, but I'm not convinced it's only a scale issue. Finished models don't really learn the same way humans do - we actually change our parameters "at runtime", updating the model itself, so learning isn't limited to the current context.
For sure, it seems like there's something there primed to pick up human language quickly, clearly evolutionarily driven.
Not necessarily so for the dynamics of magnetic fields, or nonhuman animal communications, or dark energy/matter.
We are bombarded nonstop by magnetic fields, nonhuman animal communications, and live in a universe which seems to be majority dominated by dark energy and matter, and yet understand little to none of it all.
To be fair, the knowledge embedded in an LLM is also, at this point, a couple orders of magnitude (at least) larger than what the average human being can retain. So it's not like all those books and text in the internet are used just to bring them to our level, they go way beyond.
Math and coding competition problems are easier to train because of strict rules and cheap verification.
But once you go beyond that to less defined things such as code quality, where even humans have a hard time putting down concrete axioms, they start to hallucinate more and become less useful.
We are missing the value function that allowed AlphaGo to go from mid range player trained on human moves to superhuman by playing itself.
As we have only made progress on unsupervised learning, and RL is constrained as above, I don't see this getting better.
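The cheap-verification asymmetry is easy to make concrete. The checker below is a toy, not any lab's actual reward setup:

```python
# Toy illustration of why math/competition tasks are easy RL targets:
# the reward is a cheap, exact check against a known answer.
def math_reward(model_answer: str, expected: int) -> float:
    try:
        return 1.0 if int(model_answer.strip()) == expected else 0.0
    except ValueError:
        return 0.0   # non-numeric output scores zero

print(math_reward("9", 2 + 7))   # exact, instant, ungameable

# For "code quality" there is no such oracle. Any cheap proxy
# (lint score, test pass rate) is gameable and misses what humans
# actually mean by the term.
def code_quality_reward(source: str) -> float:
    raise NotImplementedError("no agreed-upon, cheaply computable definition")
```

That one-line verifier is the analogue of AlphaGo's win/loss signal; the missing piece for open-ended coding is anything comparably exact.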
I’ve seen this style of take so much that I’m dying for someone to name a logical fallacy for it, like “appeal to progress” or something.
Step away from LLMs for a second and recognize that “Yesterday it was X, so today it must be X+1” is such a naive take and obviously something that humans so easily fall into a trap of believing (see: flying cars).
In finance we say "past performance does not guarantee future returns." Not because we don't believe that, statistically, returns will continue to grow at x rate, but because there is a chance that they won't. The reality bias is actually in favour of these getting better faster, but there is a chance they do not.
This is true because markets are generally efficient. It's very hard to find predictive signals. That is a completely different space than what we're talking about here. Performance is incredibly predictable through scaling laws that continue to hold even at the largest scales we've built.
I agree this is a new space and prediction volatility is much higher. We have evidence going back to at least 2019 that improvements have been exponential (https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...). The benchmarks are all over the place because improvements don't happen in a straight line. Even composites aren't that useful because the last 10% improvement can require more effort than the first 90%.
To be frank, from what I can see, even if all progress stopped right now, it would take 1-2 decades to fully operationalise the existing potential of LLMs. There would be massive economic and social change. But progress is not stopping, and in some measurements, continues to improve exponentially. I really think this is incredibly transformative. Moreso than anything humanity has ever experienced. In the last year, OpenAI and potentially Claude have been working on recursive self-improvement. Meaning these models are designing better versions of themselves. This means we have effectively entered the singularity.
I agree with all of this -- the one nit I'll say is that scaling laws (e.g. Chinchilla -- classic paper on this that still holds) are based on next-token log loss on an evaluation set for pretraining, and follow (empirically) very consistent powerlaw relationships with compute / data (there is an ideal mixture of compute + data, and the thing you scale is the compute at this ideal mixture). So that's all I mean by performance -- we do also have as you observe benchmark performance trends (which are measured on the final model, after post-training, RL stages etc). These follow less predictable relationships, but it's the pretraining loss that dominates anyway.
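For concreteness, the Chinchilla-style parametric fit looks like this. The constants are the ones reported in the Chinchilla paper (Hoffmann et al. 2022), but they depend on the data mix, so treat them as illustrative:

```python
# Chinchilla-style parametric loss: L(N, D) = E + A/N**alpha + B/D**beta
# N = parameters, D = training tokens. Constants from the published fit;
# they vary with the data mixture, so take them as illustrative only.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Scaling params and data together: loss falls smoothly toward E,
# the irreducible term that no amount of compute removes.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e} D={d:.0e} -> loss {loss(n, d):.3f}")
```

The power-law form is exactly why pretraining loss is so predictable: each 10x of model and data buys a smaller, but precisely forecastable, slice of the remaining reducible loss.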
Even more insane than assuming the trend will continue is assuming it will not continue. We don't know for sure (especially not by pure reason), but the weight of probability sure seems to lean one direction.
Logical fallacies are vastly overrated. Unless the conversation is formal logic in the first place, "logical fallacies" are just a way to apply quick pattern matching to dismiss people without spending time on more substantive responses. In this case, both you and the other are speculating about the near future of a thing, neither of you knows.
Hard to make a more substantive response when the OP’s entire comment was a one-sentence logical fallacy. I’m not cherry-picking here.
> In this case, both you and the other are speculating about the near future of a thing, neither of you knows.
One of us is making a much grander claim than the other:
- LLMs have limitless potential for growth; because they are not capable of something today does not mean they won’t be capable of it tomorrow
- LLMs have fundamental limitations due to their underlying architecture and therefore are not limitless in capability
> We went from 2 + 7 = 11 to "solved a frontier math problem" in 3 years, yet people don't think this will improve?
All that says is that the speaker thinks models will improve past where they are today. Not that it's a logical certainty (the first thing you jumped on them for), and certainly not anything about "limitless potential for growth" (which nobody even mentioned). With replies like this, invoking fallacies and attacking claims nobody made, you're adding a lot of heat and very little light here (and a few other threads on the page).
> All that says is that the speaker thinks models will improve past where they are today. Not that it's a logical certainty
Exceedingly generous interpretation in my opinion. I tend to interpret rhetorical questions of that form as “it’s so obvious that I shouldn’t even have to ask it”.
The term of art for that is steelmanning, and HN tries to foster a culture of it. Please check the guidelines link in the footer and ctrl+f "strongest".
A possibility is not a fact. Assuming a possibility will happen is not justified. It is therefore false as an assumption, even if it is true as a possibility.
I genuinely have no idea what you're on about. One guy expressed his belief about how the future will play out, and another disagreed. Time will be the judge of it, not either of us.
Hmm... the sun coming up today is a pretty good bet that the sun will come up tomorrow.
We have robust scaling laws that continue to hold at the largest scales. It is a very safe bet that more compute + more training + algorithmic improvements will improve performance; it's not like we're rolling a one-trillion-dollar die.
Well, if people give the exact same 'reasons' why it could not do some task in the past that it then did manage to do, it is tiring to see the same nonsense again. The reason here does not even make much sense: this result is not easily verifiable math.
Yeah, and even if we accept that models are improving in every possible way, going from this to 'AI is exponential, singularity etc.' is just as large a leap.
The scaling law is a power law, requiring orders of magnitude more compute and data for better accuracy from pre-training. Most companies have maxed it out.
Next stop is inference scaling with longer context window and longer reasoning. But instead of it being a one-off training cost, it becomes a running cost.
In essence we are chasing ever smaller gains in exchange for exponentially increasing costs. This energy will run out. There needs to be something completely different than LLMs for meaningful further progress.
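To put a number on "ever smaller gains": if error falls as a power law in compute, each halving of error costs a fixed multiple of compute. The exponent below is illustrative, not a measured value:

```python
# If error ~ C**(-alpha), then halving the error requires multiplying
# compute by 2**(1/alpha). alpha = 0.3 is an illustrative exponent,
# not a measured one.
alpha = 0.3
compute_multiplier_per_halving = 2 ** (1 / alpha)
print(f"~{compute_multiplier_per_halving:.1f}x compute per halving of error")
```

With an exponent in that range, every halving of error costs roughly an order of magnitude more compute, which is the "exponentially increasing costs" in a nutshell.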
I tend to disagree that improvement is inherent. Really I'm just expressing an aesthetic preference when I say this, because I don't disagree that a lot of things improve. But it's not a guarantee, and it does take people doing the work and thinking about the same thing every day for years. In many cases there's only one person uniquely positioned to make a discovery, and it's by no means guaranteed to happen. Of course, in many cases there are a whole bunch of people who seem almost equally capable of solving something first, but I think if you say things like "I'm sure they're going to make it better" you're leaving to chance something you yourself could have an impact on. You can participate in pushing the boundaries or even making a small push on something that accelerates someone else's work. You can also donate money to research you are interested in to help pay people who might come up with breakthroughs. Don't assume other people will build the future, you should do it too! (Not saying you DON'T)
Unfair - the human beats the AI in this comparison, as a human will instantly answer "I don't know" instead of yelling a random number.
Or at best "I don't know, but maybe I can find out", and proceed to finding out. But they are unlikely to shout "6" just because they heard that number once when someone talked about light.
Because LLMs don't have a textual representation of any text they consume. It's just vectors to them. Which is why they are so good at ignoring typos: the vector distance is so small it makes no difference to them.
What bothers me is not this issue (it will certainly disappear now that it has been identified), but that we have yet to identify the category of these "stupid" bugs ...
We already know exactly what causes these bugs. They are not a fundamental problem of LLMs, they are a problem of tokenizers. The actual model simply doesn't get to see the same text that you see. It can only infer this stuff from related info it was trained on. It's as if someone asked you how many 1s there are in the binary representation of this text. You'd also need to convert it first to think it through, or use some external tool, even though your computer never saw anything else.
> It's as if someone asked you how many 1s there are in the binary representation of this text.
I'm actually kinda pleased with how close I guessed! I estimated 4 set bits per character, which with 491 characters in your post (including spaces) comes to 1964.
Then I ran your message through a program to get the actual number, and turns out it has 1800 exactly.
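For anyone curious, the programmatic count is a one-liner once the text is actually converted to bytes - which is the point of the analogy, since the model never gets the raw characters either:

```python
# Count the 1-bits in the UTF-8 encoding of a string.
def set_bits(text: str) -> int:
    return sum(bin(byte).count("1") for byte in text.encode("utf-8"))

# 'H' = 0b01001000 has 2 set bits, 'i' = 0b01101001 has 4, total 6.
print(set_bits("Hi"))  # 6
```

The 4-bits-per-character guess above holds up well on typical ASCII prose, since lowercase letters mostly sit in the 0x61-0x7a range, where 3-5 bits are set.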
Okay, but (genuinely not an expert on the latest with LLMs) isn't tokenization an inherent part of LLM construction? Kind of like support vectors in SVMs, or nodes in neural networks? Once we remove tokenization from the equation, aren't we no longer talking about LLMs?
It's not a side effect of tokenization per se, but of the tokenizers people use in actual practice. If somebody really wanted an LLM that can flawlessly count letters in words, they could train one with a naive tokenizer (like just ascii characters). But the resulting model would be very bad (for its size) at language or reasoning tasks.
Basically it's an engineering tradeoff. There is more demand for LLMs that can solve open math problems, but can't count the Rs in strawberry, than there is for models that can count letters but are bad at everything else.
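A toy greedy tokenizer makes the tradeoff visible: once a word becomes a few subword IDs, the letter count simply isn't present in the model's input. The vocabulary below is made up, not a real BPE:

```python
# Toy longest-match tokenizer with a made-up subword vocabulary.
VOCAB = {"str": 0, "aw": 1, "berry": 2, "r": 3, "a": 4, "w": 5, "b": 6}

def tokenize(word: str) -> list[int]:
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest piece first
            if word[i:j] in VOCAB:
                tokens.append(VOCAB[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {word[i:]!r}")
    return tokens

# The model sees three opaque IDs, not ten letters:
print(tokenize("strawberry"))  # [0, 1, 2]
```

Counting the r's from `[0, 1, 2]` requires memorizing the spelling of each token, which is exactly the kind of incidental fact a model only picks up if the training data happens to mention it.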
LLMs in some form will likely be a key component in the first AGI system we (help) build. We might still lack something essential. However, people who keep doubting AGI is even possible should learn more about The Church-Turing Thesis.
AGI is definitely possible - there is nothing fundamentally different in the human brain that would surpass a Turing machine's computational power (unless you believe in some higher powers, etc).
We are just meat-computers.
But at the same time, there is absolutely no indication or reason to believe that this wave of AI hype is the AGI one and that LLMs can be scaled further. We absolutely don't know almost anything about the nature of human intelligence, so we can't even really claim whether we are close or far.
> We went from 2 + 7 = 11 to "solved a frontier math problem" in 3 years, yet people don't think this will improve?
This is disingenuous... I don't think people were impressed by GPT 3.5 because it was bad at math.
It's like saying: "We went from being unable to take off and the crew dying in a fire to a moon landing in 2 years, imagine how soon we'll have people on Mars"
This is not formally verified math, so there is no real verifiable-feedback aspect here. The best models for formalized math are still specialized ones, although general-purpose models can assist formalization somewhat.
Maybe to get a real breakthrough we have to make programming languages / tools better suited to LLM strengths, and not fuss so much about making them write code we like. What we need is correct code, not nice-looking code.
> programming languages / tools better suited for LLM strengths
The bitter lesson is that the best languages / tools are the ones for which the most quality training data exists, and that's pretty much necessarily the same languages / tools most commonly used by humans.
> Correct code not nice looking code
"Nice looking" is subjective, but simple, clear, readable code is just as important as ever for projects to be long-term successful. Arguably even more so. The aphorism about code being read much more often than it's written applies to LLMs "reading" code as well. They can go over the complexity cliff very fast. Just look at OpenClaw.
I guess it's hard to tell until we see more long-term AI-generated project, but many of the ones we have so far (OpenClaw and OpenCode for instance) are well-known for their stability issues, and it seems "even more AI" is not about to fix that.
> But once you go beyond that to less defined things such as code quality
I think they have a good optimization target with SWE-Bench-CI.
You are tested on continuous changes to a repository, spanning multiple years in the original repository. Cumulative edits need to be kept maintainable and composable.
If something is missing from "can be maintained for multiple years incorporating bugfixes and feature additions" as a definition of code quality, then more work is needed, but I think it's a good starting point.
What is possible today is one thing. Sure people debate the details, but at this point it's pretty uncontroversial that AI tooling is beneficial in certain use cases.
Whether or not selling access to massive frontier models is a viable business model, or trillion-dollar valuations for AI companies can be justified... These questions are of a completely different scale, with near-term implications for the global economy.
Except it's not how this specific instance works. In this case the problem isn't written in a formal language and the AI's solution is not something one can automatically verify.
I mean, even if the technology stopped to improve immediately forever (which is unlikely), LLMs are already better than most humans at most tasks.
Including code quality. Not because they are exceptionally good (you are right that they aren't superhuman like AlphaGo), but because most humans are not that good at it anyway, and also somehow "hallucinate" out of tiredness.
Even today's models are far from being exploited to their full potential, because we have developed pretty much no tools around them except tooling to generate code.
I'm also a long-time "doubter", but as a curious person I used the tool anyway, with all its flaws, over the last 3 years. And I'm forced to admit that hallucinations are pretty rare nowadays. Errors still happen, but they are very rare and it's easier than ever to get it back on track.
I think I'm also a "believer" now, and believe me, I really don't want to be: as much as I'm excited by this, I'm also pretty much frightened of all the bad things this tech could do to the world in the wrong hands, and I don't feel like it's particularly in the right hands.
Yep, I remember a friend saying they did a maths course at university that had the correct answer given for each question - this was so that if you made some silly arithmetic mistake you could go back and fix it and all the marks were for the steps to actually solve the problem.
This would have greatly helped me. I was always at a loss as to which trick I had to apply to solve an exam problem, even while knowing the mathematics behind it. At some point you just had to add a zero that was actually part of a binomial, which then collapsed the whole formula.
That is mostly how humans work too. Once in a blue moon we may get an "intuition", but most of the time we lean on collective knowledge, biases, and behavior patterns to make decisions, write, and talk.
What’s funny is that there are total cranks in human form that do the same thing. Lots of unsolicited “proofs” being submitted by “amateur mathematicians” where the content is utter nonsense, but like a monkey with a typewriter, there’s the possibility that they stumble upon an incredible insight.
In simpler terms: they create an MCP server, essentially an API that the coding agent can call, which can fill in context about decisions made by the coding agent earlier in development. The agent equivalent of asking someone who's been working there longer, "why is this this way?"
This means the agent will have the context of previous decisions, something agents currently struggle with, as they always start from a blank slate.
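A minimal sketch of the idea in plain Python (everything here - the names, the storage, the matching - is hypothetical; a real implementation would speak the actual MCP protocol):

```python
# Hypothetical decision-log tool an agent could query. This only sketches
# the concept; a real server would expose it over the MCP wire format.
decisions: list[dict] = []

def record_decision(topic: str, rationale: str) -> None:
    """Called when the agent (or a human) makes a notable design choice."""
    decisions.append({"topic": topic, "rationale": rationale})

def why(topic: str) -> list[str]:
    """The agent's 'why is this this way?' lookup."""
    return [d["rationale"] for d in decisions if topic in d["topic"]]

record_decision("db/schema", "denormalized users table to avoid join latency")
print(why("db/schema"))
```

The interesting design question is retrieval: substring matching like this is obviously too crude for a real codebase, and a production version would want semantic search over the log.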
Coding agents starting from a blank slate isn’t good practice to begin with. That’s a vibe coding practice, not a practice that you’d start with when you want to build a real business serving customers. You start with specifications and design documents; you don’t leave those decisions to agents (although you can use agents to help design them). So the context ought to be there already.
Red meat (a known carcinogen) at the top is gold. All that saturated fat the energy will come from (rather than from protein or veggies) will probably cause heart problems and plaque formation in the arteries, not to mention insulin resistance just from the increased FFAs in the blood.
Vegetarians and vegans have lower T2D incidence on average FWIW.
> Vegetarians and vegans have lower T2D incidence on average FWIW.
Anecdotally, my dad tried vegetarianism for quite a while to address his T2D, but it had no effect. My mom cut out sugar and processed carbohydrates and her T2D was gone in ~3 months or so.
Following any diet is probably better than nothing at all, which could explain the lower incidence of T2D in that group vs the general public. I’d be more curious about the rates in vegetarians/vegans vs people who eat paleo or even carnivore.
Treating T2D and preventing T2D are completely different things from a dietary perspective. Same way you wouldn't give chemotherapy to a healthy person to prevent cancer.
There are studies that support it. Here is a meta-analysis of low-carb diets for T2D; the majority show it works, though, as always, there is going to be some individual variability.
Also, red meat isn't a known carcinogen. Processed meat is. And plaque formation in arteries is a consequence of inflammation... which is caused by sugar, a.k.a. carbohydrates. Insulin resistance is also a consequence of increased carbohydrate consumption.
But as I said, it is the combination of fats and carbs that is the worst killer. Eliminating either one of them from the diet leads to an automatic improvement.
I would interpret physical fitness as a cardio exercise routine and depleted muscle glycogen stores:
so breakfast is very welcome, and without it it's not possible to keep up the exercise routine.
Yeah, that's vanilla syntax. The semantics are fairly magic though. The component function that calls useState isn't a normal function: it must be called in a special way by the React runtime in order to line up all of the hidden state that React maintains, so that it can magically infer the data that your `useState` call maps to, and then there's more magic to maintain the lifetime of that data.
Yes, but it is not syntax. It's a contract with the library. React is completely usable using vanilla JS syntax. Same cannot be said for Vue and Angular.
It feels a bit like talking about apples and oranges in this thread.
Your example usage only implies what I would consider the non-magic implementation behavior. I could fulfill that contract with `(initial) => { let s = initial; return [ s, v => s = v]; }`. No hidden magic there, and no chance of breaking referential identity.
The swapping is indeed faster as the SSD is on the SoC and so fast to access.
To the point that a 4-year-old 8GB M1 Air is enough for simpler development work, at least for me.
In Amsterdam they are commuting, and in a fantastic infrastructure where cars get red lights when bicycles approach on an intersecting cycleway. That's probably the main reason for safety and why they ride so much.
There are very few places where the light changes automatically for bikes in Amsterdam - all the ones I can remember now don't. The large majority of lights do respond to pressing the crossing button (also pressable by cyclists), but it's not automated.
They do use "change on approach" lights outside of the cities way more, but in cities it's usually only for trams and buses.
I dislike this black-and-white rhetoric from both sides. "Just do some workout" - "no, this doesn't work for me". Yes, working out does help, but mental illness is still real. Both sides should try to be more sensitive and more understanding, in my opinion.
I can't fix my social anxiety through working out. But I sure feel better about myself when I do it, and can then approach those anxieties with more confidence; the anxieties themselves are still there.
I'm speaking from experience regarding mental illnesses and exercise. And I never discounted medication.
It's just that exercise is critically underprescribed; I'm fairly sure it would work better for milder cases than meds would. Not to mention the other health benefits listed in the thread.
Same way an opioid pill is still prescribed in cases of cancer or severe pain. Just that there are probably better, milder alternatives that don't have as many side effects that could fit a lot of these people with milder problems.
I am 90% on your side (my experience is just that most doctors or therapists ask about my workout routine before considering meds). The truth is just that every mental illness is different.
So yeah, my takeaway is we should embrace working out - or maybe not necessarily a straight workout, just simple movement/exercise - more than meds. Especially here in Germany I notice a very prejudiced mindset about doctors. I myself have had very good experiences, either out of pure luck, or because I went to them genuinely believing that they are professionals who can help me. And that's what they did (most of the time, not always).
I'm not saying that workouts will necessarily fix your social anxiety or any other mental disorder, but I don't know of anything else that necessarily will - meds and psychotherapy are also quite limited in their effectiveness.
All I'm saying is people should at least consider that exercise (and more specifically - mild to rigorous cardio workouts) can be just as effective as psychotherapy / meds are. The evidence is there.
I don't expect this understanding to come from therapists, this needs to come from society at large.
Also, it doesn't have to be mutually exclusive, you can do both.
My depression is severe and exercise does make a difference. I’m not sure why you’d think that.
It doesn’t make the bad thoughts go away. It doesn’t turn off the bad feelings. I’d still be diagnosed with severe depression if I went in with a fresh slate. What it does is give me the energy to endure it, though. The physiological symptoms subside quite a bit, and it makes a meaningful difference.
It also helps more than medication since I seem to be a non-responder. It’s a big help in my life.
Agreed, but mild to moderate depression is the majority of people with mental health issues - that's where we should start.
Also, I suspect it would help a lot in severe depression too, but it's hard to get someone with severe depression to exercise; in that case meds should be the way to go.
Yep exactly this.
The thing is, we are now so removed from exercise (and healthy living in general) as a society - take the car, take the elevator, sit at your desk the whole day, and then fall asleep on the couch at home. And paradoxically this lifestyle makes us so tired and energy-depleted that even the thought of starting to exercise seems ridiculous to many.
This makes it super hard for many people to start exercising and persist; it seems like everything in modern society is geared toward making us couch zombies - so no surprise we have high levels of obesity, anxiety, depression, and what not.
It's not that hard a leap, is it? For some reason, to non-sufferers it seems a given that a physical treatment could work for a mental ailment.
For most folks, that connection doesn't exist. Hell, I work out 3x a week and even I don't notice the obvious side-effects even though I'm certain they exist.
When we're dealing with ordinary people living their daily lives, the idea that something so "non-mental" - in the most literal sense, "physical" - can have an effect on the mental is a really tough thing to swallow, understand, and hell, even perceive when things are going well.
Sorry. But I'm an avid gym-goer and even I have to remind myself of the positive it's doing. We're not all the same.
> Sorry. But I'm an avid gym-goer and even I have to remind myself of the positive it's doing. We're not all the same.
Maybe you're one of the people for whom, for whatever reason, exercise does nothing - though I highly doubt it. I'm not sure what training you do exactly, but to reap most of the benefits the workout should include moderate cardio work. I don't think lifting weights for 40-60 minutes with plenty of rest between sets will cut it. Running for 45+ minutes is what most people should aim for; of course, beginners will do less.
Anyway, I agree with you - for most folks the connection doesn't exist; perhaps it's time this changed.
Hmm, I've never felt any "noticeable" positive effect of exercise either.
I didn't exercise basically at all for well over a decade. But I felt fine - I wasn't overweight or gaining weight or anything like that.
But I decided that I should probably exercise, so have been for awhile now. Fortunately I have what appears to be a very high amount of self control so I'm able to just force myself to go exercise even though I hate every second of it and it just feels like a waste of time.
I haven't noticed any changes to anything that I can think of since going from no exercise to 3-4 days a week of about 1 hour sessions of zone 2+ exercise.
I just keep waiting for this magical benefit that some people talk about, but I get nothing.
I'm only doing it because if I don't, supposedly "bad things" will happen to my body in the future.
Interesting. What kind of exercise do you do? and how do you define Zone 2 (there are many contradicting definitions out there)?
I would try going a bit to zone 3/4 at times (of course while being careful not to injure yourself) not only to get the 'noticeable' positive effect, but also because it seems like you get bored/frustrated a lot in your workouts and it doesn't have to be that boring.
Anyway, keep at it - I hope you will enjoy it more and get those nice feelings everyone is talking about. Try noticing whether your sleep is a bit deeper and better after hard sessions, how your energy levels are, etc. For most people there will be improvement in those areas (you could be an outlier, but I kind of doubt it).
Also - don't take this the wrong way, but it's going to be very hard for you to notice anything positive about the whole thing if you're convinced you hate every minute of it. I'm not sure how you can get out of this mindset, but I think it's important that you do. Or try different kinds of exercise that you don't hate.
Well aside from weight lifting I mostly use the stair machine because I feel that it best matches the activity that I do like to do sometimes (hiking).
I was defining zone 2 mainly by how it feels: not too hard, where I can breathe fairly normally and easily hold a conversation. But also by keeping my heart rate in the 65-75% range, so at age 36 I was keeping it around 125-130. My resting heart rate is 52, but I'm not sure if that matters. It's always been that, even through my decade-plus of being sedentary.
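For what it's worth, here's a quick sketch of the arithmetic behind that 65-75% band, using the rough 220 − age estimate for max heart rate (which, as noted below, is often inaccurate) plus the Karvonen method, a common alternative that factors in resting heart rate. The intensity bounds and the numbers plugged in are just the ones from this thread, not a recommendation:

```python
# Two common ways to estimate a heart-rate band for a given intensity.
# Max HR here uses the crude 220 - age estimate; a proper test or
# chest strap gives a more accurate number for a specific person.

def zone_simple(age, lo=0.65, hi=0.75):
    """Band as a plain percentage of estimated max HR."""
    max_hr = 220 - age
    return round(max_hr * lo), round(max_hr * hi)

def zone_karvonen(age, resting_hr, lo=0.65, hi=0.75):
    """Karvonen method: applies the percentage to heart-rate
    *reserve* (max HR minus resting HR), then adds resting HR back."""
    max_hr = 220 - age
    reserve = max_hr - resting_hr
    return (round(resting_hr + reserve * lo),
            round(resting_hr + reserve * hi))

print(zone_simple(36))        # (120, 138) - the 125-130 target sits inside this
print(zone_karvonen(36, 52))  # (138, 151) - notably higher with a low resting HR
```

Note how much the two methods disagree for someone with a resting HR of 52 - which is part of why "zone 2" definitions seem contradictory.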
Another reason I was doing all zone 2 is that I thought I had some sort of aerobic deficiency syndrome from being sedentary for so long. Basically my heart rate would shoot up into zone 3 with pretty minimal exercise, and I read that the only way to fix this was to do lots and lots of long zone 2 exercise for months.
I'm sure if I did more fun things it would be easier to be enthusiastic about it, but I am not even sure what active activity I would like. Sure I like hiking, but that's something I like to do on a trip somewhere exotic like a state or national park, not something I can easily do regularly locally.
My energy levels honestly feel somewhat more depleted when I am working out. Like I just want to take a nap after a workout and I feel like nodding off. Not like instantly, but maybe like an hour after or so.
I just haven't been able to understand or feel the connection people find with exercise. Like I said, I never felt any issue or lack in my energy levels, mood, sleep, or focus when I was more or less completely sedentary, and I always watched what I ate, so I was healthy in that regard (perfect scores on blood tests), normal weight, etc.
So exercise just feels like a time waster - an uncomfortable time sweating, and overall possibly leaving me a little more tired and drained because of it.
My only motivation to keep doing it is the prospect that it will help prevent some sort of future complications and health issues, and I guess that's good enough to convince myself to keep going.
Hating it is maybe too strong of a word, but I definitely don't look forward to it in any way and I just want to get it over with for the day so I can move onto something that I actually enjoy. It just feels like a chore. Something that we need to do to live a good life, so we do it.
It's possible your max HR is way above what's typical for your age (the 220 - age thing is really inaccurate). You can buy a chest strap / get properly tested to find out.
I would try to make at least some of the workouts a bit more challenging - you could try picking up running. Or if you're working the stair machine, do it faster and for longer for at least part of the time. This will not only make it more interesting for you - it's also the only way to improve your VO2 max if all you're doing is 3 workouts a week (and a higher VO2 max should be your goal if you're thinking about longevity/health).
As for energy levels - I meant in general. After hard workouts I can be quite exhausted for 24 hours sometimes. That's normal. But my sleep is usually higher quality and the day after I'm a bit more vital. My mood is more stabilized etc.
Anyway, good luck! I hope you'll take away from this that it's OK to change/mix things up and see what works for you.
I did always kind of wonder if I'm even using the right heart rate zones. But it's not like I am training for any specific purpose - just to be healthy - so I doubt I need to go to the level of doing a lab test to find that out.
I can definitely just increase the intensity.
Overall I don't feel like I have ever been in touch with my body or mood or things like that.
Like I don't know how I would judge how well I slept on a given night. I don't normally wake up in the middle of the night or anything like that. I don't really feel I can gauge my sleep quality by how I feel when I wake up because when I wake up for work on a weekday I am always tired. I just assume that's because I am not a morning person, and I only feel rested and good if I wake up more like around 10-11am on a day that I don't have to be up earlier.
As for mood, I rarely feel like I am in tune with that much either. Maybe it's a learned skill and I just never took the time to develop it or something. I feel more or less the same the vast majority of the time and that's about how well I can put it. If my mood is changing, it's not something I normally "notice" on its own I guess.
It's always a struggle for me to decide what exactly to do, because when I look up that information, everything I find contradicts something else in some way - and it only gets worse the more I read into it.