Capital is not what most people conceive it to be. Capital is commonly (and correctly) viewed as a "store of wealth", or as "the crystallisation of labour-time" - but capital comes in different flavours. Goods which provide an incremental benefit to the user are unambiguously capital; money is more ambiguous - whether it counts as capital depends on what you do with it.
Often, people confuse money with capital. Money is an exchange mechanism which facilitates the transfer of value between parties through a common unit of exchange. Money can be viewed as a subset of capital (i.e. as a sedentary store of value), or it can be viewed as the destruction of the benefits of labour, as it converts an active instrument (i.e. the work I am doing, or the goods I have produced) into a sedentary instrument which provides no incremental benefit, unless it is spent.
So - how this relates to the article: capitalism is a system which currently optimises for money. Step back and consider that converting labour and value into money effectively freezes the benefit of that labour away. Accruing money as an end of its own then ceases to make sense, as much of this value ends up lying in the ledgers of corporates, governments, or wealthy individuals, doing nothing for anybody - other than perhaps soothing some of the more fragile elements of human psychology (the need for security).
We have let a means become the end, and have forgotten that the Purpose of all of this is to promote human happiness - I mean, what other possible Purpose can there be?
Some would disagree that the purpose of existence is to promote happiness.
Robert Nozick offered a thought experiment that captures why I would not opt to maximize happiness:
> Nozick asks us to imagine a machine that could give us whatever desirable or pleasurable experiences we could want. Psychologists have figured out a way to stimulate a person's brain to induce pleasurable experiences that the subject could not distinguish from those he would have apart from the machine. He then asks, if given the choice, would we prefer the machine to real life? [0]
The problem here is that you've created a scenario that talks about a single human brain, but "maximizing human happiness" doesn't refer to any single person's brain. It refers to the state of our entire social environment as viewed through the aggregate of individual experience.
And you're probably vastly underestimating the complexity and comprehensiveness demanded of such a machine.
Would I like to impose the experience machine on all of humanity, or at least all those that I love?
My answer doesn't change. Actually, I would go to great lengths to prevent imposing such a fate on humanity.
I may be naive in thinking this, but human emotions and euphoria can probably be boiled down to chemical reactions in the brain, such that the brain activity of someone experiencing something pleasurable in "real life" and that of someone taking a substance simulating those same feelings would be indistinguishable.
However, I believe there is so much more to existence than our own perception even though I am not spiritual at all.
Again, I don't think you are aware of what "indistinguishable from reality" actually entails. The capabilities are so extreme there that you'll have essentially created your own universe.
And what do you care if you live in a different universe? You don't even understand this one well enough to know the difference. So you're basically making this decision on the grounds of your own ignorance.
What if this world is a simulation, and the real world is a wonderful and happy place? You don't know the difference, so what business do you have taking a stand on the issue?
>However, I believe there is so much more to existence than our own perception even though I am not spiritual at all.
What I had originally thought of as an experience machine was drugs that would produce chemically equivalent reactions as those produced from our experiences.
"Spiritual" usually has an anthropomorphic meaning, which is what I object to. It is defined as "of, relating to, or affecting the human spirit or soul as opposed to material or physical things".
Maybe a better way of stating it is that I am not an empiricist who believes that sensory experience is the be-all and end-all.
That's a better way of putting it, though what I was really getting at was that, whatever reason you have for believing in the importance of non-empirical information, you could employ that same reasoning to believe in anything spiritual as well. If what you believe isn't supported by evidence, then a rational person has no reason to agree to it.
Don't underestimate how easy it is for a human brain to confuse wishful thinking with rational belief. I know what you are talking about -- a sense of objectivity -- but I see no reason to believe it is a valid point of view, rather than a pretense of human desire.
I think this response has something to do with the fact that artificial stimulation is often regarded as deceptive in western culture. So the decision is heavily influenced by how people see it being judged by their social environment. You can achieve an interesting twist by asking: Would you hook your brain to a pleasure machine if everybody else agrees to use it too (and of course it wouldn't cause any social or cognitive side-effects, boredom, fatigue etc.)?
I wonder what Robert Nozick would think of recreational drug use.
That is as close to that "machine" as we have in reality, and it's pretty clear people are willing to accept/risk the negative impacts and use such drugs.
This being the case, I'm pretty sure people would use that machine if it were legal and cheap. Similarly, I'm pretty sure at least some of those people would prefer it to real life.
Similarly, the only reason I prefer "real life" is the relationships I've built with people. If I had been given the choice, say, 10+ years ago to have such a machine ... I'd have preferred it to RL and the effort it took to build the kind of life I want.
So it's a philosophical question whose answer is "it depends on the person and the RL situation they have".
Of course some people unhappy with their lives would prefer to live in an alternate reality. I don't mean to suggest everyone would prefer "real life", but that the majority of people don't simply choose to maximize happiness at the expense of other things.
One could argue that the value you get from your "real life" relationships with people is nothing more than a simple chemical reaction taking place in your brain that could also be simulated.
The internal experience of an individual is beside the point (the stimulation of watching an engaging movie is literally real in your brain!); all the issues about generating happiness for the human species revolve around the sustainability of that happiness. You kind of can't handwave away the social implications of promoting happiness within society. Also, the arguments of "we'd rather DOOOOOO something than just EXPERIENCE doing it" obviously either rely on some sort of dualist existence or just straight out say the experience machine isn't as good as advertised.
Replace "drugs" with "machine" and we already have such a thing. However, the pleasure from drugs is stigmatized and prohibited, and addiction to the pleasure machine is seen as life-destroying.
IMO - it's not the pleasure itself that is stigmatized, but the side effects. These include the deteriorating health of the user, as well as the occasional resorting to immoral or even criminal means to feed the habit.
There's also a whole dog-whistle aspect to the 'war on drugs' (e.g. massive differences in sentences on crack vs. cocaine offences)
You live life with significantly more "cheat codes" than the rest of the world, now what makes you think you aren't already living with "cheat codes" you aren't aware of? Since you could also program your reality to enjoy the benefits of the "cheat codes" without actually being aware of them, then there's a bit of cosmic irony to your statement.
Because I happen to care about things-in-themselves, and I can have a lot of fun and be very happy with things-in-themselves. Experience machines are the kind of thought experiment invented by people who can't conceive of real life being a nice place to live, and thus constantly want to escape into the un-real.
>capitalism is a system which currently optimises for money.
If that were the case then cash reserves would be where the bulk of people poured their resources. In fact, the majority of non-debt "capital" is held in real estate and some combination of financial instruments (CDs, bonds, common stock, etc.).
You are confusing how the assets are valued with what is actually being valued. What I think you are saying, though, is that capitalism is optimizing for having the highest dollar-denominated balance sheet - which could be true, but I think that is a function of the people, not a function of open markets or of the valuation function that assigns ownership to capital.
Out of my depth here, but aren't the real estate and financial instruments being held merely as a generator of more money (except in the rare case that one actually lives on the real estate, I suppose)?
Generally yes, but the yields on real estate are small relative to the notional value.
Further, by necessity most things are held at a notional value, and the sum of all these notional values far outweighs the total sum of all the money people are holding in bank accounts. Now, obviously everyone in the world couldn't sell everything, achieve the notional values, and put the proceeds in bank accounts: for everyone to sell there would have to be buyers, and we can't have buyers if everyone is selling. But if somehow we could, the amount of money in banks would probably go up 100x or so, but definitely at least 10x.
That means that real assets far outweigh money in the economy, so it's kind of strange to think about 'capital' as money rather than as assets. Assets are things which are generally useful in themselves in some particular way (a car, a house, a factory, etc.), rather than as pure potential, which is more what money is.
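To make the back-of-the-envelope reasoning above concrete, here is a toy sketch with entirely made-up figures (the 10x-100x range is the comment's guess, not measured data):

```python
# Toy illustration of notional asset values vs. money actually sitting in
# bank accounts. All numbers are invented for the sake of the ratio.
bank_money = 1.0        # total money held in bank accounts (arbitrary unit)
notional_assets = 50.0  # sum of notional values: real estate, stocks, bonds...

# If everyone could somehow sell at notional value (they can't - every
# seller needs a buyer), bank balances would multiply by roughly:
ratio = notional_assets / bank_money
print(ratio)  # 50.0 with these toy numbers - squarely in the guessed 10x-100x range
```

The point survives any plausible choice of numbers: as long as notional assets dwarf bank money, "capital" is better thought of as assets than as cash.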
It's a mix, but in general you can look at the risk profile of these assets as a range between "storage" and "return". At the "storage" end are historically stable asset classes with lower returns, like government bonds; at the "return" end are volatile, high-risk assets like tech startups.
Again though, it's not "money" per se, i.e. cash, that they are trying to generate - though in the case of dividend-yielding assets as part of a retirement portfolio that is true - but rather to put their money into an asset that beats inflation, with variable levels of security and maintenance, a.k.a. risk. I guess you could say that you always want a higher cash value over time, but I think that is a vulgar interpretation of how a market works.
Real estate, especially in major Western metropolises (London, Paris, New York, etc.) is also seen as a very stable store of value. If you are a newly-minted millionaire capitalist from an unstable developing country, parking your cash in foreign real estate is quite attractive.
You might think so at first glance, but the telling question is whether you're holding those things to generate money to hold onto or whether you just want that money to buy other things?
It depends upon whether capitalism == the current form of managing the money supply (e.g. fractional reserve loans, Federal Reserve's open market operations).
>Capital is commonly (and correctly) viewed as a "store of wealth", or as "the crystallisation of labour-time"
Sorry, but what? I'm guessing that you don't even really believe this and that you don't even know what those phrases are supposed to mean. Why else would you put them in quotes -- twice?
I can only guess that this is supposed to mean that capital somehow binds up prior effort into a claim on (others?) future effort?
That is a fiction (likely told to you by someone with "capital"). It's a fine theory, but it is only that. There is no physicality to capital in your definition. It is purely a social construct.
A more traditional definition of capital is that it refers to a physical machine which substitutes for labor.
>capital somehow binds up prior effort into a claim on (others?) future effort?
Actually, yes. Money is a lien on future production. Or demand rights. That's specifically what debt creates: an obligation to someone else for future efforts of the debtor.
Debt is not a demand on future effort or production. Debt is merely an accounting artifact; it can make no real claims on the physical world.
Many will exert effort in exchange for money, but calling it a lien on future production is incorrect. No one has any obligation to perform effort in exchange for whatever it is that you call money.
No, not all money is debt in the sense that it's loaned into existence. But it does create a general expectation, and quite often a legal obligation, to accept it as payment. That is, any money-denominated debt is fulfilled by a credible offer to pay in cash (unless previously otherwise negotiated).
Even if simply printed into existence, money is a right of demand, that is, if I run a printing press and drop-ship you a pallet-wrapped billion in Benjamins, you've got equal rights to anyone else to bid that money against anyone else on goods or products.
Capital wealth is similar: it's a legal fiction (though an awfully well-established one) that by right of claim to some property, the owner thus has other capabilities - including, significantly, serving as collateral for debts, which can be used to provide liquidity: hocking a ring at a pawn shop, a home equity loan, corporate paper, or the very significant sovereign debt available to holders of petrochemical resource claims or reserve-currency authorities.
Aside: this is among the reasons why there's much hay made over claims of personal worth, the solvency of reserve currencies, and over whether and to what extent petrochemical reserves might be stranded - above and beyond merely comparing net worth, these determine access to loans from others. Donald Trump and oil companies / oil-producing nations are notably testy about the first and second; much of the noise emanating from RT (and much echoed through ZeroHedge) is based on concerns over access to capital (Trump, Oil) and on weakening the dollar and, through that, the United States' access to credit markets.
Since definitions of what property is are determined by law, precedent, force of arms, etc., you'll also see a lot of resorting to that as well.
But yes: money is ultimately a created fiction, a universally exchangeable good whose fundamental supply is unlimited - a reality constrained largely by the fact that increasing its supply without restraint does lead to a diminution in the exchange value of units currently in existence, and of all instruments denominated in currency amounts (e.g., debts, bonds, contracts, etc.). Each individual currency unit is a demand on future production, though the increment of that demand is not itself fixed for all time (inflation/deflation).
There's more on this concept in Modern Monetary Theory. My own thoughts are independently conceived, but I'm finding it's quite similar.
It's a faulty analogy. In fact, a paperclip maximizer goes against every single principle of capitalism out there.
The paperclip maximizer is bad exactly because it isn't based on capitalism; it will maximize paperclips regardless of whether someone values them or not! At _any cost_, it will maximize paperclips.
Markets, on the other hand, have the goal of providing products/services that people are willing to pay for. They adapt: the more people value something, the more capital it will attract, and the more resources will be allocated to it.
You've identified one of the key defects in capitalism today... the disconnect from a "true" market where value is measured by folks' willingness to pay for things.
Not more capitalism :), just determining a better way to manage the money supply than a few folks on the boards of central banks. If we could somehow connect an increase or decrease of the money supply to a classic market mechanism (e.g. people buying and selling), I think it would make a very positive impact for everyone.
The article irrationally assumes the best-effort maximizer has no competition and a long life, which we have not seen in endless lower-powered implementations, and there seems to be no trend - so unless it starts going exponential...
There might be a barrier here, like the speed of light, or various mathematical limits that formalize the old saying that too many cooks spoil the broth. The complexity required of an uber-maximizer to handle the complex universe means it's unprovably complicated and unreliable. Or you'd need a computer bigger than the universe to answer the halting question for a universe-controlling control loop - or even for something much smaller, like a mere multinational corporation. Another way to put it: a predatory competitor-maximizer would inevitably be better at predation than a victim that focuses more on some arbitrary goal than on avoiding becoming a meal. So in a hyper-optimized playing field all you'd get is ever-better predators eating each other and eating weaker, non-predation-focused orgs - basically the modern business/finance system.
Implicit in the statement "giving people what they want" is a set of assumptions about what "want" means. Do I want life-saving medicine? Do I want a soul-sucking job that pays the bills? Do I want to be de-facto-required to participate in Facebook in order to socialize? I may voluntarily choose all of these things, and while that is unambiguously better than being forced to do them by violence, that's subtly distinct from "wanting" them.
There are also varied spectra of wants: on some level, I want to pursue self-actualization goals that are years away. On another level, I want to step out of the rat race and read a good book. On yet another level, I want to watch TV, eat candy, and click a button that delivers a dopamine reward directly to my brain. These wants are frequently in conflict, and capitalism is more incentivized (or capable) to satisfy some of those than others. (I remember a good PG essay about the phenomenon of short wants vs. long wants, but I can't find it; neighboring ideas are found in http://paulgraham.com/addiction.html and http://paulgraham.com/distraction.html)
I don't think markets are a bad technology, any more than any other technology. But the paperclip analogy fits: people do want paperclips, but they also want a variety of other things that aren't paperclips. If we create social machines that optimize some wants over others, a plethora of human needs and desires may be drastically under-supplied as an unintended side effect.
"Implicit in the statement "giving people what they want" is a set of assumptions about what "want" means."
And while that's all very interesting and important in the abstract, it's worth double-checking that you don't get yourself lost in the weeds... which I think you did... and forget that the paperclip maximizer is scary precisely because it has no concept of wants in it. With capitalism, wants can be changed in real time, and the system reacts. The paperclip maximizer does not. They are fundamentally different.
There is a memeset that strongly encourages you to fling up whatever word smokescreen is necessary to ensure that nothing positive is said about Emmanuel Goldstein, today played by Capitalism, but it's still just a word smokescreen. There is a fundamental difference between a paperclip maximizer and capitalism, and you should not throw yourself into a word tizzy until you've confused your rational brain enough that you can fall back on the comfortable emotional judgments about capitalism. Not every bad thing that can be said about capitalism is true simply because it's a bad thing said about capitalism.
I think the point here is that under no reasonable definition of the words we are using does capitalism optimize for (or even attempt) giving people what they want. The only way to claim that is to tautologically redefine "what people want" as "what capitalism gives them", which I think is the point the poster you're responding to was getting at, and which I think you've completely missed.
And telling people they are thinking emotionally and not rationally is unbelievably condescending and unnecessarily insulting and has no place in a discussion forum like this. Please try to be respectful.
I think the point to take away is that capitalism is a machine which mindlessly maximizes SOMETHING, disregarding all other things, and that thing isn't necessarily good. I think you're very right to point out that the 'something' is actually dynamic in some sense, though that doesn't save capitalism from the argument here. Dynamic can be bad: I suspect it's dynamic based on the material desires of the particular tiny group of human beings who happen to currently possess a great deal of capital, which is not a particularly good thing.
You are right that the analogy breaks down at the assumption that humans would allow the paperclip machine to run amok; human wants do powerfully change and influence the market machine. At the same time, the abyss gazes into us, and human wants change in response to the machine as well.
I'll give an example that intersects the Pavlovian scare words of both "capitalism" and "government": the Prison-Industrial Complex. It's reasonable to want to remove members of society who are dangerously violent, or "cheaters" of sufficient scope. Yet iterate this desire over enough time, and it takes on a life of its own: politicians who want to look tough on crime, parents who want their fears assuaged, private companies who want to profit from correctional tax dollars, and most insidiously, the swaths of police, prison guards, and support staff who want paychecks, benefits and job security.
The final product is something we don't actually want: the most populous prison system in the world, which is massively expensive, socioeconomically predatory, and fails to rehabilitate most of its inmates. Yet there are enough stakeholders who do want the institution to persist for reasons of personal benefit (and enough taxpayers who want to imagine a cute moralistic story and are willing to pay for the luxury of ignorance), that a massive machine of oppression is created and sustained out of a simple, reasonable human want.
I'm not intrinsically anti-capitalist; I'm in the camp of Jaron Lanier, in that money, markets, and corporations are all technologies, and neither good nor evil. But technology should serve humans, and not the other way around; that means paying attention to the power of unchecked, autonomous feedback loops and their power to influence human behavior and our various social ecosystems.
>It's reasonable to want to remove members of society who are dangerously violent, or "cheaters" of sufficient scope. Yet iterate this desire over enough time, and it takes on a life of its own: politicians who want to look tough on crime, parents who want their fears assuaged, private companies who want to profit from correctional tax dollars, and most insidiously, the swaths of police, prison guards, and support staff who want paychecks, benefits and job security.
Of course, this assumes all the actors were initially well-intentioned, which they weren't. Plenty of people supported harsh policies of criminalization and "justice" because they wanted to come down hard on the dark-skinned and the poor.
While that may be true in some sense, there is a very important caveat. Capitalism maximizes giving people what they want in proportion to the capital they have.
The fact that the distribution is skewed by capital ownership and that agents other than people can own capital creates a powerful distortion and an entirely different systematic effect.
The problem isn't power; it's the behavior generated by the rules of the system. Because agents only receive output proportional to the capital they possess, they are incentivized to accumulate capital, and the system ends up optimizing for capital instead.
As the system progresses, agents get the choice between losing their allocation or using it to accumulate capital. This is unfortunate since, theoretically, the whole point of capitalism is to maximize long-term consumption.
> Capitalism maximizes "giving people what they want".
So do drug dealers who get girls hooked on heroin and put them on the streets. Simply because you serve some compulsion or desire that someone has does not make you benevolent; there are such things as high local-utility traps.
>Capitalism maximizes "giving people what they want". This is much closer to the benevolent AI models than to paperclip maximizers.
Yes, I'm sure that what I want involves working lots of hours to buy Boston real-estate. That surely wasn't determined for me by capitalism, in roughly the same way that a not-quite benevolent AI might determine that what I really ought to want is paperclips.
You want it more than you want not having any real estate, or having real estate in some place that isn't Boston, obviously! You want real estate in Boston - so do many other people, many more than who can physically own real estate in Boston. How is it decided who receives and who goes without?
Wants cannot be considered seriously independent of the reality of scarcity. The sum of the world's wants is greater than the sum of the world's resources, so we have to have some system of allocating those resources among those wants. There are many theories on how to do this, but free market trade is somewhat unique in that it is one of the only systems that acknowledges and allows for subjective value. So long as you are permitted to freely trade your time and resources, you are able to exchange them for whatever you want most - which is not the same as being able to exchange them for all that you want.
>You want it more than you want not having any real estate, or having real estate in some place that isn't Boston, obviously!
Well, no. What I actually want is to live with my fiancee and not be homeless, and she's from Boston and needs to live near family right now, so here we are. Revealed-preference theory is one subgoal stomp after another.
Certainly, and if we dig deeper, it's not that you want to live with your fiancee, it's that you want human companionship and sex, and we eventually get down into Maslow. But the fact of the matter is that you are making the choice that having real estate in Boston is more valuable to you than not working long hours - your want is indeed to work long hours to live in Boston, rather than to not work long hours and not live in Boston.
Among the choices available to you, you are getting what you want. That's not to suggest that the choices you have available are the choices you want, but I would suggest that the blame for that lies with the fact that we do not live in a post-scarcity society rather than on the resource distribution model we use.
>and if we dig deeper, it's not that you want to live with your fiancee, it's that you want human companionship and sex,
No, actually. I want to live with my fiancee. I could perhaps come up with a supergoal for that if I tried, but it would still involve this specific fiancee.
Some things are terminally good, others instrumentally. The chain of reduction stops at things I want for their own sake, long before it hits "Maslow".
>Among the choices available to you, you are getting what you want.
This is vacuous. Firstly, there might be some superior option nobody told me about, or that I didn't think of. Secondly, defining "what I want" as "what was available to me" amounts to rationalizing economics backwards into psychology while ignoring the available empirical evidence. You don't get to tell me what I want based on what I did, because there are in fact many, many filters between what I want and what I do.
>but I would suggest that the blame for that lies with the fact that we do not live in a post-scarcity society rather than on the resource distribution model we use.
These are effectively the same statement. We have more than enough resources to put everyone in the realm of diminishing returns to additional economic resources -- we just have capitalism instead of happiness.
(This is another reason why revealed preferences theories are crap.)
Economic systems are optimizers (each with slightly different objective functions).
One of the first things we learn about optimizers is that they are rarely globally suited to solving problems, and that the best technique is to identify and apply the locally optimal solution over the range of input.
TLDR: We need to admit that there is not a globally applicable economic system, and learn to apply different models where they are most efficient/least disastrous.
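The point about optimizers being locally rather than globally suited can be illustrated with a minimal sketch. The objective function here is an arbitrary toy with two basins, not an economic model; the names and numbers are invented for illustration:

```python
def f(x):
    # Toy objective: (x^2 - 4)^2 + x has a shallow minimum near x = 2
    # and a deeper (better) minimum near x = -2.
    return (x ** 2 - 4) ** 2 + x

def gradient_descent(x, lr=0.01, steps=2000):
    # Finite-difference gradient descent: a purely *local* method.
    for _ in range(steps):
        h = 1e-6
        grad = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * grad
    return x

# Each run settles into whichever basin it started in - the local
# optimizer never discovers the globally better solution on its own.
local = gradient_descent(3.0)    # lands near x = 2
better = gradient_descent(-3.0)  # lands near x = -2
print(f(local) > f(better))  # True: the starting point determines the outcome
```

The analogy to the TLDR: no single descent run (economic system) finds the global optimum; you pick starting points (models) per region of the input and compare results.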
Capital doesn't maximize capital; it maximizes productivity. In absence of controls, capital naturally flows to productivity in order to be maximized by the holders of capital; but only because productivity creates wealth, not the capital itself.
Capital only maximises productivity if it gains an advantage by doing so. In other words, productivity is a means for Capital to maximise itself, not an end. There are infinite examples of this: shrimp peeling machines have existed for ages, but shrimp is still mostly manually peeled in Thailand because it has a cheap workforce.
Productivity is the ratio of outputs to inputs. In an efficient competitive market, profit tends to zero. That would make capitalism a minimizer of productivity, as long as productivity is greater than 1 (otherwise the enterprise is not profitable).
Labour productivity, by economic definition, is the ratio of output to people. If most of the wealth generation in an organization is automated, labour productivity will be high, whereas if most of it is manual, labour productivity will be low, even when both organizations are producing the same amount of wealth.
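The contrast described above (same wealth produced, very different labour productivity) can be sketched directly from the definition, with illustrative headcounts and output chosen purely for the example:

```python
def labour_productivity(output, headcount):
    # Labour productivity as defined above: output per person.
    return output / headcount

# Two organizations producing the same wealth (made-up numbers):
automated = labour_productivity(output=1_000_000, headcount=10)    # mostly machines
manual = labour_productivity(output=1_000_000, headcount=1_000)    # mostly people

print(automated)  # 100000.0 per person
print(manual)     # 1000.0 per person
```

Identical output, a 100x difference in labour productivity - which is why the ratio says nothing by itself about total wealth created.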
Does it? For whom does productivity create wealth?
Productivity is about getting set results from the smallest set of resources (time, money, materials, etc.). It says nothing about wealth, nor where that wealth is distributed, nor whether the resources could have generated more wealth used in a different way. It's actually possible for increased productivity to decrease wealth.
Productivity is the creation of goods and services. Goods and services are the building blocks of wealth: one who has more goods and services is more wealthy. It is impossible to create more wealth without increased productivity.
Now, to your point, the fruits of productivity could be unfairly captured by non-producers, in which case wealth is being misappropriated from those who created it to those who have the power through whatever form to capture it (i.e., taxes, feudal land ownership, etc).
If I'm understanding this observation correctly, you're saying that capital flows to the productive, which creates wealth, which in principle generates more productivity, and therefore draws more capital?
People fantasize too much about capitalism. The definition of capitalism is: capital and income are expressed in the same monetary unit (much as mass and energy can be):
income = capital x interest rate
The former alternative was to put capital in a separate "namespace". In the Middle Ages, only nobles could hold land, and land was the only capital that could exist ("usury" was forbidden, you know). Then came mercantilism, which defined wealth as the quantity of gold one had; and so on.
Capitalism will certainly pass, but hopefully without going back to restricting access to capital. I imagine something like a "complex" interest rate, with the imaginary part representing natural resources, while the real part is the human effort that is the only thing currently measured by the monetary unit. But bimetallism failed everywhere it was adopted, and I think the imaginary part of a price would become dominant, since natural resources are becoming the limit of economic output.
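The income relation above, and the speculative "complex" rate, can be sketched with Python's built-in complex numbers. To be clear: the split into real = human effort, imaginary = natural resources is the commenter's thought experiment, not an established model, and all the numbers are arbitrary:

```python
# The basic capitalist identity: income = capital x interest rate.
capital = 1000.0
rate = 0.05
income = capital * rate  # 50.0

# Speculative "complex" rate: the real part prices human effort,
# the imaginary part prices natural-resource use (pure thought experiment).
complex_rate = complex(0.03, 0.02)
complex_income = capital * complex_rate

print(round(income, 9))              # 50.0
print(round(complex_income.real, 9)) # 30.0 - effort component
print(round(complex_income.imag, 9)) # 20.0 - resource component
```

The "imaginary part becoming dominant" claim would correspond to the resource component outgrowing the effort component over time; nothing in the arithmetic forces that, which is the commenter's point about it being a limit of output rather than of bookkeeping.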
What we really want is a happiness maximizer but no one knows how to make one of those which leverages the innate characteristics of people. Capitalism seems like a pretty good system to pragmatically approximate this happiness maximizer.
Yes, there are negative externalities to capitalism including possibly maximizing short term happiness at the expense of long term. Is there something better?
Framing this in terms of the paperclip maximizer made this idea appear more interesting than it turned out to be.
War can only destroy capital; perhaps the most breathtaking destruction of capital in history was WWII. So war would be antithetical to this hypothetical capital maximizer, and avoiding it would be among its top priorities. Similarly, a market system based on human rationality is far too volatile for our capital maximizer; it would look to replace that ASAP.
Thinking about this some more, it would seem that if we really have a Capital Maximizer, then the system's progress toward its objective (creating capital) is highly inefficient. If your goal is to create capital in the world, you need better infrastructure - after all, what capital is more useful than that which can be easily used to create more capital? Long term, if you're a Capital Maximizer trapped on this planet, you only have a small finite amount of resources that can be converted to capital, so you would want to spread out to other planets very soon. These are obvious things we aren't doing very well.
There are currently a few rules crippling the Capital Maximizer:
1) Don't think in a longer term than five years. That's risky.
2) Labor is the worst expense to have, so minimize the amount of expensive, useful work that has to be done.
3) Collude with others when it's gainful, but don't coordinate with others to get around rules (1) and (2). Coordination is for commies.
"It does often seem that, whenever there is a choice between one option that makes capitalism seem the only possible economic system, and another that would actually make capitalism a more viable economic system, neoliberalism means always choosing the former. The combined result is a relentless campaign against the human imagination." -- David Graeber
The root problem here is not economics, it is basic human nature. Humans have forever been seeking to increase their power base at the expense of others, and been corruptible by greed. This is not new, and it is not limited to capitalism. It's basic human nature, and it's one of our key weaknesses.
I'm not seeking to increase my power base at the expense of others. If I would earn less money doing something that makes the world a better place, I would happily choose it. Am I not human?
I see this kind of answer as an explanation widely disseminated to justify our economic system as inevitable, but which has very little scientific basis. To me it's wishful thinking.
The kind of economic growth observed in capitalism, and some of its characteristics, like cyclic crises not tied to natural changes such as climate, is something unique to our civilization. And it's a very new thing in human history (less than 300 years). What the long-term effects of these things will be is something still to be comprehended.
Capitalism depends on price theory, which predicts that making too many paperclips will cause them to be in glut and the price will drop. So a paperclip-maximizing AI might oughta be built to take that into account.
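The glut effect can be sketched with a simple linear demand curve; the function name and all parameters here are invented for illustration:

```python
# A minimal linear demand curve sketching the point above: flood the
# market with paperclips and the clearing price collapses. The function
# name and parameters are invented for illustration.
def demand_price(quantity, max_price=10.0, slope=0.001):
    """Price buyers will pay when `quantity` units are on the market."""
    return max(0.0, max_price - slope * quantity)

print(demand_price(1_000))      # 9.0 -- scarce, price holds up
print(demand_price(1_000_000))  # 0.0 -- glut, price collapses to zero
```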
It's not that any one product is being produced. "Capital" (or profit or wealth) here is the combined market aggregate of all production. The problem is that this too, as with any single optima process, leads to madness.
In the case of capitalism, externalities, equity/equality, sustainability, and systemic global risk are all undervalued, and simple short-term profit maximisation tends strongly to "underproduce" these.
I get the point - it's necessary to reanimate the mouldering bones of Malthus now and again. I just disagree, because the empirics of the case are pretty lopsided against him. He could not have known about engines. Which now appear to cause one whale of an externality.
The only thing Capitalism did was replace Mercantilism at scale. You had production before.
What's weird is that economists talk about ways to actually price things like externalities, equality, sustainability and/or risk. That might be madness or it might be reason.
If we can't price them, then are we not dropping the checkbook and reaching for the sword?
Corporations can be usefully described as 'profit-maximizers', but capitalism itself doesn't have a coherent behavior - you can't describe it as any kind of AI.
AI is just a subset of intelligence. You absolutely can describe capitalism as an intelligent system since it is primarily made up of units (people) which are intelligent.
If capitalism didn't have a coherent behavior, then how could we ever rely upon it as a primary driver of our economy? It must be coherent and on some level understandable for us to make predictions around it.
You can call an ecosystem 'intelligent' if you want to, I suppose, but there's not really any way to discuss it as a 'maximizer' when it has no goals and no intentional behavior. You could as easily describe water as a 'gravitational potential' minimizer - it has a collective aggregate behavior that you can characterize, but that doesn't make it intelligent by most definitions of the word.
>If capitalism didn't have a coherent behavior, then how could we ever rely upon it as a primary driver of our economy?
It is not a 'primary driver' of our economy, it is a description of our economy. The primary driver of our economy is corporations - entities obeying a profit motive. They have behaviors, intentions, and goals - they attempt to maximize shareholder profits. "Capitalism" doesn't intend anything or have any goals; it doesn't even have any non-emergent behaviors.
Every time I start writing a response to this, I can't help but feel it will simply be misinterpreted. I thought my original response was pretty succinct and clear. If you'd like to continue, please select one discrepancy and I'll be happy to discuss it further with you.
Our disagreement may be definitional - your original response was succinct and clear, but did not make an argument, merely an assertion.
A) I see no reason that 'being made up of units which are intelligent' ought to classify a system as intelligent.
B) I see no way in which you can usefully talk about anything without intentional behavior as a 'maximizing intelligence'.
You can't call 'capitalism' the 'driver of our economy' for the same reason you can't call 'ecology' the 'driver of natural selection' or 'conflict' the 'driver of our military conquests'. You are conflating a system with the units in the system, and talking about an ecology of intelligent units as if it is satisfying some behavioral utility function.
'Capitalism' doesn't have a utility function, because it doesn't have decision-making capacity, which is pretty much the only prerequisite for intelligent behavior. It doesn't drive our economy, it is our economy. The actors in the system are corporations and humans (and governments, unions, and a few other united organizational structures), therefore those are the things which can exhibit intelligence.
I hope you don't mind, but I'm only going to address one thing right now. If the conversation continues, I will continue to discuss more with you.
Let's start with (A). You are correct: the way I described it does not explicitly explain how it is intelligent. Let's adopt the strictest definition of intelligence, which is the ability to predict.
When I refer to capitalism here, I'll be referring to capitalism as implemented, not the theory. Capitalism is intelligent. It is made up of people, who are intelligent. These components are at times working together, and at times working against each other - much like your conscious mind and subconscious mind. Conflicts are resolved in the market, or through the state via legislation and trials. The result of components working together and resolving their conflicts is how capitalism decides to behave. Having behavior is not sufficient for intelligence, though: a virus has behavior but is not intelligent.
So does a capitalist system make predictions? One example should be sufficient to make the answer to this question yes. If I can show that a capitalist system makes predictions, then I will have proven that it is intelligent.
Let's use America for this example. One thing American capitalism seeks is to mute the highs and lows of its inherent booms and busts, in order to increase the stability of the economy. The Fed watches rates of lending and borrowing to predict bubbles, and raises interest rates to slow or stop the growth of a bubble. Once it has popped and lending/borrowing slows, the Fed then lowers interest rates to promote lending and borrowing. I know this is not new info for you; I just wanted to be thorough in my response.
So American Capitalism predicts bubbles and acts on those predictions. Thus, intelligent.
You might say the Fed isn't capitalism. Which would be true. But your brain isn't you, yet it is the component which makes you intelligent. The Fed is part of America's capitalist system.
You might say that capitalism does not involve government. To which I would say: pure capitalism doesn't involve government, but the active capitalist systems we have today do.
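The counter-cyclical mechanism described above can be caricatured as a feedback rule. The thresholds and step sizes here are invented for illustration, not actual Fed policy:

```python
# A crude caricature of the counter-cyclical policy described above:
# tighten when credit growth looks bubbly, ease when lending contracts.
# Thresholds and step sizes are invented, not actual Fed rules.
def adjust_rate(current_rate, credit_growth):
    """Return a new policy rate (%) given annual credit growth (fraction)."""
    if credit_growth > 0.10:       # lending overheating: predict a bubble
        return current_rate + 0.25
    if credit_growth < 0.0:        # lending contracting: bubble has popped
        return max(0.0, current_rate - 0.25)
    return current_rate            # growth looks stable: hold

print(adjust_rate(2.0, 0.15))    # 2.25 -- tighten into the boom
print(adjust_rate(2.0, -0.05))   # 1.75 -- ease after the bust
```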
>If I can show that a capitalist system makes predictions, then I will have proven that it is intelligent.
Your example applies perfectly well to a localized ecosystem. I can, for example, show that a surge in the population of rabbits on a small island 'predicts' a surge in the population of their main predator. That's the island ecosystem predicting a population bubble and then acting on those predictions. Intelligent?
That's beside the point though, because your example didn't show 'capitalism' making a prediction, it showed the Fed making a prediction. The Federal Reserve is a functional organization, an entity with goals and intentional behavior. The Fed is a perfect example of an entity that can demonstrate intelligent behavior.
It is indeed 'part of America's capitalist system' - again, you fall back on the assertion that a system is intelligent if the actors in it are intelligent, the only obvious source of our disagreement. By your logic, you'd have to consider a baseball game a functional intelligence, a three-legged race a maximizing entity.
Systems of intelligences are not automatically intelligent. It is possible for them to be so; corporations qualify, governments qualify, unions qualify. In order to have intentional behavior, an entity has to be self-coordinated; it has to be able to decide to perform an action, and then perform that action. A market has no coordination, and has no decision-making power - markets are inherently decentralized in that way.
If you choose to define 'intelligence' such that a thing which cannot make decisions or choices, has no goals or intentions, and has no self-impelled behavior can qualify, then your definition of the word is fundamentally confusing to me, and I'm not really interested in trying to puzzle out what you mean by it.
> I can for example show that a surge in the population of rabbits on a small island 'predicts' a surge of the population of their main predator. That's the island ecosystem predicting a population bubble and then acts on those predictions. Intelligent?
So I am assuming you are claiming to be part of the ecosystem?
> That's beside the point though, because your example didn't show 'capitalism' making a prediction, it showed the Fed making a prediction.
Not beside the point - it attacks a necessary part of my argument.
Anyways, you have made some good points. I'll give it some thought and get back to you.
>So I am assuming you are claiming to be part of the ecosystem?
In general, I am part of many systems (eco- and otherwise), but no; that section of my reply was an argument against the idea that being able to 'predict' the future is a reasonable way to define intelligence. It doesn't really change the argument if you stick me on the island, though - I behave intelligently, and that affects the behavior of the ecosystem, but that doesn't cause the ecosystem to be 'intelligent'.
It's interesting to have a debate this slowly, I feel like it's unusually civil :-)
Ya, I've been enjoying it :) I was worried you hadn't, based upon your previous comment. I'm sorry I haven't returned to this yet, but I will soon, I promise.
I think we should try to find out where we have agreement and where we begin to diverge. That should make the remainder of this discussion tighter.
I believe our divergence is somewhere under the definition of 'intelligence'. The word is very ill-defined, to the point that we are reduced to describing it functionally in many cases, as with the Turing test.
>Human intelligence is the intellectual capacity of humans, which is characterized by perception, consciousness, self-awareness, and volition.
That's a very normal definition of intelligence. AI is defined (among many other ways, naturally) as "the study of intelligent agents"; 'agency' is a requirement for an artificial intelligence.
Since we started with the context of 'paperclip maximizers', I've been talking about intelligence in that context - in order to be studied as an intelligence, a thing must have 'agency', the ability to intentionally act. A thing that doesn't have agency can still have behavior, and that behavior can still be studied, but the behavior is emergent, not intentional - the system does not have agency unless it is capable of having goals, and of making decisions to achieve or progress toward them.
In particular, capitalism isn't 'trying to maximize capital' - that's an effect it's (supposedly) having, but not an intentional one. It's a pretty clear emergent effect - if many of the actors in a system are trying to maximize their personal capital, then the system turning out to maximize net capital should surprise nobody. It's much like calling a diffusion chamber a 'maximizer' of entropy - a diffusion chamber does maximize entropy, but by calling it a 'maximizer' (in the context of 'paperclip maximizer'), you would ascribe agency to the chamber - it's not trying to maximize entropy, that's just something that it happens to do.
I disagree with much of what was written, but I find it hard to address what seems, to me, more like a long analogy than anything else.
"Capitalism" doesn't maximize "capital", in any sense. The word "capital" generally refers either to some asset that might be used in production, such as a piece of machinery; or to structures that might render services over time, like buildings; or to some sum which might be used in buying either of these, through financial services.
Now, "capital" wouldn't be maximized by "capitalist" firms. Why should it be? Firms produce additional capital units so long as they can be sold above production cost, but they won't produce indefinitely. They are concerned with their profits, not with the amount of product. They aren't even concerned with the product's value, so long as it is above production costs. Hence, there's no incentive to inflate capital-goods prices, which would be necessary to maximize the amount of money "capital".
Also, can it really be said that capitalism is an adaptive intelligent system? That, I'm not knowledgeable enough to tell. On the other hand, human interaction is unstructured enough that it seems unlikely that there's any goal to such a supposed intelligence, or that it has capital-maximization as its goal. Rather, I see a collection of different goals, each corresponding to a different agent, some of which might even be contradictory. The noisy neighbor, the ambitious monopolist, and the NIMBY all seek to maximize their private benefit, often beyond the social benefit. Hence I find it hard to believe, whenever markets fail through externalities, imperfect competition, or adverse incentives, that it ought nonetheless to be the work of some powerful demiurge.
Yet, in so far as it puts into words an anxiety that so many feel, I can't say much else beyond that I don't believe it. The author certainly proceeds by analogy when he likens "capitalism" to an artificial intelligence whose goals mean nothing to humanity at large, and mentions alienating "monuments" to capital, as if to liken it to a 20th century pharaoh, or to a cruel tyrant. But since so many feel this is really the case, I have nothing else to say (other than that it's a very vivid picture). I disagree, for instance, that the majority of goods are worthless, as they ought to have some value in order to be sold. Even Marx reckoned this much, and thinking otherwise seems like relying too much on the stupidity of others (which should never be trivially admitted). That would only be the case if capital were built beyond all possible utility (say, a bridge between London and NY). And I don't think it is necessary to make analogies to speak of wealth and income distribution issues, which are real problems in themselves, and with which the majority of people are already acquainted. Rather, I think such analogies make problems seem much larger than they ought to be, by giving them some monstrous character. Would it be easier to tackle inequality by conceiving of it as the result of an incredibly powerful artificial intelligence? I don't think so (and even if the system is rotten, we'd still be waiting on a constructive proof of one that isn't).
Oh good. The Americans are rediscovering Das Kapital.
Forgive the snark, but when I first heard about paperclip maximizers, I thought, "Oh, you mean like capitalism." That these two things resemble each other is the reemergence of an idea that had been buried in order to make capitalism look good.