> Modern journalism goes for clicks, which means generating outrage.
Is this about journalists talking about musk, or about musk himself? I mostly learn about his views through his own tweets, which twitter always makes sure to serve me on my home page, and "goes for clicks"/"generating outrage" seems to fit well how musk uses his platform. In any case, his politics seem awful to me even without any journalistic mediation of them.
It is also the case in jurisdictions where it is all legal: legalization increases demand, and it then becomes more profitable to bring in (illegally) trafficked women to meet that demand. In particular, the netherlands appears to be among the countries with the highest trafficking inflows [0, Appendix B].
From [0]:
> Our empirical analysis for a cross-section of up to 150 countries shows that the scale effect dominates the substitution effect. On average, countries where prostitution is legal experience larger reported human trafficking inflows.
It is not an either-or. Many bluetooth headphones (not earbuds) also have an audio jack so they can be used wired. I use my bluetooth headphones (sony wh-1000xm3) over bluetooth when I am on the go, and wired when I am at home, especially if I want low latency. If anything, I would rather be able to replace a simple jack cable if it breaks, which is what consistently happened back when I was using wired earphones.
I find using all these cables when I am on the go inconvenient, and I cannot imagine going back. Especially with earbuds: I have probably gone through more than 10 pairs over the years due to cables failing (though I hate earbuds now anyway). On the other hand, eg when gaming I definitely notice latency issues, especially compared with wired, so there I prefer to use them wired.
Regulating gambling is not "nanny state", especially in relation to kids. Your personal experience as a kid, about whether you had money or not, is completely irrelevant as an argument.
Not at all. My experience in this case indicates that there is a correct behavioural pattern which avoids the issue entirely and requires zero government intervention.
But if you insist on having a regulation, okay, I'm fine with it. What about the following regulation: each time a minor is found gambling or smoking, his/her parents are fined 100 times the stake/the price of the cigarettes?
So now you've made it impossible to actually stop a child from smoking. They're free to smoke as much as they want, because it's the parents that get punished, not them. And regardless of who gets punished, the fact remains, they can go to the store and buy cigarettes.
You really think not being given money by parents is going to stop kids from accessing smokes?
I can understand somebody not liking wikipedia; I cannot at all understand somebody who is not Elon liking or preferring "grokipedia", either as an idea or as an implementation.
So you can understand someone not liking something, but you cannot understand that person liking the idea of an alternative? What is the idea for you if not just an alternative to the established service with the undesired part changed?
Because not liking something does not imply liking any possible alternative.
Which one is the "undesired part changed" here? Wikipedia is written by humans, it has a not-for-profit governance model, it encompasses a large, international community of authors/editors that attempt to operate democratically, and it has an investment/commitment in being an openly available and public source of information. Grokipedia, on the other hand, is AI-generated, and operated by a for-profit AI company. Even if "grokipedia" somehow managed to get traction and "overthrow" wikipedia, there is no reason on earth why a company would operate it for free and not try to make a profit out of it, or use it for its own ends in ways much more direct than whatever may or may not be happening to wikipedia. Having a billionaire effectively control something that may be considered the "ground truth" of information seems a bad idea, and having AI generate it seems an even worse one.
I can understand somebody not liking something in how wikipedia is governed or operated; after all, anything that involves getting humans to work together at such a scale is bound to be challenging. I can understand somebody ideologically disagreeing with some of the stances that such a project eventually has to take (even if one tries to be as neutral as possible, some clash somewhere about where exactly that neutrality lies is inevitable). But grokipedia is much more than "wikipedia but ideologically different".
edit: just to be clear, I see a critique of the "idea of grokipedia" as, eg, a critique of it being a billionaire-controlled, AI-generated project to substitute for wikipedia; a critique of the implementation would be finding flaws in the actual grokipedia articles (overall). I think the idea of it is already flawed enough.
Wikipedia is fine for uncontroversial facts. The obscure ones can have individual mistakes but it's generally correct.
For controversial topics, it's an eternal battle between factions of "volunteers" trying to present their view of a conflict. The articles reflect which side has the best organized influencer operations. Factual truth may or may not shine through, but as a side effect, not a result of the governing process.
Grokipedia operates by Grok writing what it considers the true and interesting facts. That doesn't mean it's always right, but it's a model far less influenced by influencer operations.
I wildly disagree with the critique based on the wealth of the top executive. I care about the truth and quality of the articles.
>Grokipedia operates by Grok writing what it considers the true and interesting facts. That doesn't mean it's always right, but it's a model far less influenced by influencer operations.
If Grok is trained on a corpus of information written by humans trying to influence other humans, and it has no ability to perform its own original investigation in the real world, then how can it be anything but the product of influence?
Maybe ask a Ukrainian soldier which they prefer (modern armor is often made of depleted uranium). Environment shapes such preferences far more than personality.
> I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as idea or implementation.
Really? Have you used AI to write documentation for software? Or used AI to generate deep research reports by scouring the internet?
Because, while both can have some issues (but so do humans), AI already does extremely well at both those tasks (multiple models do, look at the various labs' Deep Research products, or look at NotebookLM).
Grokipedia is roughly the same concept of "take these 10,000 topics, and for each topic make a deep research report, verify stuff, etc, and make minimal changes to the existing deep research report on it. preserve citations"
So it's not like it's automatically some anti-woke can't-be-trusted thing. In fact, if you trust the idea of an AI doing deep research reports, this is a generalizable and automated form of that.
We can judge an idea by its merits, politics aside. I think it's a fascinating idea in general (like the idea of writing software documentation or doing deep research reports), setting aside whether it needs tweaks to remove political bias.
> Have you used AI to write documentation for software?
Hi. I have edited AI-generated first drafts of documentation -- in the last few months, so we are not talking about old and moldy models -- and describing the performance as "extremely well" is exceedingly generous. Large language models write documentation the same way they do all tasks, i.e., through statistical computation of the most likely output. So, in no particular order:
- AI-authored documentation is not aware of your house style guide. (No, giving it your style guide will not help.)
- AI-authored documentation will not match your house voice. (No, saying "please write this in the voice of the other documentation in this repo" will not help.)
- The generated documentation will tend to be extremely generic and repetitive, often effectively duplicating other work in your documentation repo.
- Internal links to other pages will often be incorrect.
- Summaries will often be superfluous.
- It will love "here is a common problem and here is how to fix it" sections, whether or not that's appropriate for the kind of document it's writing. (It won't distinguish reliably between tutorial documentation, reference documentation, and cookbook articles.)
- The common problems it tells you how to fix are sometimes imagined and frequently not actually problems worth documenting.
- It's subject to unnecessary digression, e.g., while writing a high-level overview of how to accomplish a task, it will mention that using version control is a good idea, then detour for a hundred lines giving you a quick introduction to Git.
As for using AI "to generate deep research reports by scouring the internet", that sounds like an incredibly fraught idea. LLMs are not doing searches, they are doing statistical computation of likely results. In practice the results of that computation and a web search frequently line up, but "frequently" is not good enough for "deep research": the fewer points of reference for a complex query there are in an LLM's training corpus, the more likely it is to generate a bullshit answer delivered with a veneer of absolute confidence. Perhaps you can make the case that that's still a good place to start, but it is absolutely not something to rely on.
>LLMs are not doing searches, they are doing statistical computation of likely results.
This was true of ChatGPT in 2022, but any modern platform that advertises a "deep research" feature provides its LLMs with tools to actually do a web search, pull the results it finds into context and cite them in the generated text.
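To make the distinction concrete, here is a minimal sketch of the tool loop those "deep research" features run. Everything here is a hypothetical stand-in (`web_search`, `deep_research`, the URL scheme are made up for illustration, not any real platform's API); the point is only the control flow: the model requests a search, the results are pulled into its context, and the final answer cites them rather than relying on recall alone.

```python
def web_search(query: str) -> list[dict]:
    """Stand-in for a real search backend: returns URL + snippet pairs."""
    return [{"url": f"https://example.com/{query}", "snippet": f"results for {query}"}]

def deep_research(question: str, max_rounds: int = 3) -> dict:
    """Sketch of the search-augmented loop: gather sources, then answer with citations."""
    context: list[dict] = []
    for _ in range(max_rounds):
        # A real system would have the model propose each next query;
        # here we issue the question itself once and stop.
        context.extend(web_search(question))
        break
    # The generated text is grounded in (and cites) the fetched sources.
    answer = f"Answer to {question!r}, based on {len(context)} retrieved sources."
    citations = [doc["url"] for doc in context]
    return {"answer": answer, "citations": citations}

report = deep_research("grokipedia")
print(report["citations"])  # the sources the answer is grounded in
```

The grandparent's criticism still applies to the synthesis step, but the retrieval step means the claims can at least be checked against the cited pages.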
That's not at all been my experience. My experience has been one of constant amazement (and still surprise) when it catches nuances in behavior from just reading the code.
I'm sure there are many variables across our experiences. But I know I'm not imagining what I'm seeing, so I'm bullish on the idea of an AI-curated encyclopedia, whether Elon Musk is involved or not.
No, I don't trust an encyclopedia generated by AI. Projects with much narrower scopes are not comparable.
edit: I am not very excited by AI-generated documentation either. I think that LLMs are very useful tools, but I see a potential problem when the sources of information that their usefulness largely rests on are themselves LLM-generated. I am afraid this will inevitably result in a drop in quality that will also affect the LLMs themselves downstream. I think we underestimate how much the intentionality of human-written text, in training sets and context windows alike, matters for LLMs to give relevant/useful output.
Elon at some point threatened to have an LLM rewrite all of the training data to remove woke. I assume Grokipedia is his experiment at doing this (and perhaps hoping it will infect other training sets too?) ...
Many projects in his companies seem more and more like Musk's vanity projects than ideas/products one can take seriously. This is also how tesla ended up with a huge cybertruck inventory that nobody wants to buy, which thus had to be bought up by his other companies. And it keeps getting worse, especially since he bought twitter and sped up his tweeting rate.
Sales are artificial boosts, yes. The difference is in the connotation: a sale is offered on something that people would generally buy anyway, only now more people will. An artificial boost is given to stuff nobody wants, but which people can be convinced to buy at a lower price.
Or in other words, sales raise $high_number to $higher_number while artificial boosts raise $essentially_zero to $acceptable_number.
the claim is that it moved sales forward in time, but it'll have a corresponding dip in sales later, whereas a good sales campaign increases total volume (virtually no dip, brings in new customers, etc)
look around your house and see how much shit you got that you really want(ed). a great salesman (and elon is the best in the history of civilization) will sell you shit you never thought you wanted :)
The motivation to buy something is always that you want it. That a product doesn’t meet your needs or expectations later is a different story. What’s your evidence for the claim that people spending 60k on a cybertruck don’t want it? What’s your evidence for a similar claim, or its opposite, about any other purchase? Without evidence, it feels like you are making baseless claims about people’s motivations.
Is it still your claim that people spending 60k on a Cybertruck don’t want it? How do you know? Given the lack of evidence, it feels like motivated thinking. You don’t like Elon and can’t accept that tons of people actually like him and his products.
I think you might be slightly misinformed about how many 10,000+ dollar purchases the average person makes in their lifetime, to make sweeping statements of that nature. Advertising sales on medical procedures or daycare could have the opposite effect, I would imagine.
Look up what their production targets were and compare that to their sales. A small temporary demand surge isn't going to be enough to chew through their current inventory, let alone keep the production lines busy.
The cybertruck is an amazing vehicle; it was mostly just bad timing. Inflation more than doubled between the announcement and the release date, so it came out more expensive than promised; the USA Democratic party abandoned its environmental side for unions; and the whole "woke" movement ballooned and got violent to the point where people were lighting certain car dealerships on fire and vandalizing people's vehicles on sight.
This explicitly says "Multi-Touch trackpad for precise cursor control and support for gestures", so at most it's the clicking action that is mechanical (rather than the click being faked with haptic feedback, as it is on the current models)
M3 was a weird generation, as the chips contained fewer transistors than the previous ones. It is slightly faster in single-core tasks, and has a few more cores, but they are very close. In terms of gpu, though, the m3s are quite nerfed, especially because the memory bandwidth was lowered, so on llm performance they are roughly on par. I have both an m3 and an m1 max, one of them from work, so I have tested them extensively (though the m3 is binned and 14”, the m1 full and 16”). The m3 had better TTFT, but the m1 had a bit higher tokens/s.
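The bandwidth point explains the tokens/s gap: decode speed on these machines is largely memory-bandwidth-bound, since generating each token streams roughly the whole model's weights once. A rough back-of-the-envelope, with illustrative (not measured) numbers:

```python
# Crude decode-speed ceiling, assuming token generation is
# memory-bandwidth-bound: each token reads all model weights once.
# Real throughput is lower (KV cache reads, compute overhead).

def est_tokens_per_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound: tokens/s = bandwidth / bytes streamed per token."""
    return bandwidth_gb_s / model_size_gb

# M1 Max advertises ~400 GB/s; the base M3 ~100 GB/s.
# A 7B model quantized to 4 bits is roughly 4 GB of weights.
print(est_tokens_per_s(400, 4.0))  # 100.0 -> ~100 tokens/s ceiling
print(est_tokens_per_s(100, 4.0))  # 25.0  -> ~25 tokens/s ceiling
```

This is only a ceiling, but it matches the pattern above: the wider-bandwidth m1 max wins on sustained tokens/s, while TTFT is prompt-processing (compute) bound, where the newer cores help.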
Wow! Thanks for sharing. I didn't know this. Time to upgrade to M5? What do you think about the M5? I know it's too early for tests. But I would love to hear your opinion.
M4 was already, imo, a more meaningful upgrade compared to m2/m3, and they increased the memory bandwidth too. But then, all apple silicon is good hardware, and I personally do not feel any rush to upgrade, unless you want something specific like more ram.