> Have you ever given a generative AI model a short input, been really pleased with the output, and felt like you'd accomplished something? I have! It's probably common.
i mean, you did. becoming good at writing succinct and clever prompts, adding constraints, choosing good models for your use case, etc. are all skills like any other.
eh, that's a leap-of-faith assumption without knowing one's own dosage and personal effects.
someone who has 5 drinks a week and someone who has 5 drinks a day are going to have radically different long-term health consequences. but here we do not have said info.
light or microdose cannabis is way safer than alcohol.
the notion that "contains —" ~= "AI generated" is a really dumb popular misconception: dashes have existed for hundreds of years. just because many people use them incorrectly or treat the hyphen as if it's some universal dash doesn't change that.
strunk & white taught me to use em dashes in something like elementary or middle school [1] — it's not hard to understand how to use them or type them... i'm baffled as to why people act like it is.
I've been using a reasonable gamut of Unicode punctuation in English for I think the majority of my life now as well—including this very comment, https://qht.co/item?id=19365079 from 2019, and the above comment where I typed a horizontal ellipsis. I tend to attribute it to taking my language usage from relatively formal sources and being a desktop Linux user with a Compose key. I used to constrain myself to ASCII for email and source code, though, and would use TeX-like “--” and “---” and such instead; sometimes I would also just do that when temporarily on some setup where accessing the real stuff was harder.
But then, people have also been asking me whether I'm an AI for over twenty years, so…
Like it or not, current LLMs really like em-dashes, so usage of them is quite a lot of Bayesian evidence in favor of the author being an LLM. It's unfortunate for the humans who use em-dashes, but that's how it is.
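To make the "Bayesian evidence" claim concrete, here is a minimal sketch of the update. Every number in it is a made-up illustrative assumption (base rate of LLM-written comments, em-dash frequencies for LLMs and humans), not a measurement; the point is only that a feature common among LLMs and rare among humans shifts the posterior a lot even from a modest prior.

```python
def posterior(prior, p_given_llm, p_given_human):
    """P(author is an LLM | comment contains an em-dash), via Bayes' rule."""
    # Total probability of seeing an em-dash from either kind of author.
    evidence = p_given_llm * prior + p_given_human * (1 - prior)
    return p_given_llm * prior / evidence

# Assumed (hypothetical) numbers: 20% of comments are LLM-written;
# LLMs use an em-dash in 60% of comments, humans in 5%.
p = posterior(prior=0.20, p_given_llm=0.60, p_given_human=0.05)
print(round(p, 2))  # 0.75 — the prior of 0.20 triples on this one feature
```

Under these assumptions a single em-dash moves the estimate from 20% to 75%, which is the sense in which it is "quite a lot" of evidence, and also why it unfairly flags the minority of humans who use them habitually.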
AI guardrails continue to make safety improvements — comparing a rapidly evolving advanced technology to a drug is a broken analogy to me. One gets safer over time; the other gets more dangerous.
But also, the risk profile and statistics are radically different: alcohol is inherently dangerous (toxic) to everyone. Chatbots are just another tool — there are a small percentage of people with unhealthy relationships to any tool, but that does not make the tool a dangerous drug.
The underlying models are improving at the same time as the guardrails and I'm not convinced the guardrails will keep up, especially given the perverse incentives. At some point the endless investor billions will dry up and a whole bunch of folks will be desperate to monetize their AI projects any way possible.
is your idea of granular control (roughly) a group of agents in separate containers writing back to their own designated store each sufficient, or more control than that?
Moltbook Valuation & Funding

| # | Deal Type          | Date        | Amount | Raised to Date | Post-Val | Status    | Stage   |
|---|--------------------|-------------|--------|----------------|----------|-----------|---------|
| 2 | Merger/Acquisition | 10-Mar-2026 | -      | -              | -        | Announced | Startup |
| 1 | Early Stage VC     | 01-Mar-2026 | -      | -              | -        | Completed | Startup |
> i mean, you did. becoming good at writing succinct and clever prompts, adding constraints, choosing good models for your use case, etc. are all skills like any other.
most people are really bad at it though.