This is...disquieting. It's one thing to know that it's possible, another thing to know nation states or large megacorps are doing it, but another thing entirely to see such verbose output from free models about, well, me.
The first two, I've made peace with (nothing I can do about it anyway). The last one picks quite fiercely at old trauma that really makes me reconsider my socials in general, not just HN.
But maybe that's just the anxiety and trauma talking, encouraging me to recede back into the shadows and re-apply the old mask of "acceptableness" I've been trying to toss aside. Maybe the fact a free chatbot can do such a thorough analysis is in fact reason enough to stop worrying about every aspect of my identity and its perception by others, and instead just...be me, and deal with whatever consequences arise from that.
I dunno. Just...lot of emotions, here, most of them quite bad.
Right, as is so often the case with AI stuff, the thing that's disconcerting is how cheap, low-friction, and readily available this ability is now.
Anyone with access to a decent LLM can now perform a version of this in just a few seconds.
It's a lot to take in, if I'm being honest. Growing up in the sort of cultures where gossip and tabloids were the norm, this tool is painful to me in a way I'm not sure many folks can understand. It's not even low friction anymore; it's no friction, in the sense that anyone with a chatbot and minimal rails can just ask it to do these sorts of profiles now, on anyone they choose.
We desperately need to modernize laws around discrimination in light of the proliferation of these tools. No longer does someone need to thread the needle in interviews around "illegal" questions to find something to (metaphorically) hang an interviewee with, as these tools can pick it apart quite cleanly. People in protected classes are going to get reamed by bad actors leveraging these tools.
That said, after rubber ducking with a friend on this, I've come to the conclusion that there's two paths forward from this point: flight (scrubbing socials, hiding online, creating an acceptable persona) or fight (being firmly authentic, owning your weirdness, and accepting you can't control the outcomes of others' actions using these tools). I've spent decades in 'flight', and I'm tired of it. I can't control who uses these tools and to what end, so I may as well just be my damn self anyhow and do regular threat assessments accordingly. The more people who behave authentically, the less power these tools have over us.
I think it's not unreasonable, if one is in an oft-discriminated-against protected class, to aim one's career / expense trajectory towards stability for the next couple decades. (Prioritizing remote, focusing towards subfields where there's more tolerance, working for companies in financially stable industries)
The law, currently predicated on the difficulty of discriminating en masse without leaving a paper trail, will take a while to catch up with de facto use.
We've come a long way in some aspects, while staying pretty much in place in others.
I taught an infosec 101 course at a university ~20 years ago. (Twice.) On the topic of privacy I used an example of harvesting data on peoples' habits, movements and behaviours and then said that as a society we use two different terms for the same thing. "When an individual does this, it's called stalking. When a company does this, it's called data mining."
The economics department students, many of whom already knew they wanted to work in marketing, were quite offended.
From another perspective, it's like hearing others judging you behind your back. The first few times it's awkward and maybe even annoying, but given enough time you stop giving a damn about it.
But the problem is real if nation states or megacorps are doing it. They'll use such tech in an unjustified way, make a misjudgement, and then ask you to explain yourself out of the situation. Yeah, they're definitely doing that, because they don't give a damn about it.
I have a sneaking suspicion this is going to be 21st century communism's (read China's) fatal flaw: the corrosive effect of panopticon monitoring on population productivity.
Because eventually apparatchiks with the data at their fingertips are going to use it to rule out the next Einsteins from participating in {insert major Chinese project}, and you've effectively self-selected at scale for people whose shared characteristic is "not being different."
We'll see if the US and Europe course correct on individual freedom enough to reap the benefits of that though.
This is not entirely surprising, as the evidence was always weakly correlated. I say this as a proponent of legalization, mind you.
Cannabis, like alcohol and tobacco, is a vice. It definitely helps with some physical ailments (like helping stimulate hunger in cancer patients), just like alcohol and tobacco can with other ailments, but it’s not a panacea for mental health disorders.
We need to stop marketing these things as curatives when they’re mostly just coping mechanisms or social lubricants. We’re doing more harm than good by leaning into the “legitimate pharmaceutical” angle.
No real idea why this bubbled to the front page, as it's just another milquetoast subjective take from a single point of view recapping the same events from their context. Nothing added, no food for thought, just the same old, same old.
We need fewer folks ringing alarm bells without guidance and more folks offering help in troubling times. This is not helpful.
I don't think milquetoast is the right term (not least because it hasn't really left North America). People seem more angry than timid.
There are problems, some are very visible, some are present but not very visible, and some are imaginary.
There is a growing discontent like I have not seen in my lifetime, and I don't think much of it can be characterised as unjustified. The problem is that the narratives people are forming about those problems are mostly all wrong. This is easily provable by showing how mutually exclusive the narratives are: the correctness of any one of them would invalidate many of the others.
I don't have the expertise or data to conclusively show the cause of any particular aspect. To make a guess, I would look to history and note that those who blame the new thing are usually wrong, and those who warn about actions taken without regard for the consequences are usually right.
People are angry and want to understand what is causing the problems, and it seems the unscrupulous have found that an answer to their questions is effective, even if it is a fiction.
This is why I'm taking a wait-and-see approach to these tools on HN myself. My month with Claude Code (the TUI, not the GUI) was amazing from an IT POV, just slop-generating niche tools I could quickly implement and audit (not giant-ass projects), but I ain't outsourcing that to another company when Qwen et al are right there for running on my M1 Pro or RTX 3090.
I'm looking forward to more folks building these kinds of tools with a stronger focus on portability via API or loading local models, as means of having a genuinely useful assistant or co-programmer rather than paying some big corp way too much money (and letting them use my data) for roughly the same experience.
This. Microsoft has said similar things before, and always tripled down on bad behavior afterward. Their priority is business outcomes, not user experiences or support, and that’s why even this non-apology makes it clear the stuff customers, engineers, and support staff hate - invasive telemetry, outright surveillance/spyware, online-only requirements, AI-everywhere, constant arbitrary deprecation of APIs and endpoints for external tools to drive internal product adoption, refusal to support consumer technologies long-term (MCE, WMR) or do things contrary to everyone else (print drivers) - isn’t actually getting addressed.
Don’t listen to the smooth talk. Plan an exit strategy now, before you need it later.
> Plan an exit strategy now, before you need it later.
The idea that we'll all be forced off of Windows one day sounds like a dream, but so far we remain in a state where I and many others are long past the point of wanting to leave, yet can't for one reason or another.
Microsoft knows that, which is why they've been able to do whatever they want and not worry about the consequences.
Microsoft won't force you off, but everyone - and every business - has a line in the sand somewhere. In my experience, most folks don't realize where it is until it's too late, and by then the costs are far higher (opportunity, financial, time) than they would've been with a defined strategy.
Even if you're not leaving the ecosystem anytime soon, you should always know where those lines are and what the landscape looks like on the other side of things.
I keep a VM with windows on it. Unfortunately you have to purchase a license. Hopefully I'll be able to upgrade it like they've allowed since ~Vista. But now anyone tracking user agents knows I'm not using Microsoft. I didn't even put a browser on the VM. I have used the VM under 10 times over the past year and that's usually just to use Quick Assist to help others with their Microslop OS. Sometimes to deal with a particularly obnoxious excel file.
I've used this before in the early days of my Linux SysAdmin work, especially in the homelab.
It's pretty solid, but the limited number of projects and lack of visibility into the CLI it uses on the backend hinder the ability to translate sysadmin work into tangible Linux skills, so I dumped it at home in favor of straight SSH sessions and some TUI stuff.
That said, if I gotta babysit Linux in an Enterprise without something like Centrify? Yeah, Cockpit is a solid, user-friendly abstraction layer, especially for WinFolks.
Part of the technical assessment I have for hiring new platform engineers involves troubleshooting a service hosted in a headless Linux vm.
Troubleshooting and fluency on the command line are among what I consider core skills. Being able to dig through abstraction layers is not just essential for when things go wrong, they are essential for building infrastructure, and really tells you whether an architecture is fit for purpose.
When I was interviewing people on behalf of a client, I was surprised at the number of people who didn't even know what SSH was. This was for a mid-level software developer position, not a junior one, and they all came with glowing resumes.
They all insisted that it was essential to have a CI/CD process but didn't know what the "CD" part even did. Apparently you just "git push" and the code magically gets on the server. There are many ways to do deployments and a CI/CD process isn't always suitable and can have many forms, in my opinion, but I was happy to discuss any and all. It's difficult to do that without the basics, though. As you said, before I was commissioned, the platform had no documentation, was crumbling under tech debt and failing constantly, so something like getting on the server to at least figure out what's going on was essential.
I went for a senior sysadmin interview role and they asked me to debug a website in the browser that was only visible on localhost, ssh was available.
They asked me to double check that part because they assumed I just hadn't done that part, because apparently I was the first person who didn't need help with an SSH tunnel.
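For anyone who hasn't had to do it, the tunnel in question is a one-liner; the hostname and ports here are made up for illustration:

```shell
# Forward local port 8080 to port 80 on the server's own loopback,
# so a site that only listens on the remote localhost becomes
# reachable at http://localhost:8080 from your workstation.
# (Add -N if you only want the tunnel, without a remote shell.)
ssh -L 8080:localhost:80 admin@target-server
```

That's the whole trick: the browser talks to your local end of the tunnel, and sshd delivers the traffic as if it originated on the server.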
There’s a lot of that going around, lately. I recently had an interviewer admit I was not in the first round of candidates sent for in-person finals, but they had all bombed out on very basic SSO questions despite having a decade managing Entra; I was a “second choice” candidate and the first one to correctly answer the broad strokes of setting up an SSO app, despite not having touched Entra since it was called Azure AD.
I suspect this is AI’s doing, but cannot be sure. It’s really critical that technical interviewers weed out the over-inflaters though, now more than ever.
This predates AI. I've been interviewing candidates (SRE/DevOps) since 2018, and so many candidates who claim to have extensive experience with things completely fall apart when you put them in front of a terminal.
Gathering and mapping unfamiliar systems is part of that skillset. I’m also looking at being able to think laterally, being able to descend abstraction layers, and understanding architectural characteristics and constraints (Roy Fielding’s dissertation), which will recur at each level of abstraction.
From a professional perspective, this is a solid question. And yeah, between the basic tool suite (top/cd/ls -l/df -H/grep/pipe '|'/ssh) and some common sysadmin/engie knowledge, I could get by with Linux just fine. "Just fine" doesn't cut it for troubleshooting sludgepipes and Kubernetes though, and my skills with Powershell finally gave me the confidence boost to take CLI/TUI seriously on Linux.
And man, zero regrets. It's nice having an OS not fight me tooth and nail to do shit, even if it means letting me blow my feet off with some commands (which is why, to any junior readers out there, we always start with a snapshot when troubleshooting servers).
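For the junior readers following along: after the snapshot, a first triage pass with that basic tool suite might look like this. Treat it as a sketch, not a runbook; log paths and `ps` flags vary by distro.

```shell
df -H /                            # full disks cause the weirdest failures, check first
ps aux --sort=-%cpu | head -n 5    # who's eating the CPU (GNU procps syntax)
ss -tln 2>/dev/null | head -n 10   # what's actually listening on the box
grep -i error /var/log/syslog 2>/dev/null | tail -n 20 || true  # recent noise
```

None of it is clever; the point is having a fixed opening sequence so you're gathering facts instead of guessing.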
Now to finish my mono-compose for my homelab and get back to enjoying the fruits of my labor...
It's the management structure focused on short-term gains and promotion cycles, combined with a corporate culture focused very much on the same as management with the added twist of politicking, backstabbing, and undercutting other teams.
I've spent much of my life inside Microsoft's ecosystems. Not merely my career, but my technological life itself started with Win 3.11 on a parental laptop. I've spent so long in their orbit that I can generally infer what their latest thing does and how it works from an IT POV based on its product name alone, because I understand how Microsoft thinks from a marketing and engineering perspective.
As you say, they have some truly brilliant folks in their ranks. Those few diamonds are buried under mountains of garbage and slop from above, though. I mean, this is the company who pioneered full-fat PC handhelds 20 years before the Steam Deck, the smart watch a full decade before Apple, the home media ecosystem years before streaming apps dominated, smartphones before the iPhone, I can go on and on. The problem isn't the engineers so much as corporate mismanagement, but they somehow survive like a cockroach based on install size alone.
Sure... but, I’ve got decades of experience doing that stuff, just not frequently enough to keep it in my head, these days. I usually want a small project server to just do shit and the less there is between that and booting up a fresh Linux install, the better. For example, I don’t keep firewall command line syntax in my head, but I know what needs to be done, and I always seem to need it with small home projects. I lose nothing by having a trustworthy gui do it. I’d give this a shot. I doubt I’d use it in a professional environment, but that’s not really my use case these days.
Which goes to show, experience and maturity change how people use tools. The person I was responding to was at an earlier maturity stage and realized it was hampering their growth.
I am more of a TUI person anyways. I have never found web-based server management to be as responsive as a TUI, for the same reason I prefer direct attaching to live tailing on a web tool.
I configure my router through a web interface and not the command line either. It isn’t something I want to mess with on my downtime.
I think I finally know what to do with my second NUC: FreeBSD.
I'm in the process of converting and consolidating all my home infra into a mono-compose, for the simple reason I don't want to fiddle with shit, I just want to set-and-forget. The joy of technology was in communications and experiences, not having to dive through abstraction layers to figure out why something was being fiddly. Containers promised to remove the fiddliness (as every virtualization advancement inevitably promises), and now I'm forced to either fiddle with Docker and its root security issues, fiddle with Podman and reconfiguring the OS for lower security so containers don't stop (or worse, converting compose to systemd files to make them services), or fiddle with Kubernetes to make things work with a myriad of ancillary services and CRDs for enterprises, not homelabs.
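(For what it's worth, the compose-to-systemd conversion Podman pushes these days is Quadlet: you drop a `.container` file in the right directory and systemd generates the service. A sketch of one service, with the image and paths as placeholders, looks like this.)

```ini
# ~/.config/containers/systemd/jellyfin.container  (rootless Quadlet unit)
[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
Volume=%h/jellyfin/config:/config

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload` it shows up as `jellyfin.service`, which is exactly the kind of extra layer I'd rather not have to think about at home.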
For two years now, there's been a pretty consistent campaign of love-letters for the BSDs that keep tugging at what I love about technology: that the whole point was to enable you to spend more time living, rather than wrangling what a computer does and how it does it. The concept of jails where I can just run software again, no abstractions needed, and trust it to not misbehave? Amazing, I want to learn more.
So yeah, in lieu of setting up the second NUC as a Debian HA node for Docker/QEMU failover, I think I'm going to slap FreeBSD on it and try porting my workloads to it via Jails. Worst case scenario, I learn something new; best case scenario, I finally get what I want and can finally catch up on my books, movies, shows, and music instead of constantly fiddling with why Plex or Jellyfin or my RSS Aggregator stopped functioning, again.
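In case it helps anyone else eyeing the same jump, a minimal jail definition is refreshingly small. The path, hostname, and address below are assumptions, not a recommendation:

```
# /etc/jail.conf
media {
    path = "/usr/local/jails/media";
    host.hostname = "media.home.arpa";
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

With `jail_enable="YES"` in rc.conf, the base system's jail service starts it at boot; no daemon, no orchestrator, no CRDs.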
No wireless lossless audio means these are a hard pass for me. I really expected Apple of all folks to figure that out since they engineer their entire stack, hardware to software, but they’re still just pushing the same bluetooth audio that my Airpods Pro 2’s consume (which are half the price and incredibly excellent). Sony’s LDAC is niche, but sounds objectively better to my ears than the AAC used on Apple’s kit when I opt to use my Walkman+XM4s.
As for wired listening? My XM4s sound okay wired in, and at home I’ve got critical-listening kit already. Adding a USB-C cable to the Max is not appealing given that 3.5mm already exists, USB-C cables are heavier than analog audio wires, and more corps block USB ports in general or mess with them in ways that corrupt the audio stack.
Give me wireless CD-quality audio and I’ll be a happy dinosaur. Until then, I have zero reason to upgrade what I currently have.
Yeah, at least for certain kinds of music. Don't get me wrong, I'm not soapboxing out here against folks who enjoy lossy music (my flatmates enjoy our local library transcoded to MP3s), nor am I going to praise-be the "high-res" audio movement. I just happened to have someone sit me down for a critical listening session on quality kit with a CD I had ripped before and my iPod with the MP3, and it was night and day to my ears.
Am I some golden-eared savant? Heck nah. I still listen to electronic mixes in shitty YouTube audio, because a lot of it isn't mastered in CD quality anyhow; I also enjoy leaning back with a good CD rip of classic rock or orchestral jams on my HD800s or my B&W 684s. I like the different experiences these setups offer, but my preference is always for lossless just as a matter of preservation regardless of whether I can hear it or not.
But have you tried the ABX test I linked to with a proper set up, e.g. your HD800?
If you compare a CD on a proper system with an MP3 on an iPod, you're really comparing apples to oranges. It also depends on which encoder the MP3 was created with. The iTunes encoder, for example, was infamous for rather bad quality.
Yes. I've done ABX testing on my HT, and my HD800s, repeatedly. I do ABX testing with album versions on occasion to determine if, as an example, the 1989 release of Pretty Hate Machine actually sounds (subjectively) better (to my ears) than the 2010 remaster, or that the original issue of Garbage 2.0 sounds better than the 20th Anniversary remaster.
The original test that got me into the scene way back in the 00s was toggling between a CD player and my iPod on the same Hi-Fi setup, with different kinds of material to cover my tastes at the time. First it was done blindly by the store owner (so I couldn't see which was which), and then he let me take over to see for myself for a while.
I get the skepticism, really, and I'm not making claims that I'm some golden-eared savant (because I have medical confirmation I'm very much not, at least when it comes to audible speech). I am saying that for me specifically, I find the differences in the files on my equipment to generally be noticeable enough to warrant the increase in stored quality.
Would I recommend the HD800s to folks? Hell no, I bought those because I wanted them, Colorware paint job and all. That's the secret to most audiophile kit in my experience: we're all just paying for stuff we think looks and sounds cool, and that comes with a premium over function alone.
The H2 enables lossless audio over wireless. So this reads like a temporary limitation that software might solve down the road. But knowing Apple's track record for enabling features in partially dormant hardware ... I wouldn't buy these expecting that.
Given Apple's very recent track record on promising things and then watching them vanish into the ether - not to mention a lifetime being burned buying into future promises that never materialize ("MCE is the future of the entertainment experience!" (RIP in Win7), "CableCARD will free you from the tyranny of locked down hardware!" (RIP from the get-go), "Unfolded Circle 3 will finally support serial from the dock!") - means I don't buy on what it could do tomorrow, but what's on offer out of the box from day one.
Tired of accumulating scar tissue and burn marks in the name of shareholder value.
Could you point to where Apple claims that H2 enables lossless audio over wireless for the AirPods Max 2? I don’t see that claim on the spec sheet. What I see is this note:
“Ultra-low latency audio and Lossless Audio listening requires a wired USB‑C connection and compatible content from supported apps and services.”
So it doesn’t appear that lossless wireless is supported at all, even with Vision Pro.
While you nailed me on the subjectively vs objectively (early morning flub on my part), I'm going to respectfully push back on the "no audible difference between lossy and lossless" with a huge asterisk: it depends on the content, it depends on the mastering techniques, and it depends on the equipment, but there is a discernible difference in a lot of media between lossy and lossless audio, and that difference is easier to pick out by folks who take care of their hearing and listen on quality kit.
Which excludes 90% of the populace by default, and thus I never bought into the whole audiophile hype anyway. Let folks enjoy what they like, on the equipment they like. I ain't here to judge, just share.
I dig the core concept, because it's what I'm replicating in my own homelab at present sans GHA and with a brief flirtation with Podman over Docker.
Thing is, like others have pointed out, relying solely on GHA is just not a great idea. If you're doing your own self-hosted runners you can effectively debug, then sure, that's not a bad idea necessarily, but using the GitHub runners?
Nope. Sorry, just not something I can trust on the free tier.
That being said, I do like the core concept (deploying the essentials to a plain-jane Debian instance - bare metal or virtual - and just bootstrapping via compose files and some form of push), and I'd like to see it refined more for homelab users, especially if you can guarantee some degree of security best practices with it (e.g., SELinux compatibility and/or auto-deploy tools like Wazuh).
I'll poke at it since I gotta blow away my Debian install anyway (went down a rabbit hole on GPU acceleration and Podman that left it far more butchered than I'd like to keep supporting); just give folks more options than GHA and focus more on essential services.
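For reference, the "essentials on plain Debian via compose" shape I keep gravitating toward is basically this. Images, ports, and mounts below are placeholders, not endorsements:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - ./jellyfin/config:/config
      - /mnt/media:/media:ro
  freshrss:
    image: freshrss/freshrss:latest
    restart: unless-stopped
    ports:
      - "8081:80"
    volumes:
      - ./freshrss/data:/var/www/FreshRSS/data
```

One `docker compose up -d` (or the Podman equivalent) from a fresh install and you're back in business, which is the whole appeal of the bootstrap-via-push idea.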