You've had enough arguments with people in both this thread and the previous that I'm pretty sure you understand what the issue is with your use of the word "free".
What you are offering is NOT a free tool -- it is a demo for a tool for which you are charging $12/month. No reasonable person would consider a grand total of 3 exports enough to justify calling this a "free" tool.
This is to say nothing of your violation of the AGPL in your use of MuPDF, which has been pointed out here and elsewhere.
But of course, you're free to Show HN a paid product; just kindly don't insult our collective intelligence in the process.
agreed. i have never seen a single hacker news user (let alone an assortment of them) saying "i switched my 2fa to this after seeing how great it was!" Not really sure how one 'switches their 2fa' to an LLM...
This thread is about the 2FA app, not the LLM app. I don't care about the LLM app. What's this witch hunt? This app literally solved a (self-inflicted) problem I'd had for years, where I was keeping an old phone around just for MFA. I even thought about creating an iOS app compatible with Aegis files (actually I even _started_ working on that, but didn't get far) just to solve my problem. Now I don't have to, thanks to a comment here, and that's why I posted. Geez. I guess I'll stick to negative comments in the future, they seem to be more trustworthy.
I mean I get it, astroturfing is a real problem and an annoying one for communities. But I also have no idea how to prove to you that I am neither a bot nor shilling here.
I really wish those offering speech-to-text models provided transcription benchmarks specific to particular fields of endeavor. I imagine performance would vary wildly when using jargon peculiar to software development, medicine, physics, and law, as compared to everyday speech. Considering that "enterprise" use is often specialized or sub-specialized, it seems like they're leaving money on Dragon's table by not catering to any of those needs.
It's a cohort study, so you can only control for confounders. The second paragraph of the discussion addresses the healthy-vaccinee effect you're referring to.
> At the heart of the problem is the tendency for AI language models to confabulate, which means they may confidently generate a false output that is stated as being factual.
"Confabulate" is precisely the correct term; I don't know how we ended up settling on "hallucinate".
The bigger problem is that, whichever term you choose (confabulate or hallucinate), that's what they're always doing. When they produce a factually correct answer, that's just as much a fabrication from training data as when they're factually incorrect. Either term falsely implies that they "know" the answer when they get it right, but "confabulate" is worse because there aren't "gaps in their memory"; they're just always making things up.
About 2 years ago I was using Whisper AI locally to translate some videos, and "hallucinations" is definitely the right phrase for some of its output! So just like you might expect from a stereotypical schizo: it would stay on-task for a while, but then start ranting about random things, or "hearing things", etc.
He apparently pretended not to have written it, despite its DNS pointing to his servers and both Certificate Transparency logs and the Internet Archive attributing the page to his domain. Compare the top comment thread in the first link above to his reply there:
Which part of the second link? Some of it is very accurately sourced; he 100% operated a loli bot which targeted subreddits banned by Reddit for illegal content. There's no getting around that. Near the end they also point out that Drew changes his ToS for SourceHut to align with banning projects he disagrees with, which makes GitHub look like paradise.
> the incident is that he wrote a document detailing repeated bad behaviour from a well known community figure? And this is a bad thing?
He collected all Stallman statements about Epstein and related subjects (this is perfectly ok) and then wrote his own summaries which completely misrepresent the things which were actually said. So what happened was that a lot of people just skimmed the summaries and concluded that Stallman molests children, or says that it's ok to do so etc etc.
In fact, I have taken to linking the Stallman report and adding "don't read the summaries, read only the things that Stallman actually said". This only works if I believe the person is acting in good faith, of course. I would suggest the same to you.
Kinda horrible to see the 4chan bigots use the same strategy to try to discredit Drew DeVault, implying ownership through fake accounts and smear campaigns they created themselves. Pretty much all the allegations on that page are circumstantial evidence, especially the bot-ownership parts; sircmpwn even took the bot down, citing those bigots using it to scrape child porn.
And then the bit about the dmpwn guy posting things on image boards with the tag dmpwn, and forgetting to remove it from screenshots? lol, really?
Having experienced the same kind of doxxing attempts by 4chan bigots, /pol/ and kiwifarms, I think I am qualified to comment on how they operate.
Maybe someone needs to summon the Antichrist a second time to thin out the herd, huh?
Thanks for mentioning it! Makes me glad to live a life out of the spotlight and to be generally ignorant of stuff like this going on. Would not want to be targeted like that :/
I hate that this is now a thing you can ask unsarcastically.
Just use the tool you like best, man; screw what other people think. Yes, there are people who will go "you're bad because you use a tool made by a guy who said something wrong about Stallman" (or whatever he did exactly again). These people are not worth your attention.
My bad, I shouldn't have said tainted. Trustworthy is what I had in mind.
I moved my private repos to sr.ht ages ago because it was the open source, free software, ethical, long-lived approach, and a step away from the mega corporations and everything going on with them.
Certain aspects of human nature, as they apply to the corporate world, can be acknowledged and understood, even if they're not excuses when they lead to the downfall of a prominent organization. When you give someone a big title, a dump truck full of cash, and a mandate to innovate, human nature dictates that most people will internalize the idea that "because I was given all this, I must be competent", even if they very obviously are not. Typically the outcome is a "bold plan forward" notable for lacking any actual clear solution to the company's main problems. In one example I know of, the CEO decided to pivot from an unrelated field toward launching a cryptocurrency, and cooked up a cartoonishly dangerous marketing scheme to support the idea. One person ended up dying as a result, and the company then purged every mention of crypto from its website. (And yes, the company collapsed soon afterwards.)
While it's easy to blame the CEO with their oversized salary, the blame for such disasters doesn't just lie with them. After all, arguably the most important roles of the board are to hire a good CEO, ensure the CEO is actually performing as they should, and fire them if they're not. When politics, cronyism, or again, simple incompetence, lead the board to also fail at its job, you end up with the long, slow decline into obscurity we've seen so often in the tech world.
I also don’t think people should equate their history with their current state. They lied to their users and told them they’d never sell their data, and then they did. That is much worse than never having made the promise. I don’t trust them.
But, they have far too much support and are far too embedded to disappear anytime soon.
First, your business model isn't really clear, as what you've described so far sounds more like a research project than a go-to-market premise. Computational pathology is a crowded market, and the main players all have two things in common: access to huge numbers of labeled whole-slide images, and workflows designed to handle such images. Without the former, your project sounds like a non-starter, and given the latter, the idea you've pitched doesn't seem like an advantage. Notably, some of the existing models even have open weights (e.g. Prov-GigaPath, CTransPath).
Second, you've talked about using this approach to make diagnoses, but it's not clear exactly how this would be pitched as a market solution. The range of possible diagnoses is almost unlimited, so a useful model would need training data for everything (not possible). My understanding is that foundation models solve this problem by focusing on one or a few diagnoses in a restricted scope, e.g. prostate cancer in prostate core biopsies. The other approach is to screen for normal in clearly-defined settings, e.g. Pap smears, so that anything that isn't "normal" is flagged for manual review. Either approach, as you can see, demands a very different training and market positioning strategy.
Finally, do you have pathologists advising you, and have you done any sort of market analysis? Unless you're already a pathologist (and probably even if you were), I suspect that having both would be of immense value in deciding a go-forward plan.
Hi, thanks for the comment! Just wanted to respond to some of the points here:
>> First, your business model isn't really clear, as what you've described so far sounds more like a research project than a go-to-market premise.
This is not really a core component of our business but more so was just something cool that I built and wanted to share!
>> Computational pathology is a crowded market, and the main players all have two things in common: access to huge numbers of labeled whole-slide images, and workflows designed to handle such images. Without the former, your project sounds like a non-starter, and given the latter, the idea you've pitched doesn't seem like an advantage. Notably, some of the existing models even have open weights (e.g. Prov-GigaPath, CTransPath).
We have partnerships with a few labs to get access to a large amount of WSIs, both H&E and IHC, but our core business really isn't building workflow tools for pathologists at the moment.
>> Second, you've talked about using this approach to make diagnoses, but it's not clear exactly how this would be pitched as a market solution. The range of possible diagnoses is almost unlimited, so a useful model would need training data for everything (not possible). My understanding is that foundation models solve this problem by focusing on one or a few diagnoses in a restricted scope, e.g. prostate cancer in prostate core biopsies.
I agree that this isn't really a market solution in its current state (it isn't even close to accurate enough), but I think the beauty of this approach is its general-purpose nature: it works not only across tissue types but also across different pathology tasks, like IHC scoring and cancer subtyping. The value of foundation models lies in the fact that tasks generalize. For example, part of what made this so interesting to me was that general-purpose foundation models like GPT-5 can even perform this super niche task! Obviously there are path-specific foundation models with their own ViT backbones too, but it is pretty incredible that GPT-5 and Claude 4.5 can already perform at this level.
Yes, to the best of my knowledge most FDA-approved solutions are point solutions, but I am not yet convinced this is the best way to deploy solutions in the long term. For example, there will always be rare diseases without enough of a market to justify a specialized solution, and in those cases, general-purpose models that can generalize to some degree may be crucial.
-- Exactly 193 of 200 participants completed the study in each group (which, for a study administered in a community setting, is an essentially impossibly high completion rate).
-- No author disclosures -- in fact, no information about the authors whatsoever, other than their names.
-- No information on exposures, lifestyles, or other factors which invariably influence infection rates.
-- Inappropriate statistical methods, which focus very heavily on p values.
-- Only 3 authors, which for a randomized controlled trial involving hundreds of people in different settings with regular follow-up, seems rather unlikely.
Also, look at the timings:
Received: 16-09-2025
Accepted: 29-09-2025
Available online: 14-10-2025
That's relatively fast but also the paper is not super in-depth.
And in general, it seems that the "International Journal of Medical and Pharmaceutical Research" is not well known.
See the editorial board; there aren't even pictures there: https://ijmpr.in/editorial-board/
> Incidence of ARIs was documented through monthly follow-up visits and self-reported symptom diaries validated by physician assessment.
This is basically impossible to accomplish for 386 participants who aren't in some form of captivity (e.g. incarcerated, institutionalized, in the military, or at a boarding school). Nobody cares enough to maintain a "self-reported symptoms diary" and make monthly visits for some study. If they had actually run the study as designed, they would have ended up with zero usable participants, even starting from 400.
Saying nothing of the ethics of giving half the vitamin D-deficient patients presenting at your clinic a placebo.
> (e.g. incarcerated, institutionalized, in the military, or a boarding school).
That's a pretty big list. Add retirement communities and your pool increases even more. Add to that the fact that this is India, where the population is at least 5x bigger and much more concentrated...
Most retirement communities don't have that much supervision.
Regardless, you can get a lot of data, but all of it is from people whose lifestyles differ significantly from the average person's, so it's questionable how well it applies. The military gets more physical fitness (we already know most of us need more). Boarding school implies young participants, children or slightly older, so while the data isn't useless, there are age-related differences to control for (military as well, unless you can get officers who are older, thus allowing you to control for age).
> Most retirement communities don't have that much supervision
Retirement communities in India are relatively new. Most older folks get taken care of at home by domestic staff, who, given India's demographics, are incredibly cheap and thus plentiful.
There are retirement communities in India and end-of-life care centers as well. Societies change, and thanks to the internet, societies change faster than ever.
> It is: Negative, Unproductive, Antagonist, non Factual and frankly futile (unless provocative).
The comment gives clear reasoning and makes claims about the contents of the paper that are supported by reading the paper. To call it "non-factual" is simply incorrect. The word "futile" is nonsensical in this context.
You used three different words to complain that the comment critiques the study. There is nothing wrong with such critique in comments here, and indeed a healthy community requires that critique can rise to the top where it's warranted.
> Have you done an experiment lately to show counter proof? Beside claims what else do you have!
This is completely logically irrelevant, and suggests a fundamental misunderstanding of logic. Pointing out that a study is flawed does not require providing evidence for the opposite of the study's conclusion.
> This paper is very positive
A paper being "positive" has nothing whatsoever to do with whether its finding is correct, and it also has nothing whatsoever to do with whether its methodology is valid, and it also has nothing whatsoever to do with whether it accurately reports what was actually observed (i.e. whether any kind of fraud was involved).
> It is in fact (by personal experience)...
It is fundamentally impossible to know those things "by personal experience". That's why studies exist.
This was meant for Gwerbret (but he deleted the comment). Now it's addressed to whom it may concern :)
Standing by your words, you think this paper is shady and you are questioning the work and results behind it.
Moreover, your comment is somehow at the very top, misleading users or at least ridiculing the paper.
Answering you: it is indeed very much connected to the LEVELS of Vit D, not the absence of it. You fail to understand and acknowledge the importance of the results (even though you already know and confirm the benefits of Vit D).
Regulating its levels (keeping them higher than average) prevents health issues by regulating many biological functions/pathways, raising immunity and lifespan in general. This is the real cure, which prevents incredibly terrible future health issues and suffering.
Edit: Just for this effort, this paper deserves credit. Bravo.
> Just for this effort, this paper deserves Credit. Bravo.
I just went out and did a study myself. But I got 10,000 people, and 100% of the participants gave usable data, with a full record of every action taken, and every possible result. My study shows with 99.99% confidence that vit D is actually _bad_ for you. I hope you will congratulate my positive result (saving people from the dangerous effects of vit D !!) Or at the very least, congratulate me for my effort.
Obviously I completely fabricated that. Do you see how _claiming_ something doesn't mean it's true? Can you see the many red flags in my paragraph above? The other posters are pointing out similar red flags in the main article that's been shared.
I think the strongest criticism is just that being short of just about anything would cause significant effects: water, calories, any vitamin, protein, etc.
Wanting to agree with a study's conclusions, and so ignoring its weaknesses and red flags, is bad scientific practice, which further reinforces the comment questioning the value of this publication.
Maybe you are right that ignoring a study's weaknesses and red flags is bad scientific practice.
It's just that I have been personally involved in this topic for so long that I know for a fact that the results are good. I also know of many good studies being ridiculed and buried on purpose. No one in the scientific community would dare to criticize a paper in that way. Constructive criticism is connected to intellectual, educated minds; all the rest deserves the same coin, or to be ignored.
I still don't understand why that comment is on top (I have seen this too many times).