
I was kinda tempted, but I don't think there's a good metric to measure it. (The job market is heavily influenced by interest rates, etc., and if I were confident at predicting macroeconomics I would just bet on stocks.)


What if we picked 10 prompts that it seems like an AI should be able to depict well, but can't yet? Then, if the best AI tool in February can do the majority of them, you win?


I'll bet against it, though. I don't actually believe pure text-prompt-to-image will improve much (not in a few months, at least). I just believe there will be more non-text tools to guide AI, like LoRA and ControlNet, and that they will become more accessible.

ControlNet kinda did what you said, but on a different timeframe: it used to be quite difficult to tell an AI to generate a person "sitting with their legs crossed". Today it's relatively easy to do this with ControlNet, but still hard with a text prompt only.

Edit: the sibling comment also made me question why I would ever take a random bet on the internet.


We started this thread with:

sambleckley> producing a specific image with generative AI is sometimes almost impossible

justrealist> Who could possibly think this will be the case six months from now?

me> I'd bet on this being true six months from now

I thought you disagreed with me on this, but it sounds like maybe not?


I was just pointing out that the improvement in Stable Diffusion we've seen since DALL-E 2 and Midjourney came out wasn't just about "quality of image", but also about "having something specific and moderately complex in mind". Hence my mention of textual inversion vs. LoRA/ControlNet.



