Hacker Times | new | past | comments | ask | show | jobs | submit | marko-k's comments | login

He's a brilliant poet, though often misunderstood. Skip Stopping by Woods on a Snowy Evening and try Birches [1], A Tuft of Flowers [2], or The Witch of Coös [3] instead.

[1] https://www.poetryfoundation.org/poems/44260/birches

[2] https://www.poetryfoundation.org/poems/44275/the-tuft-of-flo...

[3] https://www.poetryfoundation.org/poetrymagazine/browse?volum...


>Skip Stopping by Woods on a Snowy Evening

No way, that's one of my favorites. I don't want to let pretension keep me from liking what I like.


Sound out the syllables in _Stopping by Woods on a Snowy Evening_. That's where you'll find some of the depth that you might feel is lacking there.

https://www.poetryfoundation.org/poems/42891/stopping-by-woo...

Even Frost's _specific choice of letters_, and how they're placed in the meter, supports the idea that death is coming soon for the reader. His consonants are more liquid and smooth in the first stanza ("whose", "village", "will", "th-"), with the few stop-sounding consonants getting softened either by another sound ("woods") or getting quickly passed over by the meter ("I think I know", "stopping here").

The consonance for these stop consonants then increases until the end, which is what makes the infamous line, "The woods are lovely, dark and deep," stand out with such contrast.

And with such precise use of sound in the poem, it's very likely not an accident that it ends with the full stop "p" in "sleep." Despite his protestations to the contrary, this is the moment the speaker is stopping and drifting off to sleep/possibly dying.


I agree with everything you've said here. I've scanned it many times myself, and it's obviously brilliant. I probably just chose my words poorly.

What I meant was that "Stopping by Woods" is so widely read that I wanted to share a few other poems that don't get as much attention.


If Responses is replacing Assistants, is there a quickstart template available—similar to the one you had for Assistants?

https://github.com/openai/openai-assistants-quickstart


  Location: New York City, NY. 
  Remote: Yes  
  Willing to Relocate: No  
  Technologies: LLM & AI APIs (OpenAI, Anthropic, Structured Outputs, Tool/Function Calling, Assistants), ReactJS, NextJS, Python, Django, NodeJS, SQL, JavaScript/TypeScript, Stripe  
  Résumé/CV: Available upon request  
  Email: mkrkelj1@gmail.com  
I've built viral products and led development on AI-powered tools. I'm a senior software developer specializing in AI applications for education and document processing. I code across the entire stack, have sharp design intuition, and deliver standout UI/UX. I also regularly run technical workshops and communicate complex ideas clearly.


I’d pivot away from targeting developers and focus on "normal" power users—maybe even pick a specific vertical. Most devs won’t pay for a form builder, but people *will* pay for beauty and ease of use (which is why Typeform works). If you want paying users, you’ll probably need to add integrations so they don’t have to mess with the backend.

Also, I’d rethink the UX for the AI era. Instead of a traditional form builder, what if users just *described* the form they need, and AI generated it instantly? Then they could refine it with natural language—“Add an email field” or “Make question 3 optional”. That way, they go from zero to a polished form in seconds.

You already have the structured data and the code logic—you'd just have to build the AI integration. Luckily, most of the models can speak JSON now with structured outputs, so it's entirely possible.
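To make that concrete, here's a rough sketch (the field names and spec shape are made up for illustration, not taken from the product): the model emits a structured form spec as JSON, and a refinement like "make question 3 optional" is just a small edit to that same structure.

```python
import json

# Hypothetical form spec a model could emit via structured outputs.
form_json = """
{
  "title": "Event signup",
  "fields": [
    {"id": "name",  "label": "Full name",     "type": "text",  "required": true},
    {"id": "email", "label": "Email address", "type": "email", "required": true},
    {"id": "diet",  "label": "Dietary needs", "type": "text",  "required": true}
  ]
}
"""

form = json.loads(form_json)

# A natural-language refinement like "make question 3 optional" becomes
# a trivial patch to the spec (the model would emit this edit, too):
form["fields"][2]["required"] = False

# The renderer only ever sees the spec, never the conversation.
required_ids = [f["id"] for f in form["fields"] if f["required"]]
print(required_ids)
```

Since the app already knows how to render a form from structured data, the AI layer reduces to "conversation in, validated spec out."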


Thank you, this is definitely on the roadmap for me.


  Location: New York City, NY. 
  Remote: Yes  
  Willing to Relocate: No  
  Technologies: LLM & AI APIs (OpenAI, Anthropic, Structured Outputs, Tool/Function Calling, Assistants), ReactJS, NextJS, Python, Django, NodeJS, SQL, JavaScript/TypeScript, Stripe  
  Résumé/CV: Available upon request  
  Email: mkrkelj1@gmail.com  
I've built viral products and led development on AI-powered tools. Senior software developer with extensive experience in AI tooling, particularly structured outputs and applications in education, text, and document processing. I can code across the stack and have a keen sense for design, UI, and UX. I also run technical workshops for large groups and communicate complex topics clearly.


If that were the case, they’d be neck and neck with Anthropic and Claude. But ChatGPT has far more market share and name recognition, especially among normies. Branding clearly plays a huge role.


ChatGPT is still benefiting from first-mover advantage, which they've leveraged to get to the position they're in today.

Over time, competitors catch up and first mover advantage melts away.

I wouldn’t attribute OpenAI’s success to any extremely smart marketing moves. I think a big part of their market share grab was simply going (and staying) viral for a long time. Manufacturing virality is notoriously difficult (and based on the usability and poor UI of ChatGPT's early versions, it feels like they got lucky in a lot of ways).


I think that has more to do with the multiple year head start and multiple tens of billions of dollars in funding advantage.


And you think that is due to their model naming?


I prefer Anthropic's models but ChatGPT (the web interface) is far superior to Claude IMHO. Web search, long-term memory, and chat history sharing are hard to give up.


That's first mover advantage.


Not OP, but I work on AI in higher ed at a major university.

I get the concerns about AI grading. The solution isn't to have AI grade entire assignments at once. Instead, break down the assessment into smaller, discrete tasks and develop a grading rubric around those. The idea is to limit how the AI can respond - usually to simple binary choices like completed/not completed, true/false, etc. (Also, the models have been RLHF’d to generally put a positive spin on things, so if anything they’re likely to be overly generous in assessment.)

From there, provide the AI with the answer key, student response, rubric, and any other necessary context, then use the Structured Outputs API to force consistent responses for each discrete task. I've had the most success using boolean values or simple enums (like "Correct", "Partially Correct", "Incorrect"). You can include a field for reasoning, then chain AI calls to get a second assessment as verification.
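A minimal sketch of the schema side of this, stdlib only (the field names and enum values here are illustrative, not the exact rubric described above): the JSON Schema is what you'd hand to the Structured Outputs API so the model can only answer each discrete task with a constrained value, and the parser is the belt-and-suspenders check on the reply.

```python
import json

# JSON Schema passed to the Structured Outputs API so the model is
# constrained to a reasoning string, a fixed verdict enum, and a boolean.
GRADING_SCHEMA = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},
        "verdict": {
            "type": "string",
            "enum": ["Correct", "Partially Correct", "Incorrect"],
        },
        "task_completed": {"type": "boolean"},
    },
    "required": ["reasoning", "verdict", "task_completed"],
    "additionalProperties": False,
}

ALLOWED_VERDICTS = set(GRADING_SCHEMA["properties"]["verdict"]["enum"])


def parse_grade(raw: str) -> dict:
    """Parse one structured grading reply and sanity-check the verdict."""
    grade = json.loads(raw)
    if grade["verdict"] not in ALLOWED_VERDICTS:
        raise ValueError(f"unexpected verdict: {grade['verdict']}")
    return grade


# Example of a reply the API would be constrained to produce for one task:
reply = (
    '{"reasoning": "Matches the answer key.", '
    '"verdict": "Correct", "task_completed": true}'
)
grade = parse_grade(reply)
print(grade["verdict"], grade["task_completed"])
```

Chaining a second verification call is then just running the same parse over a second model's assessment of the first one's reasoning field.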

That's the high-level gist of it, though I'm skipping a lot of details. I have a basic demo of how this works on my site if you're interested: https://www.markokrkeljas.com/projects/real-time-task-tracki...

