The key difference is that the model is able to write the program as it’s executing it.
Before, it needed to write the code and have an external program execute it. Here it can change its mind mid-execution. Kinda like the "aha moment" observed in CoT.
Was hoping to upgrade from my 2017 iMac, and this new Studio Display is quite a bummer. 9 years later and it's basically the same display spec (just a bump from 500 nits to 600 nits).
It’s probably because their paid advertising spots are being viewed by bots instead of humans. Amazon still charges the advertisers, since it detects the traffic as coming from a non-bot user based on the user agent.
This figure is sort of an overclaim, imho. If you look inside the paper, the reported figure is actually 97 FPS (vs. 135 FPS for 3DGS on their device). The 2400 FPS they advertise is for a degraded version that completely ignores transparency... but transparency is both what makes these representations support interesting volumetric effects and what makes rendering challenging (because it requires sorting things). Drawing 1M triangles at 2400 FPS on their hardware is probably just quite normal.
The puzzle assumes that the room temperature is greater than the cold milk's temperature. When I added that the room temperature is, say, -10 °C, Mercury failed to see the difference.
Under any reasonable assumptions for the size and shape of the cup, the amount of coffee, the makeup of the air, etc., the room being -10 °C won't change the result.
It would only matter if the air were able to cool the coffee to a temperature less than that of the milk in under 2 minutes.
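For anyone who wants to check, here's a quick lumped Newton's-law-of-cooling sketch. All the numbers (masses, temperatures, the cooling constant k) are made up for illustration; real cups also lose heat by evaporation, so treat this as a toy model, not a definitive answer:

```python
import math

# Assumed values -- chosen for illustration only
m_coffee, m_milk = 0.20, 0.02          # kg
T_coffee, T_milk = 90.0, 5.0           # deg C
T_room = -10.0                         # deg C (the "cold room" variant)
k, t = 0.005, 120.0                    # cooling constant (1/s), wait time (s)

def mix(T):
    """Instant mixing: mass-weighted average of coffee at T and the cold milk."""
    return (m_coffee * T + m_milk * T_milk) / (m_coffee + m_milk)

def cool(T0):
    """Newton's law of cooling for t seconds toward T_room."""
    return T_room + (T0 - T_room) * math.exp(-k * t)

milk_first = cool(mix(T_coffee))   # add milk now, then wait 2 minutes
milk_later = mix(cool(T_coffee))   # wait 2 minutes, then add milk
print(f"milk first: {milk_first:.2f} C, milk later: {milk_later:.2f} C")
# → milk first: 40.64 C, milk later: 41.26 C
```

Interestingly, in this idealized model the milk-first-minus-milk-later gap works out to (1 − e^(−kt)) · (milk fraction) · (T_room − T_milk), so which cup ends warmer depends only on whether the room is warmer or colder than the milk; setting T_room back to 20 °C in the same code makes milk-first win.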
Installing it in Edge makes Edge freak the fuck out about your default search engine being changed. It tried three times to force me to change my search engine back, once saying it had already changed it for me to protect me.
Ctrl+F'd for Perplexity. I knew Google was cooked the minute Perplexity worked better for questions about an obscure embedded systems SDK. It has little documentation, but a lot of mailing list and github issues. Google spits out the front page of the project and shrugs; Perplexity actually answers the question. The usual caveats for LLM hallucination apply.
Same. Try the "books on the Battle of Midway" query on Perplexity. The results are great and include the book mentioned in the article (authored by the Naval Aviator).
The difference in the dates example seems right to me
20 October 2024 and 2024-20-10 are not the same.
Dates in some locales can be written as yyyy-dd-MM. The string could also be a catalog/reference number. So it seems right that their embedding similarity is not perfectly aligned.
So, it's not a tokenizer problem. The text meant different things according to the LLM.
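A quick way to see the ambiguity (the yyyy-dd-MM reading is just one illustrative interpretation):

```python
from datetime import datetime

s = "2024-20-10"

# Read as ISO yyyy-MM-dd, the string is simply invalid (there is no month 20):
try:
    datetime.strptime(s, "%Y-%m-%d")
except ValueError:
    print("not a valid yyyy-MM-dd date")

# Read as yyyy-dd-MM, it parses fine and names the date from the other string:
d = datetime.strptime(s, "%Y-%d-%m")
print(d.strftime("%d %B %Y"))  # → 20 October 2024
```

So "2024-20-10" only matches "20 October 2024" under one of several plausible readings, which is consistent with the embeddings not treating them as identical.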