Hacker News | new | past | comments | ask | show | jobs | submit | throwaway81523's comments | login

It's news to me that "elementary functions" include roots of arbitrary polynomials, but the wiki article in fact says that they're included at least some of the time. I remember reading about the Risch algorithm (for finding closed form antiderivatives) a long time ago and elementary functions were just the ordinary ones found on calculators.

Interestingly, the abs (absolute value) function is non-elementary. I wonder if exp-minus-log can represent it.


EML can represent the real absolute value, by way of sqrt(x^2), as f(x) = exp((1/2)·log(x^2)), so long as we agree with the original author's proviso defining log(0) and exp(-∞). Traditionally, log(0) isn't defined, but the original author stipulated it to be -∞, with all arithmetic working over the "extended reals", which makes

    abs(0)
    = f(0)            ; by defn
    = exp(1/2 log 0)  ; by defn
    = exp(-∞/2)       ; log 0 rule
    = exp(-∞)         ; extended real arith
    = 0               ; exp(-∞) rule
If we don't agree with this, then abs() could be defined with a hole punched out of the real line. The logarithm function isn't exactly elegant in this regard with its domain restrictions. :)
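
As a sanity check, IEEE-754 floating point already implements exactly these extended-real conventions, so the definition runs as-is in JavaScript/TypeScript, where Math.log(0) is -Infinity and Math.exp(-Infinity) is 0 (a sketch, not anyone's production code):

```typescript
// abs(x) = exp((1/2) * log(x^2)), leaning on IEEE-754 semantics:
// Math.log(0) evaluates to -Infinity, and Math.exp(-Infinity) to 0,
// which are precisely the extended-real rules stipulated above.
function emlAbs(x: number): number {
  return Math.exp(0.5 * Math.log(x * x));
}
```

emlAbs(0) comes out as exactly 0, and emlAbs(-3) as 3 up to rounding; no hole needs to be punched out of the real line.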

abs(x) = sqrt(x*x), no?

I think the issue might be the branch cut in the sqrt function. Per the wiki article, elementary functions have to be differentiable in the complex plane at all but a finite number of points.

> When a user clicks the "back" button in the browser, they have a clear expectation: they want to return to the previous page. Back button hijacking breaks this fundamental expectation.

It seems pretty stupid. Instead of expanding the SEO policy bureaucracy to address spammers hijacking the back button, the browser should have been designed in the first place to never allow that hijacking to happen. The second-best approach is to fix it now. While they're at it, they should also make it impossible to hijack the mode one.... oh yes, Google itself does that.


What about all the very legitimate uses of programmatically adding history entries?

Please explain the legitimate uses. Not once have I ever encountered a website that does something useful by modifying the behavior of my browsing history.

Any single-page application, such as YouTube, Gmail, or Discord.

It lets content (videos) or connections (chat) persist while emulating a paginated browsing experience.

When it's done right you don't notice it at all.


YouTube doesn't implement a back function. A real back function would take you back to the same page you came from. If you click a video from the YouTube home page, then click the back button, YouTube will regenerate a different home page with different recommendations, losing the potentially interesting set of recommendations you saw before. You are forced to open every link in a new tab if you want true back functionality.

(rant warning)

Well, if I wanted to return to the parent screen in a single page application, I'd click on the back button in the app itself. No need to prevent me from back tracking in the exact order of my browsing should I need it.

I especially hate YouTube's implementation: on my older PC I can never tell what state it's actually in during whatever it's trying to accomplish, and it often keeps playing audio from a previous video after I backspace out. I resort to opening every link in a new tab.


https://html.spec.whatwg.org/multipage/nav-history-apis.html...

The spec kind of goes into it, but aside from the whole issue of SPAs needing to behave like individual static documents, the big thing is that it's a place to store state. Some of this can be preserved through form actions and anchor tags, but some cannot.

Let's say you are on an ecommerce website. It has a page for a shirt you're interested in. That shirt has different variations - color, size, sleeve length, etc.

If you use input elements and a form action, you can save the state that way: the server redirects the user to the same page but with additional query parameters in the URL. You now have a link to that specific variation to copy and send to your friend.

Would anyone really ever do that? Probably not. More than likely there'd just be an "add to cart" button. This is serviceable, but it's not necessarily great UX.

With the History API you can replace the URL with one that embeds the state of the shirt, so that when you send the link to your friend it is exactly the variation you want. Or you can bookmark it to come back to later. Or you can bookmark multiple variations, without having to interact with the server at all.

Similarly, on that page you have an image gallery for the shirt. Without the History API, maybe you click on a thumbnail and it opens a preview, which is a round trip to the server and a hard reload. Then you click next: same thing, new image. Then again, and again, and each time you are adding a new item to the history stack. That might be fine or even preferred, but not always! If I want to get back to my shirt, I now have to navigate back several pages, because each image has been added to the stack.

If you use the History API, you can push a new URL onto the stack when you open the image viewer. Then, as you navigate, it updates that entry to point to the specific image, which lets the user link to that specific image in the gallery. When you're done and want to go back, you only have to press Back once, because we weren't polluting the stack with a history entry for each image change.
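
The gallery pattern can be sketched like this (TypeScript; the state shape and URLs are made up for illustration, and `HistoryLike` just abstracts the browser's `history` object so the idea is testable):

```typescript
// Minimal slice of the browser History API used here.
interface HistoryLike {
  pushState(data: unknown, unused: string, url?: string): void;
  replaceState(data: unknown, unused: string, url?: string): void;
}

// Hypothetical viewer state; field names are illustrative.
interface GalleryState { shirtId: string; image: number; }

// Opening the viewer pushes exactly ONE history entry...
function openGallery(hist: HistoryLike, state: GalleryState): void {
  hist.pushState(state, "", `/shirt/${state.shirtId}?img=${state.image}`);
}

// ...and stepping through images REPLACES that entry rather than
// pushing new ones, so a single Back press returns to the shirt page,
// while the current URL still deep-links to the visible image.
function showImage(hist: HistoryLike, state: GalleryState, image: number): GalleryState {
  const next = { ...state, image };
  hist.replaceState(next, "", `/shirt/${next.shirtId}?img=${image}`);
  return next;
}
```

In a browser you'd pass `window.history` and also listen for `popstate` to restore the state object when the user navigates back.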


Thanks for the detailed and thoughtful reply! I agree that in both of the scenarios you mentioned, this API does provide better usability.

I guess what feels wrong to me is the implicitness of this feature: I'm not sure whether clicking on something is going to add to history or not (until the back button breaks, and then I really know).


Especially since, who cares about traditional SEO any more?

Can someone give a quick explanation of why this is important? It looks interesting, but it seems like it would take a lot of background to really understand.

Hey, author here, so clearly I'm biased.

There is a branch of computer science (close to the SAT/constraint-solving communities) studying data structures that represent Boolean functions succinctly, yet in a way that still lets you do something with them. Like quickly counting how many models you have where x1=true and x2=false. Of course, if you list every model, that is easy but not succinct. So we are looking for a tradeoff between succinctness and tractability.

OBDDs are one of the earliest such data structures. You can see them in many different ways depending on your taste/background (automata for finite languages, decision trees with sharing, flowcharts, nested if-then-else...), but they are a very natural way of representing a Boolean function. Plus, they have nice properties. One of them is that you can take any OBDD and minimize it in linear time into a canonical form; that is, a given Boolean function has exactly one minimal canonical OBDD (just as every regular language has a canonical minimal DFA; this is actually the same result).

The problem with OBDDs is that they are not the most succinct-yet-tractable thing you can come up with, and Knowledge Compilation has studied many interesting, more succinct generalizations of them. But almost all such generalizations lose canonicity; that is, there is no way of naturally defining a minimal and unique representation of the Boolean function with these data structures, nor any way of "minimizing" them. You get better succinctness, but you lose something elsewhere.

There are SDDs, which are also canonical in some sense, but the canonical version may be way bigger than the smallest SDD, which is not satisfactory (though they are interesting in practice).

TDDs, introduced in this paper, give such a generalization of OBDDs, where you can minimize toward a canonical form. The idea is to go from "testing along a path" to "testing along a tree", which allows more compact circuits. For example, one big limitation of OBDDs is that they cannot efficiently represent Cartesian products, that is, functions of the form f(X,Y) = f1(X) AND f2(Y) with X and Y disjoint. You can do this kind of thing with TDDs.

That said, they are less succinct than SDDs and other generalizations, so canonicity is not free. The main advantage of having canonicity and minimization is to unlock the following algorithm (used in practice for OBDDs) to transform a CNF formula (the standard input of a SAT solver) into a TDD:

Let F = C1 AND ... AND Cn, where each Ci is a clause:

- Build a small TDD Ti for each Ci (not too hard; clauses are just disjunctions of variables or negated variables).
- Combine T1 and T2 into a new TDD T' and minimize.
- Iteratively combine T' with T3 ... Tn, minimizing at each step.

In the end, you have a TDD computing F. Being able to minimize at each step helps keep the search space reasonable (it will blow up eventually, though; we are solving something way harder than SAT) and may give you interesting practical performance.
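
For the OBDD case, the clause-by-clause compile loop looks roughly like this (TypeScript; the names are mine, and hash-consing plays the role of the minimize step, since with the two reduction rules every intermediate OBDD is already in canonical form for a fixed variable order):

```typescript
// A minimal reduced OBDD (ROBDD) sketch. Hash-consing in mk() enforces
// the two reduction rules, so every diagram built this way is already
// canonical for the fixed variable order; no separate minimize pass is
// needed. (No memo cache on and(), so it is exponential; sketch only.)
type Node = number; // index into the node table; 0 = false, 1 = true

interface Literal { v: number; sign: boolean }

class Bdd {
  private nodes: { v: number; lo: Node; hi: Node }[] = [
    { v: Infinity, lo: 0, hi: 0 }, // terminal false
    { v: Infinity, lo: 1, hi: 1 }, // terminal true
  ];
  private unique = new Map<string, Node>();

  mk(v: number, lo: Node, hi: Node): Node {
    if (lo === hi) return lo; // elimination rule
    const key = `${v},${lo},${hi}`;
    let n = this.unique.get(key); // sharing rule
    if (n === undefined) {
      n = this.nodes.length;
      this.nodes.push({ v, lo, hi });
      this.unique.set(key, n);
    }
    return n;
  }

  variable(v: number): Node {
    return this.mk(v, 0, 1);
  }

  // Shannon-expansion AND of two diagrams.
  and(a: Node, b: Node): Node {
    if (a === 0 || b === 0) return 0;
    if (a === 1) return b;
    if (b === 1) return a;
    const na = this.nodes[a], nb = this.nodes[b];
    const v = Math.min(na.v, nb.v);
    const [a0, a1] = na.v === v ? [na.lo, na.hi] : [a, a];
    const [b0, b1] = nb.v === v ? [nb.lo, nb.hi] : [b, b];
    return this.mk(v, this.and(a0, b0), this.and(a1, b1));
  }

  // OBDD of one clause (a disjunction of literals), with literals
  // sorted by variable index.
  clause(lits: Literal[]): Node {
    let acc: Node = 0; // the empty clause is false
    for (let i = lits.length - 1; i >= 0; i--) {
      const { v, sign } = lits[i];
      acc = sign ? this.mk(v, acc, 1) : this.mk(v, 1, acc);
    }
    return acc;
  }

  // The loop from the comment: build a diagram per clause, then fold
  // them together with AND; canonicity is maintained at every step.
  compile(cnf: Literal[][]): Node {
    return cnf.map((c) => this.clause(c)).reduce((f, t) => this.and(f, t), 1);
  }
}
```

On the toy CNF (x1 OR x2) AND (x1), compile() returns literally the same node as variable(1), which is the canonicity property in action.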


Thanks, I'll have to take your word that this TDD minimization is useful in practice, the way SAT solvers are useful despite SAT being NP-hard. The general problem of Boolean function minimization is of course horrendously intractable; as you say, way harder than SAT.

Out of curiosity: when you talk about SDD, are you referring to Hierarchical Set Decision Diagrams or Sentential decision diagrams? I did my Ph.D. on the former, hence the curiosity :-)

In my comment, it was for Sentential Decision Diagrams.

How does this relate to OTDDs - Ternary Decision Diagrams?

I did not know about Ternary Decision Diagrams; sorry for the name clash. I had a look. You can always re-encode Ternary Decision Diagrams into binary ones by encoding each variable x with two bits x^1 and x^2, so Ordered Ternary Decision Diagrams are the same (modulo encoding) as OBDDs, hence less succinct than our Tree Decision Diagrams. If you consider Read-Once Ternary Decision Diagrams, you get something roughly equivalent to FBDDs (modulo encoding). So this is incomparable with our TreeDDs: some functions are easy for TreeDD and hard for RO-TernaryDD (like a low-treewidth/high-pathwidth CNF formula), and some functions are easy for RO-TernaryDD and hard for TreeDD (take anything separating FBDD from OBDD, for example).

Doesn't it depend what you're doing? xz data compression or some video codecs? Retrograde chess analysis (endgame tablebases)? Number Field Sieve factorization in the linear algebra phase?

Every day should be CSS Naked Day and also Javascript Naked Day.

This is a 54 minute video. I watched about 3 minutes and it seemed like some potentially interesting info wrapped in useless visuals. I thought about downloading and reading the transcript (that's faster than watching videos), but it seems to me that it's another video that would be much better as a blog post. Could someone summarize in a sentence or two? Yes we know about the refresh interval. What is the bypass?

Update: found the bypass via the youtube blurb: https://github.com/LaurieWired/tailslayer

"Tailslayer is a C++ library that reduces tail latency in RAM reads caused by DRAM refresh stalls.

"It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules, using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton. Once the request comes in, Tailslayer issues hedged reads across all replicas, allowing the work to be performed on whichever result responds first."
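
The "hedged read" part, stripped of the DRAM specifics, is just race-the-replicas (a generic sketch; this is not Tailslayer's actual C++ API, which operates on replicated DRAM channels rather than promises):

```typescript
// Generic hedging: issue the same read against every replica at once
// and return whichever answers first. Slow replicas (e.g. one stalled
// behind a DRAM refresh) then stop mattering for tail latency.
async function hedgedRead<T>(replicas: Array<() => Promise<T>>): Promise<T> {
  return Promise.race(replicas.map((read) => read()));
}
```

A production version would also cancel or ignore the losing reads; Promise.race alone just discards their results.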


FYI, if you have a video you can't be bothered watching but would like to know the details, here are 2 options that I use (there are others, of course):

1. Throw the video into NotebookLM; it gives transcripts of all YouTube videos (AFAIK). Go to Sources on the left and press the arrow key. Ask NotebookLM to give you a summary, discuss anything, etc.

2. Notice that YouTube now has a little diamond icon with "Ask" next to it, between the Share and Save icons. This brings up Gemini and you can ask questions about the video (it has no internet access). This may be premium-only. I still prefer Claude for general queries over Gemini.


I don't want an AI summary, I just want the author to write concisely, and hopefully make a text post instead of a video.

The video could be shorter; some of the goofiness might not please the most pressed people, but that is also what makes it fresh and stand out.

There was nothing goofy about the NERV-logo coffee mug, that was extremely serious business.

> using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton

Seems odd to me that all three architectures implement this yet all three leave it undocumented. Is it intended as some sort of debug functionality or what?


it's explained in the video, and there's no way I'll be explaining it better than her

You could, however, link to the timestamp where that particular explanation starts. I'm afraid I don't have time to watch a one-hour video just to satisfy my curiosity.

This is approximately the section in the video titled "Memory controllers hate you" (https://www.youtube.com/watch?v=KKbgulTp3FE&t=1399s), combined with the following section.

The actual explanation starts a couple minutes later, around https://youtu.be/KKbgulTp3FE?t=1553. The short explanation is performance (essentially load balancing against multiple RAM banks for large sequential RAM accesses), combined with a security-via-obscurity layer of defense against rowhammer.


I've found Gemini useful in extracting timestamps for particular spots in videos. Presumably it works with transcriptions, given how fast it is.

The three answers it found were:

- Avoiding lock-in to them: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1914

- Competitive advantage: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1852

- Perceived Lack of Use Case: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1971

Those points do actually exist in the video, I checked. If there are more, I don't know about them, as I haven't yet watched the rest of the video.


Just use the Ask button on YouTube videos to summarize, that's what it's for.

>Just use the Ask button on YouTube videos to summarize,

For anyone confused because they don't see the "Ask" button between the Share and Bookmark buttons...

It looks like you have to be signed-in to Youtube to see it. I always browse Youtube in incognito mode so I never saw the Ask button.

Another source of confusion is that some channels may not have it or some other unexplained reason: https://old.reddit.com/r/youtube/comments/1qaudqd/youtube_as...


Not complaining about the particular presenter here: this is an interesting video with some decent content, I don't find the presentation style overly irritating, and it documents a lot of experimental work that has obviously been done to get the end result (rather than just summarising someone else's work). Such a goofy, elongated style, infuriating if you are looking for quick hard information, is practically required in order to drive wider interest in the channel.

But the “ask the LLM” thing is a sign of how off-kilter information passing has become in the current world. A lot of stuff is packaged deliberately inefficiently, because that is the way to monetise it, or sometimes just to game the search and recommendation systems so it reaches potentially interested people at all; then we are encouraged to use a computationally expensive process to distil the information back out.

MS's documentation for large chunks of Azure is that way, but with even less excuse (they aren't a content creator needing to drive interest by being a quirky presenter as well as a potential information source). Instead of telling me to ask Copilot to guess what I need to know, why not write some good documentation that you can reference directly (or that I can search through)? Heck, use Copilot to draft that documentation if you want to (but please have humans review the result for hallucinations, missed parts, and other inaccuracies before publishing).


The video definitely wouldn't be over 50 minutes if she was targeting views. 11-15 minutes is where you catch a lot of people, repeating and bloviating 3 minutes of content to hit that sweet spot of the algorithm. It's sad you can't appreciate when someone puts passion into a project.

This is the damage AI does to society. It robs talented people of appreciation. A phenomenal singer? Nah, she just uses Auto-Tune, obviously. Great speech? Nah, obviously an LLM helped. Besides, I don't have time to read it anyway. All I want is the summary.


> It's sad you can't appreciate when someone puts passion into a project.

It is sad that reading comprehension is dropping such that you interpreted my comment that way.


I don't consider AI to threaten "damage to society" the way you seem to, but I did find it interesting to think about how ridiculously well-produced the video was, and what that might signify in the future.

I kept squinting and scrutinizing it, looking for signs that it was rendered by a video model. Loss of coherence in long shots with continuity flaws between them, unrealistic renderings of obscure objects and hardware, inconsistent textures for skin and clothing, that sort of thing... nope, it was all real, just the result of a lot of hard work and attention to detail.

Trouble is, this degree of perfection is itself unrealistic and distracting in a Goodhart's Law sense. Musicians complain when a drum track is too-perfectly quantized, or when vocals and instruments always stay in tune to within a fraction of a hertz, and I do have to wonder if that's a hazard here. I guess that's where you're coming from? If you wanted to train an AI model to create this type of content, this is exactly what you would want to use as source material. And at that point, success means all that effort is duplicated (or rather simulated) effortlessly.

So will that discourage the next generation of LaurieWireds from even trying? Or are we going to see content creators deliberately back away from perfect production values in order to appear more authentic?


Yes, I do want the summary because my time is (also) valuable. There is a reason why book covers have synopses, to figure out whether it's worth reading the book in the first place.

In this case the useful info in the book could be distilled down to the cover blurb.

This video really should have been two videos anyway: one to describe how DRAM works (old hat to some of us nerds, but interesting and new to lots of others), and a second to explain how she got around the refresh interval. Then nerds could skip the first one completely. In reality the two videos could be about 5 minutes each.


Or give the video to NotebookLM; you can also get the (unformatted) transcript using this technique.

If you just want the transcript, there is a Show Transcript button in the video description.

I think Laurie is still trying to develop her style. She's been at it for just a few years and her delivery greatly improved over that time span. Not a fan (yet?), but I've seen a few of her videos from different time periods.

Perhaps she or someone on her team (the camera work suggests at least a +1) thinks that this geeky/ditsy persona gets more clicks. Other successful YTers behave similarly. I don't find it useful or entertaining, but others might.

Having said this, I myself would've liked the video to be a bit more succinct.


As requested:

https://qht.co/item?id=47713090

I agree, not everyone has 54 minutes to watch a video full of fluff (I tried, but only got so far, even on 1.5x speed).


Unnecessarily negative imo.

I like the video because I can't read a blog post in the background while doing other stuff, and I like Gadget Hackwrench narrating semi-obscure CS topics lol


> I cant read a blog post in the background

You can consume technical content in the background?


This is a thing people do: convince themselves they can consume technical content subconsciously. It's not how the brain works, though; it will just give you the impression you are following something.

Not all technical content is the same or has the same level of importance. This video does not introduce anything that I need to be able to replicate in my work, so I don't need to catch every detail of it, just grasp the basic concepts and the reasons for doing it.

Lots of people will have a show on or something while they're cooking or cleaning or doing other things. Is it worse for it to be interesting technical content with fun other stuff thrown in than if it was an episode of Friends or Frasier or Iron Chef or 9-1-1: Lone Star or The Price is Right?

I guess I'm only allowed to have The Masked Singer on while I make dinner.


if your foreground work doesn't occupy your brain, why not?

Because I prefer not to think about the hair I'm removing from my shower drain?

FWIW, I like her videos but I usually prefer essays or blog posts in general as they're easier to scan and process at my own rate. It's not about this particular video, it's about videos in general.

I get a similar feeling when friends send me 2-minute-plus Instagram reels; it's as if my brain can't engage with the content. I'd much rather read a few paragraphs about the topic, and it'd probably take less time too.

Same; thanks to modern technology, videos can be transcribed and translated into blog posts automatically, though. I wish that were the default and/or easier to find.

For years I've been thinking "I should watch the WWDC videos because there's a lot of Really Important Information in there", but... they're videos. In general I find that I can't pay attention to spoken word (videos, presentations, meetings) that contains important information, probably because processing it costs a lot more energy than reading.

But then I tune out / fall asleep when trying to read long content too, lmao. Glad I never did university or do uni level work.


You're saying that the audio channel of that video has the useful information all by itself. The video channel, which consumes most of the bandwidth, is useless. You could go a little further and say about 80% of the 54-minute audio is also useless, and it could be cut to maybe 10 minutes. Keep going and say to post it as text instead of audio, so you can read it in 2 minutes. Now you don't have to put it in the background.

Your comment was several paragraphs, and I am busy, so I can't read it all. Can you summarize what you are asking for? I might be able to help later.

I've exchanged emails with Dejan (Bunny CEO) about Bunny-related stuff. It's not THAT hard to contact him, or at least wasn't, some years back. Maybe he's a bigger cheese now.

These are audio recordings right? Could they just say so?

adding text to speech is a great idea, thank you!

Yeah, something is wrong there, and I don't see actual breaks in the post, though maybe I missed something. Solving the rotors to get to the plugboard isn't so easy! The permutation changes with every character as the rotors advance.

Fraud as a service! The next big thing!!!

Presidential pardon insurance, like audit insurance but for breaking laws instead of filing taxes.
