Hacker News | new | past | comments | ask | show | jobs | submit | comex's comments

Typically, it can't address a full 64-bit memory space, but it can address more than 32 bits' worth, and all pointers are 64 bits long for alignment reasons.

It's possible but rare for systems to have 64-bit GPRs but a 32-bit address space. Examples I can think of include the Nintendo 64 (MIPS; apparently commercial games rarely actually used the 64-bit instructions, so the console's name was pretty much a misnomer), some Apple Watch models (standard 64-bit ARM but with a compiler ABI that made pointers 32 bits to save memory), and the ill-fated x32 ABI on Linux (same thing but on x86-64).

That said, even "32-bit" CPUs usually have some kind of support for 64-bit floats (except for tiny embedded CPUs).


The 360 and PS3 also ran like the N64. On PowerPC, 32-bit mode on a 64-bit processor just enables a 32-bit mask on effective addresses. All the rest is still there, like the upper halves of the GPRs and instructions like ld.

See also this video comparing Corridor Key to traditional keyers:

https://www.youtube.com/watch?v=abNygtFqYR8


It depends on what you want to do with it.

If you just want the optimizer to be able to constant-fold a value, then yes, either of those will work.

If you want to be able to use the value in the other contexts the parent mentioned that require constant expressions as a language rule, then you generally need constexpr. As an exception, non-constexpr variable values can be used if they’re const (not ‘happens to not vary’) and have integer or enum type (no floats, structs, pointers, etc.). This exception exists for legacy reasons and there’s no particular reason to rely on it unless you’re aiming for compatibility with older versions of C++ or C.

Even if you don’t need to use a variable in those contexts, constexpr evaluation is different from optimizer constant evaluation, and generally better if you can use it. In particular, the optimizer will give up if an expression is too hard to evaluate (depending on implementation-specific heuristics), whereas constexpr will either succeed or give an error (depending only on language rules). It’s also a completely separate code path in the compiler. There are some cases where optimizer constant evaluation can do things constexpr can’t, but most of those have been removed or ameliorated in recent C++ standards.

So it’s often an improvement to tag anything you want to be evaluated at compile time as constexpr, and rarely worse. However, if an expression is so trivial that it’s obvious the optimizer will be able to evaluate it, and you don’t need it in contexts that require a constant expression, then there’s no concrete benefit either way and it becomes a matter of taste. Personally, I wouldn’t tag this particular pi/2 variable constexpr or const, because it does satisfy those criteria and I personally prefer brevity. But I understand why some people prefer a rule of “always constexpr if possible”, either because they like the explicitness or because it’s a simpler rule.


It doesn’t strike me as AI. The writing is reasonably information-dense and specific, logically coherent, a bit emotional. Rarely overconfident or vague. If it is AI then there was a lot more human effort put into refining it than most AI writing I’ve read.

As long as IPv4x support was just something you got via software update rather than a whole separate configuration you had to set up, the vast majority of servers probably would have supported IPv4x by the time addresses got scarce.

However, if it did become a problem, it might be solvable with something like CGNAT.


CGNAT would also be easier on routers, since currently they need to maintain a table mapping each port in use to the destination IP and port. With IPv4x, the routing information can be determined from the packet itself, so no extra memory would be required.

That's only true when forwarding IPv4x -> IPv4. When you're going the reverse direction and need to forward IPv4 -> IPv4x, you still need a table.

This article has a whole lot of "it's not X, it's Y"…

In reality this isn't much of a change. For decades it's been a given that mainstream CPUs have vector instructions. RISC-V was the odd man out in _not_ mandating vector instructions. Even so, most CPU code doesn't use them.

And this is unlikely to change anytime soon. Yes, ML workloads are becoming much more popular, but CPUs are still not parallel enough to do a good job at them. Only occasionally is it a good idea to try anyway.

Edit: Note that there is something novel about the approach RISC-V and ARM are now following, namely being vector-length agnostic, but this is unlikely to have much impact on how much CPU code is vectorized in the first place. It improves scalability a little, but also makes the compiler's job a little harder. It is not going to fundamentally transform the extent to which CPU code uses vector instructions.


That would not be a good approach on Macs where most users are using reduced/laptop keyboards that have no Insert key.

In this respect, Apple got pretty lucky. Most users were not using reduced keyboards in 1987 when they originally decided to add the Control key separate from Command. Plus, Mac OS didn't even have a native terminal at the time; I assume there were terminal emulators for networking/serial use but I can't imagine that was top-of-mind for Apple either.

Regardless, Cmd-C is definitely a more convenient shortcut than Control-Insert, even if you do have the keys for the latter.


> Mac OS didn't even have a native terminal at the time; I assume there were terminal emulators for networking/serial use but I can't imagine that was top-of-mind for Apple either.

I think it was in their mind. The manual for the keyboard (yes, keyboards had manuals back then) says the keyboard has “special keys that work in applications running in alternative operating systems” (https://www.cvxmelody.net/Apple%20Extended%20Keyboard%20II%2...)


I agree with you about Cmd-C being more convenient, but that’s beside the point.

My point was that on all three operating systems Ctrl-C has an unambiguous function: send SIGINT. It is more important for SIGINT to be consistent than for copy to be consistent. Accidentally send SIGINT to a job that has been running for an hour? That hour of work may now be gone. This is a deliberate action that should not happen by mistake. Copying is not like that: Ctrl-C on Windows (copy) doesn’t do anything destructive.


Based on a search, the SQLite reimplementation in question is Frankensqlite, featured on Hacker News a few days ago (but flagged):

https://qht.co/item?id=47176209


Flagging on HN is getting insane.

LLMs are really bad at anything visual, as demonstrated by pelicans riding bicycles, or Claude Plays Pokémon.

Opus would probably do better though.


How could they be any good at visuals? They are trained on text after all.

Supposedly the frontier LLMs are multimodal and trained on images as well, though I don't know how much that helps for tasks that don't use the native image input/output support.

Whatever the cause, LLMs have gotten significantly better over time at generating SVGs of pelicans riding bicycles:

https://simonwillison.net/tags/pelican-riding-a-bicycle/

But they're still not very good.


I have to admit I'm seeing this for the first time and am somewhat impressed by the results, and I even think they will get better with more training, why not... But are these multimodal LLMs still LLMs, though? I mean, they're still LLMs, but with a sidecar that does other things, and the image training takes place outside the LLM, so in a way the LLM still doesn't "know" anything about these images; it's just generating them on the fly upon request.

Some of the LLMs that can draw (bad) pelicans on bicycles are text-input-only LLMs.

The ones that have image input do tend to do better though, which I assume is because they have better "spatial awareness" as part of having been trained on images in addition to text.

I use the term vLLMs or vision LLMs to refer to LLMs that are multimodal for image and text input. I still don't have a great name for the ones that can also accept audio.

The pelican test requires SVG output because asking a multimodal output model like Gemini Flash Image (aka Nano Banana) to create an image is a different test entirely.


Maybe we should drop one of the L's

Claude is multimodal and can see images, though it's not good at thinking in them.

Shapes can be described as text or mathematical formulas.

An SVG is just text.

> We wanted to download a clip using yt_dlp (a Python program). Terminal told us, this would require dev tools (which it doesn't).

It is offering to install Apple's developer tools package which includes Python. The download is ~900MB, much of which consists of large Swift and C compiler binaries. That's pretty large if you only need Python, but in practice you probably do want the full dev tools because Python packages often compile C extensions when installed.

> Except, that non-blessed python could not access the internet because of some MacOS "security" feature.

There is no such security feature. Perhaps a TLS issue?

> Another "security" feature requires all apps on Apple computers to be notarized, even the ones I built myself. This used to have a relatively easy workaround (right click, open, accept the risk). Now it needs a terminal command.

You can also do it from System Settings. Or if you are actually building on the same machine, you can avoid the problem as described at the bottom of this page:

https://lapcatsoftware.com/articles/catalina-executables.htm...

> On some Apple systems, this fails to show any audio devices, "for security reasons".

While the implementation is somewhat janky, there are real and valid security reasons to require consent for using the microphone.

> There is no indication anywhere that the hard drive is getting full.

No proactive warnings (does any OS do that?), but there are plenty of ways to see how full the disk is, including the newish System Settings -> General -> Storage, which breaks down storage use and offers some ways to save space.

> There is no simple way to reset the computer to factory conditions.

System Settings -> General -> Erase All Content and Settings.


> There is no such security feature. Perhaps a TLS issue?

Definitely user error. If you install Python from the website, instead of using the developer tools or Homebrew (which requires the developer tools), you also have to run the `Install Certificates.command` which comes with it.

