Defined numbers as int64? Or arbitrary precision bignums? Then JavaScript would not be able to support JSON without an external bignum library, which many, if not most, applications do not actually need, and JSON would be less convenient to use.
Or would you prefer they had specifically defined IEEE double precision as the numeric representation? Then JSON numbers would be useless for qemu's offsets and other applications that need numbers not representable as IEEE double precision.
Leaving it unspecified means that implementations support what they can. If you end up needing actual int64s in JavaScript, you can drop in a BigNum library and get them. It's true that not all numbers can be represented by all implementations, but that was true already.
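A minimal sketch of that workaround in Node.js (the field name "offset" is just for illustration; the json-bigint package on npm automates the same idea):

```javascript
// Stock JSON.parse maps every number to an IEEE double, so integers
// beyond 2^53 silently lose precision:
const lossy = JSON.parse('{"offset": 9007199254740993}'); // 2^53 + 1
console.log(lossy.offset); // 9007199254740992 — off by one

// A drop-in fix: pre-quote the oversized field on the wire and revive
// it as a BigInt in the parser callback.
const text = '{"offset": "9007199254740993"}';
const exact = JSON.parse(text, (key, value) =>
  key === "offset" ? BigInt(value) : value
);
console.log(exact.offset); // 9007199254740993n — exact
```

Nothing in the JSON grammar had to change; only the consuming code did.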
They could have provided a rich range of integer types, with specifications in the standard. Leaving it unspecified is really the worst choice if you're trying to interoperate with real applications and libraries. Supporting "what they can" is great for crappy implementations, and terrible if you're trying to consume this stuff and get work done.
A "rich range" of anything is contrary to the spirit of JSON. JSON became popular in part because it stood in simple contrast to technologies like XML that provided a "rich range" of everything.
Leaving it unspecified means that JSON as a format is capable of arbitrary precision, without requiring that implementations carry around bignum libraries if their particular application doesn't need them.
JSON also has no floating-point type. It has a generic number type, and it is up to the parsing application to decide how to handle it. You have complete freedom: there is nothing to stop you treating it as an integer, float, decimal, or bignum.
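To make that concrete (a plain Node.js sketch; the numeric value is chosen only because it falls outside double precision): given the raw token text of a JSON number, the consumer picks whatever representation suits it.

```javascript
// JSON's grammar has one "number" production; the type is the
// consumer's choice. Given the raw token text of a number:
const token = "36028797018963969"; // 2^55 + 1, not representable as a double

const asDouble = Number(token); // rounds to the nearest IEEE double
const asBigInt = BigInt(token); // exact, arbitrary precision

console.log(asDouble === 36028797018963968); // true — rounded down
console.log(asBigInt === 36028797018963969n); // true — exact
```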
And this causes real bugs in real programs:
https://lists.gnu.org/archive/html/qemu-devel/2011-05/thread...