Well, the big problem to solve with SciPy is the fact that there is more C, C++, and Fortran in the SciPy codebase than Python (http://www.ohloh.net/p/scipy). Part of why Python has succeeded in scientific computing is integration of legacy codebases (via f2py, C extensions, etc.). There are probably man-years of work involved in devising a solution, which by the time it's complete may be basically irrelevant or, worse, may fragment the community.

I personally think that going down this rabbit hole (porting 10 years of scientific Python libraries to PyPy) would amount to an exercise in vanity rather than producing the kinds of revolutionary changes to array-oriented computing that need to happen soon to deal with the large-scale data-processing challenges of the present and future. Having recently used GPUs to speed up statistical inference algorithms by a factor of 50 or more, I am not that motivated by a JIT beating C in some cases (as Travis wrote: "C speed is the wrong target").

Many in the SciPy community are convinced that NumPy will not provide the computational foundation we need going forward, and they are going to step up and start building the next-generation NumPy (or whatever it ends up being called). We'd rather have more of the smartest computer scientists in the Python community focused on that problem (building more sophisticated data-processing pipelines for use in Python) than on speeding up Python code that, by my estimation, doesn't matter that much.
Have you looked at Theano (http://deeplearning.net/software/theano/)? It is a Python-based JIT for GPUs. Using Python, you build the computation pipeline symbolically, and the formulas are automatically converted to GPU code and scheduled as the backend sees fit (this can be extended to multiple GPUs, and could theoretically scale to an even higher level).
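To make the "build the pipeline symbolically" point concrete, here is a toy sketch of the idea in plain Python. This is not Theano's actual API, just an illustration: operators build an expression graph instead of computing anything, and evaluation happens later, which is what leaves a backend free to reorder, fuse, or offload the work.

```python
# Toy symbolic-expression graph (illustration only, not Theano's API).
# Arithmetic on Var objects builds a graph; nothing runs until eval().

class Var:
    def __init__(self, name):
        self.name = name

    def __add__(self, other):
        return Op('+', self, other)

    def __mul__(self, other):
        return Op('*', self, other)

    def eval(self, env):
        # A leaf just looks up its value in the environment.
        return env[self.name]


class Op(Var):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

    def eval(self, env):
        # A backend could instead compile this graph to GPU code;
        # here we just walk it recursively.
        l, r = self.left.eval(env), self.right.eval(env)
        return l + r if self.op == '+' else l * r


x, y = Var('x'), Var('y')
expr = x * y + x                 # builds a graph; no computation yet
print(expr.eval({'x': 3.0, 'y': 4.0}))  # 15.0
```

The key design point is the separation between graph construction and execution: once the whole formula exists as data, the scheduler sees more than one operation at a time.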
I think this is a promising idea for the future of array-oriented computing, as it can exploit one more level of parallelism/scaling than the current NumPy paradigm, which is limited to one operation at a time, with the user providing the ordering of operations.
AFAIUI, what you're missing is that PyPy can (and often does) interface directly with C libraries, from RPython. So the prospect of re-implementing those specialized codebases isn't a real issue: only the CPython-API-based wrappers would need re-implementing. I believe those are a small part of the total code you mention.
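For illustration, here is one way to call into a C library without touching the CPython extension API at all, using ctypes (which PyPy supports; RPython code reaches C via rffi in a similar spirit). This is a minimal sketch, and the exact library lookup is platform-dependent:

```python
# Call cos() from the C math library without any CPython-API wrapper.
import ctypes
import ctypes.util

# find_library("m") may return None on some platforms; CDLL(None)
# then falls back to the symbols already linked into the process.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Bindings like this only describe the C function's signature; there is no compiled glue code tied to CPython's internals, which is why they carry over to PyPy.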
But those packages do not just depend on C libraries. They also depend on the NumPy C API. If emulating the C API of CPython is too much non-fun work, I would expect the same to be true of the NumPy C API.
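To see what that dependency looks like, here is a pure-Python approximation of the kind of raw-buffer access that extension modules get through the NumPy C API (e.g. `PyArray_DATA`). The example uses NumPy's ctypes bridge purely as an illustration; real extensions do the equivalent in C:

```python
# Illustration: C extensions built on the NumPy C API work on the raw
# buffer behind an ndarray. Here we peek at that buffer via ctypes.
import ctypes
import numpy as np

a = np.arange(5, dtype=np.float64)

# Roughly what PyArray_DATA(a) hands a C extension: a typed pointer
# into the array's memory.
ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
print(ptr[2])  # 2.0
```

Emulating this layer for a different interpreter means reproducing not just ndarray semantics but these C-level entry points, which is why it is comparable in effort to emulating the CPython C API itself.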