Actually, I'd say that neither of those is really a lesson here. Facebook already does #1 (and has for some time). #2 was already understood as part of their threat model (which is why FBJS exists), but a bug allowed an attacker to bypass their filtering.
If you HAD to draw a lesson here, I'd go with something more along the lines of
Even if you protect your site properly against clickjacking and CSRF, an XSS vulnerability allows an attacker to bypass all of those protections.
Yes, but Facebook was already filtering out JavaScript: their filter just happened to be slightly broken ;)
"According to Facebook, it turned out that some older code was using PHP's built-in parse_url function to determine allowable URLs. For example, while parse_url("javascript:alert(1)") yields a scheme of "javascript" and a path of "alert(1)", adding whitespace gives a different result: parse_url(" javascript:alert(1)") does not return a scheme and has a path of "javascript:alert(1)"."
"This function parses a URL and returns an associative array containing any of the various components of the URL that are present.
This function is not meant to validate the given URL, it only breaks it up into the above listed parts. Partial URLs are also accepted, parse_url() tries its best to parse them correctly."
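That "not meant to validate" caveat is the crux. Python's `urllib.parse.urlparse` behaves much like PHP's `parse_url` here, so as an illustrative analogue (not Facebook's actual code): it happily splits whatever you hand it into components without judging whether the result is a sane URL.

```python
from urllib.parse import urlparse

# Like PHP's parse_url, urlparse splits but does not validate.
parts = urlparse("javascript:alert(1)")
print(parts.scheme)  # 'javascript'
print(parts.path)    # 'alert(1)'

# Partial URLs are accepted too -- the parser does its best:
parts = urlparse("//example.com/a")
print(parts.netloc)  # 'example.com'

# Leading whitespace is the murky case: older CPython releases left
# it in place (yielding an empty scheme, the same trap Facebook hit),
# while newer ones strip it as a security fix (CVE-2023-24329).
# Either way, the lesson is not to build a filter on parser quirks.
parts = urlparse(" javascript:alert(1)")
```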
It seems parse_url was never designed for filtering out JavaScript. Since the payload made it through to the front end, Facebook's filter evidently accepted the link with the embedded javascript: scheme.
In essence, Facebook was checking the "scheme" return value and blocking any URLs where the scheme was "javascript". By adding a space, the scheme ended up blank and the URL slipped through. Facebook has never allowed FBML apps to use javascript: URLs in links.