A selfless move. I have a lot of respect for the Dell philanthropy arm for doing this; it will surely benefit a bunch of kids down the road, even if only by making them more financially literate.
It's difficult for me to parse what exactly your argument is. Facebook built a system to ingest third-party data. Whether you feel that such technology should exist to ingest data and serve ads is, respectfully, completely irrelevant. Facebook requires any entity (e.g. the Flo app) to gather consent from their users before sending user data into the ingestion pipeline, per the terms of their SDK. The Flo app, in a phenomenally incompetent and negligent manner, not only sent unconsented data to Facebook, but sent -sensitive health data-. Facebook then did what Facebook does best, which is ingest this data _that Flo attested was not sensitive and was collected with consent_ into their ads systems.
There are three possibilities here:
#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.
#2. Facebook had inadequate mechanisms for evaluating their partners; they could have caught this problem but failed to do so, and therefore Facebook was negligent.
#3. Facebook turned a blind eye to clear red flags that should've prompted further investigation, and therefore Facebook was malicious.
Personally, given Facebook's past extremely egregious behaviour, I think it's most likely a combination of #2 and #3: inadequate mechanisms to evaluate data partners, plus conveniently ignored signals that the data was ill-gotten, making Facebook negligent if not malicious. In either case Facebook should be held liable.
pc86 is taking the position that the issue is #1: that Facebook did everything they could, and still, the bad data made it through because it's impossible to build a system to catch this sort of thing.
If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed as it cannot be operated safely, and that Facebook should still be held liable for building and operating a harmful technology that they could not adequately govern.
No one is arguing that FB has not engaged in egregious and illegal behavior in the past. What pc86 and I are trying to explain is that in this instance, based on the details of the court docs, Facebook did not make a conscious decision to process this data. It just did. Because this data, like the billion+ data points Facebook receives every single second, was sent to Facebook labeled as consented, non-sensitive data, when it was in fact unconsented and very sensitive health data. But this is the fault of Flo. Not Facebook.
You could argue that Facebook should be more explicit in asking developers to self-certify and label their data correctly, or not send it at all. You could argue that Facebook should bolster its signal detection when it receives data from a new app for the first time. But to argue that a human at Facebook blindly built a system to ingest data illegally, without any attempt to prevent it, is a flawed argument: there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents (which, per the attestation Flo sent, it did). This case is very squarely #1 in your example, and maybe a bit of #2.
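To make the self-certification point concrete, here's a rough sketch of what a labeled event could look like on the app side. To be clear, this is hypothetical: the event structure and attestation fields are invented for illustration and are not Facebook's actual SDK or API.

    # Hypothetical illustration only: the event structure and attestation
    # fields are invented for this example, NOT Facebook's real SDK/API.
    import json

    def build_event(app_id: str, event_name: str, payload: dict) -> dict:
        """Build an analytics event carrying the developer's own attestation."""
        return {
            "app_id": app_id,
            "event": event_name,
            "payload": payload,
            # The developer self-certifies these; the platform has no
            # independent way to verify them, which is the crux here.
            "attestation": {
                "user_consented": True,
                "contains_sensitive_health_data": False,
            },
        }

    # Flo-style misuse: the event plainly encodes health data, but the
    # attestation claims otherwise, and the pipeline takes it at face value.
    event = build_event("flo-like-app", "PREGNANCY_WEEK_CHOSEN", {"week": 8})
    print(json.dumps(event, indent=2))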
>Facebook did not make a conscious decision to process this data.
Yes, it did. When Facebook built the system and allowed external entities to feed it unvetted information without human oversight, that was a choice to process this data.
> without any attempt to prevent it, is a flawed argument: there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents
This seems like a giant assumption to make without evidence. Given the past bad behavior from Meta, they do not deserve this benefit of the doubt.
If those systems exist, they clearly failed to work. However, the court documents indicate that Facebook didn't build out systems to check whether incoming data was health data until after the fact.
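For what it's worth, even a crude platform-side screen is easy to imagine: scan developer-defined event names for likely health terms and quarantine hits for review before they reach the ads pipeline. A toy sketch (the term list is mine, not anything Facebook is known to have shipped):

    # Toy sketch of screening developer-defined event names for likely
    # health data. The term list is illustrative only.
    import re

    SENSITIVE_HEALTH_TERMS = {
        "pregnancy", "ovulation", "period", "menstrual",
        "hiv", "diagnosis", "therapy", "medication",
    }

    def looks_like_health_data(event_name: str) -> bool:
        """Flag event names containing likely health-related terms."""
        tokens = re.split(r"[^a-z]+", event_name.lower())
        return any(tok in SENSITIVE_HEALTH_TERMS for tok in tokens)

    # Events like the first would be held for human review instead of
    # flowing straight into the ads systems.
    assert looks_like_health_data("R_PREGNANCY_WEEK_CHOSEN")
    assert not looks_like_health_data("APP_OPENED")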
> Facebook did not make a conscious decision to process this data. It just did.
What everyone else is saying is that what they did is illegal, and they did it automatically, which is worse. The system you're describing was, in fact, built to do exactly that. They are advertising to people based on the honor system: whoever submits the data pinky-promises it was consensual. That's absurd.
If FB is going to use the data, then it should have the responsibility to check whether it can legally use it. Having their supplier say "It's not sensitive health data, bro, and if it is, it's consented. Trust us" should not be enough.
To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.
>To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.
AFAIK that's only because of mandatory scanning laws for CSAM, which were only enacted recently. There are no such obligations for other sensitive data.
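For context on what that scanning looks like mechanically: it's mostly hash matching of uploads against a database of known material. A toy sketch below; real deployments use perceptual hashes such as Microsoft's PhotoDNA, which survive resizing and re-encoding, not the exact-match SHA-256 stand-in here.

    # Toy illustration of upload scanning by hash matching. Real systems
    # use perceptual hashes (e.g. PhotoDNA); SHA-256 is a stand-in that
    # only catches byte-identical copies.
    import hashlib

    # In practice this would be a vetted external hash database,
    # not an in-memory set.
    KNOWN_BAD_HASHES: set = set()

    def handle_upload(upload: bytes) -> str:
        digest = hashlib.sha256(upload).hexdigest()
        if digest in KNOWN_BAD_HASHES:
            # Don't rely on the uploader's "trust me bro".
            return "blocked-and-reported"
        return "published"

    print(handle_upload(b"holiday photo"))  # -> "published"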
For some crimes, actus reus is what matters. For example, if you're handling stolen goods (in the US), the state can seize those goods and any gains from them, even if you had no idea they were stolen.
Tech companies try to absolve themselves of mens rea by making sure no one says anything via email or any other documented process that could later be used in discovery. "If you don't admit your product could be used for wrongdoing, then it can't!"
Yup, fair. I tried to acknowledge that in my paragraph about KYC in a follow-up edit to one of my earlier comments, but I agree, the language I've been using has been intentionally quite strong, and sometimes misleadingly so (I tend to communicate using strong contrasts between opposites as a way to ensure clarity in my arguments, but reality inevitably lands somewhere in the middle).
I understand where you're coming from, but "more forward-thinking than a bank" should not be the aspiration for the organization primarily responsible for the cybersecurity of the United States government. This is not a good look for CISA.
You're going to have to be more specific than "this is not a good look". It looks pretty reasonable to me, given CISA's remit. Which part do you have a problem with, the limited role CISA has to motivate and guide security adoption inside government agencies, or the specific recommendations and metrics they're managing?
This "strategic plan" is devoid of any meaningful, measurable metric. The language throughout this document is carefully crafted to appear measurable at the surface, but meticulously written to be able to accomplish one thing after X number of years: stand infront of a podium and declare that the metric has been achieved.
Example: "Help organizations safely use AI to
advance cybersecurity."
How do you measure this? What does this even mean? What does success look like if this is achieved?
I extracted all the metrics from the document and put them in a comment downthread. They look pretty reasonable to me. I'm sure every security team in America has some dumb metric about AI somewhere, but AI stuff is like 5% of the whole plan.
I think you're missing the point of my comment: the point is not that there is a meaningless metric about AI, it's that it is -not a measurable metric- by any stretch of the imagination.
An easy way to tell whether a number is associated with RoboKiller is to check whether it uses this LLC as its carrier: "NATIVE AMERICAN TELECOM - PINE RIDGE LLC"
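If you want to automate that check, a carrier lookup does it. A sketch using Twilio's Lookup API (v1, carrier add-on); you'd need your own Twilio credentials, and the exact carrier-name string returned may differ from the substring matched here:

    # Sketch: flag numbers whose carrier is RoboKiller's LLC via Twilio's
    # Lookup API (v1). Requires Twilio credentials in the environment; the
    # carrier-name string Twilio returns may vary, so match loosely.
    import os
    import requests

    TWILIO_SID = os.environ["TWILIO_ACCOUNT_SID"]
    TWILIO_TOKEN = os.environ["TWILIO_AUTH_TOKEN"]

    def carrier_name(number_e164: str) -> str:
        resp = requests.get(
            f"https://lookups.twilio.com/v1/PhoneNumbers/{number_e164}",
            params={"Type": "carrier"},
            auth=(TWILIO_SID, TWILIO_TOKEN),
        )
        resp.raise_for_status()
        return (resp.json().get("carrier") or {}).get("name") or ""

    def is_robokiller_number(number_e164: str) -> bool:
        return "NATIVE AMERICAN TELECOM" in carrier_name(number_e164).upper()

    print(is_robokiller_number("+15555550123"))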