I'm sure had you omitted it - instead of that reply there would have been a series of comments talking about how Microsoft actually has a track record of doing things like this. It's impossible to please everyone on the internet but I very much appreciate when people lean towards making their communication clearer.
I'd rather the symbol be there and occasionally see this discussion happen than have the symbol omitted and occasionally end up in the discussion where we try to figure out whether the person was serious. When talking in person there are all sorts of visual and vocal cues, and the speaker gets cues in response to confirm the sarcasm was received. There are two parties that can correct that misunderstanding, and they have well-established tools to do so.
/s is basically the internet-enabled equivalent of a sarcasm tone or a wink - it is much more difficult to detect genuine subtle sarcasm on the internet because of the absence of common communication tools. /s is also a valuable accessibility tool for those that might have difficulty with social cues and subtlety so, for all my autistic friends, I'm happy to defend it.
The only setting I'm seeing is on a per-user basis. Does anyone know how to blanket disable training on an organizational basis?
Is there any information about how much information from an organization managed repo may be trained on if an individual user has this flag enabled? Will one leaky account cause all of our source code to be considered fair game?
The initial title and your reply are both too broad to be fully accurate. By April 24th GitHub will train on private repos (assuming a flag isn't set), but this change is limited to non-Business/Pro users. So a number of private repos will be affected, but it won't automatically affect all private repos (so my panicked check on our corporate account wasn't necessary yet).
I'm not certain whether you're a spokesperson for GitHub, but it's good to be careful in your language. Instead of "No we won't", a lead like "That isn't entirely accurate" would be more suitable. In the end, both the original post title and your reply have ended up being misleading.
> By April 24th GitHub will train on private repos
This statement itself is misleading. Also, GitHub probably should have seen this coming.
They are not doing what I initially thought, which is slurping up your private repo, wholesale, into its training set. You don't have to opt out of anything to prevent that.
They are slurping any context and input containing code from your private repo which is provided to them as part of using Copilot.
So, in addition to the opt-out setting, there is an even easier way to avoid providing them your private repository data to train AI models, and that's by continuing to not use Copilot.
Probably extremely ineffective. It's an issue of scale: unless you really automate the terrible code generation, and somehow make it distinct enough in style that it isn't easy to detect and eliminate wholesale, you just won't have the volume to significantly affect the result set.
I'm absolutely sure that there are state actors with gigantic budgets that are putting a lot of effort into similar attacks, though.
> It feels like you can spin this idea for nearly anything. Apparently 25% of alcohol sales are to alcoholics.
I'd like to propose not letting the perfect be the enemy of the good. I accept this argument about gambling might be slippery-slope-able but I think it's pretty obvious to everyone without a vested interest that it's causing extreme societal harm.
Would you be open to banning just this one thing, then calling it a day and opening the floor back up to such arguments? I think modern politics is too caught up in the bureaucracies of maybe to let good ideas be carried out - honestly, this line of thought could easily be written up into an argument that parallels strong-towns. Local bureaucracy is rarely created for a downright malicious reason. Here we have a change that could produce an outsized positive outcome, so why should we get caught up in philosophical debates about how similar decisions might be less positive and let that cast doubt on our original problem?
> I'd like to propose not letting the perfect be the enemy of the good. I accept this argument about gambling might be slippery-slope-able but I think it's pretty obvious to everyone without a vested interest that it's causing extreme societal harm.
I am pretty sure anyone without a vested interest will also realize that alcoholism has caused extreme societal harm as well. I would say with pretty strong certainty that alcohol has caused more damage, and is currently causing more damage, than gambling. I would be VERY curious to hear someone try to make an argument that more damage is caused by gambling than drinking. Drunk driving kills about 13,000 people in the US every year. Drunk driving accounts for 30% of all traffic fatalities. THIRTY PERCENT! I am sure we all know alcoholics, and so many people have been abused by angry drunks. The raging abusive alcoholic parent is a trope for a reason.
So clearly, we should not get too 'caught up in the bureaucracies of maybe' and should go ahead and ban just this one thing. Surely banning alcohol will make the world a better place!
Well, we tried that. It was a horrible failure. It led to the rise of organized crime, a fact that is STILL harming us to this day, almost 100 years after we reversed the decision to ban alcohol.
In fact, when we re-legalized alcohol, a lot of the organized crime moved into gambling, and used the fact that it was illegal to fund crime for decades.
I also hate how sports gambling and now prop gambling have taken over. I don't think we should just sit here and do nothing, but there are a lot of things we can do short of an outright ban, which I think is bad for a lot of reasons.
We should outlaw gambling advertising, just like we did with tobacco. I am fine with adding other restrictions, and placing more responsibility to identify and protect problem gamblers onto the gambling companies. I am open to hearing other ideas, too.
My biggest problem with your comment is the idea that we should stop thinking about the consequences of an outright ban and just go ahead and ban it now. This isn't a 'philosophical debate', it is trying to make sure your action doesn't cause more harm than good. I think looking at other vices, seeing how we deal with those and what has happened when we have tried things like banning in the past, to inform us about how we can mitigate the harm gambling does to our society is a good thing.
Corporate liability isolation has become absurd. People who make decisions that harm people should be held to account for those decisions even if they structured their decision making apparatus in a legal way that makes it look like they're just following the orders of the shareholders.
Zuckerberg has a brain, he decided to take this action, it is absurd he is not being hit with a personal penalty.
The legal system has two goals - to compensate individuals harmed and to discourage further violations of the law. This lawsuit seems to have fulfilled the first goal but fell flat on its face when it comes to punitive damages.
I think there's an axis of perceived wrongdoing here, and you and I fall on different points along it. Yours is more extreme: you say Meta was doing broad harm by exploring this activity, and you want to see greater damages to scare other businesses away from the general territory of addictive interfaces. Mine is that we want businesses to continue to explore and develop 'sticky', compelling user experiences, but Meta went too deep in some specific ways.
EDIT: I see I'm mixing up the New Mexico case yesterday on sexploitation with the addiction case in Los Angeles I thought we were talking about here.
To start off with my personal beliefs... I agree - I see a much broader harm in how platforms try to make themselves addictive, as I've worked on such systems in the past. I think the public, and even most technical folks who aren't deep into engagement metrics, underestimate how studied the field is and how many iterations of approaches to daily engagement reminders, friction removal, and FOMO have been worked through to get to the point we're at today. In my opinion, which absolutely isn't fact, this work is broadly unproductive at improving our daily lives. I can understand that there are some compelling counterarguments that these developments can be harnessed for good, but I don't share them.
But, specific to this article and ignoring my personal beliefs - I still find this judgement to be severely lacking. I don't think this judgement is nearly noticeable enough to Meta to actually provide a significant impact on the way they do business outside of tidying up some specifically egregious corners and making sure they internally communicate moving forward in a way that appears to comply with the judgement. The judgement was enough when applied to this pool of users to make these specific users unprofitable in retrospect (e.g. Meta would have more money if it had refused to even do business with these users) but I'm also concerned that the pool of considered victims was so narrow that it excluded a significant number of similarly harmed victims and that the amortized damages end up being negligible.
I guess we have deep deep divisions on what everyone is doing in society, and what makes a 'good' society.
As I've aged, I've entered new-to-me territory where a good society needs to reflect the world as it is, so that its members have high survivability.
At the local family level, for instance: when my kids were young, I had dreams of being super financially successful so that I could give them lots of nice things. I just don't want that for them anymore. Protection and pandering do not make a good lineage, IMO. It's something of a leap I'm asking of you to connect this to my position here on Meta, but I've got other work to do, and I hope it's enough to convey my point.
> When my kids were young, I had dreams of being super financially successful so that I could give them lots of nice things. I just don't want that for them anymore.
That is a decision you had the freedom to make for yourself and your family. In this case, the millions of children didn't get to make that choice, and Meta knowingly exploited that. I hope you see our point of view as to why Meta doesn't get the benefit of the doubt here.
This was about Meta's platforms not doing enough to protect children from sexual material (and allegedly ignoring employee warnings and lying to the public about it), not intrinsically their addictive interface and compelling user experience. I suppose the actions necessary to protect children from exposure to sexual material/exploitation could limit their ability to make certain changes to their platform, eg tighter moderation would reduce the amount of content that could be uploaded, but they could also have just not allowed children on the platform (like how Facebook started) and then not worried about child exploitation?
Similar to prompting hacks to produce better results. If the machine we built to take dumb input and transform it into an answer needs special structuring around that input, then it's not doing a good job of taking dumb input.
Yeah, I think it is. It's printable if you want a hard copy, and it's up to you when to check for a new version. Since it's auto-updated (ideally), no matter when you visit the site you'll get the most up-to-date version as of that day. The issues (which I don't think this suffers from) would arise if formatting it nicely for printing made it less accurate, or if updating it regularly made it worse for printing - but these feel like two problems you can generally solve with one fix; they aren't opposed.