I don’t know why people feel the need for such revisionism, but AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.
> AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.
When I was 13, having just started programming, I picked up a book on Artificial Intelligence from a "junk bin" at a book store. It must have been from the mid-80s, if not older.
It had an entire chapter on syllogisms[1] and how to implement a program to spit them out based on user input. As I recall, it basically amounted to some string extraction (assuming the user followed a template) and string concatenation to generate the result, along the lines of the sketch below. I distinctly recall not being impressed that such a trivial thing was part of a book on AI.
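For flavor, here's a toy reconstruction of that kind of program, from memory and in modern Python rather than whatever the book used; the fixed templates and all the names are mine, not the book's:

    # Toy sketch: assume the user follows the templates
    # "All X are Y" and "Z is a X", then extract and concatenate.
    def syllogism(major: str, minor: str) -> str:
        # "All men are mortal" -> predicate "mortal"
        predicate = major.split(maxsplit=3)[3]
        # "Socrates is a man" -> subject "Socrates"
        subject = minor.split()[0]
        return f"Therefore, {subject} is {predicate}."

    print(syllogism("All men are mortal", "Socrates is a man"))
    # -> Therefore, Socrates is mortal.

No inference, no knowledge representation; the "AI" is entirely in the string handling, which is exactly why it underwhelmed me.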
Dailymotion, Google Video, sevenload, and even the German TV stations RTL and Pro7 (with Clipfish and MyVideo, respectively) all competed with YouTube. YouTube happens to be the only one that survived on Google's ad model; the others very quickly realized that paid premium content is much easier to handle (copyright, CSAM) and monetize.
While aviation is the origin of UX design, I'm uncertain whether modern cockpit design is born out of UX or out of a resistance to change. For example, for fuel-efficient takeoffs, you need to go in and override the ambient temperature and air pressure sensors and calculate what an efficient fuel mix would be yourself.
Whatever the reason may be, the fact that pilots regularly engage in rather complicated and abstruse workarounds shows that cockpit design shouldn't be taken as the holy grail of UX.
Incidentally, I also wonder whether the many checklists pilots need to go through before the plane does anything are strictly necessary. It seems like automating those steps and removing the associated buttons could reduce cognitive load and prevent operator error (such as happened with the Air India crash last year).
Most licenses, EULAs, contracts, and so on don't have much precedent in court. There's no reason to believe that the GPL would fold once subjected to sufficiently crafty lawyers.
Only if the code you copy-pasted the LGPL part into is licensed under a compatible license, and Apache is not.
The simplest way to comply while keeping your incompatible license is to isolate the LGPL part into a dynamic library. There are other ways, but that is by far the most common.
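To illustrate (a rough sketch, not legal advice): the LGPL part lives in its own shared object that end users can rebuild and swap out, and your differently-licensed code only calls across the dynamic-linking boundary. Here via Python's ctypes; the library liblgpl_math.so and its add function are hypothetical:

    import ctypes

    # Hypothetical LGPL-licensed library, shipped as a separate .so/.dll
    # so users can relink/replace it with their own build.
    lib = ctypes.CDLL("./liblgpl_math.so")
    lib.add.argtypes = (ctypes.c_int, ctypes.c_int)
    lib.add.restype = ctypes.c_int

    # Your own code never incorporates the LGPL source; it only calls
    # the library through the dynamic loader.
    print(lib.add(2, 3))  # 5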
Copy/pasting, or using some other mechanism of digital duplication, is irrelevant. The problem is the removal of the existing license and essentially _re-licensing_ without authority, no matter what the mechanism of including the code is.
I mean, torrenting is decentralised and not technically takedownable. But it was entirely possible to make it legally painful for the people involved, as seen with, e.g., The Pirate Bay, Megaupload, or the entire cease-and-desist-letter industry around individual torrenting users.
Intentional noncompliance with copyright law can get you quite a distance, but there's a lot of money involved, so if you ever catch the wrong kind of attention, usually by being too successful, you tend to get smacked.
> They just surveyed some college students and drew conclusions by running statistical analyses on the data until they got something that seemed significant.
Is this just cynicism, or is it based on anything? From reading the methods section, it doesn't appear that this is what happened:
> We used a mixed methods approach. First, qualitative data were collected through 41 exploratory, in-depth interviews (women: n=19, 46.3%; men: n=21, 51.2%; prefer not to disclose sex: n=11, 2.4%; mean age 22.51, SD 1.52 years) with university students who had experience playing Super Mario Bros. or Yoshi. Second, quantitative data were collected in a cross-sectional survey…
So interviews with a biased sample (students with experience playing the game) and then a survey.
Also, try adding up those n= numbers. They don’t sum to 41. The abstract can’t even get basic math or proofreading right.
If the body of the paper describes something different from the abstract, that's another problem.
EDIT: Yes, I know the n=11 was supposed to be an n=1. Having a glaring and easily caught error in the abstract is not a good signal for the quality of a paper. This is on the level of an undergraduate paper-writing exercise, not a scientific study as people are assuming.
Seems like n=11 should have been n=1: use 19, 21, and 1 as numerators over 41 and you end up with exactly the percentages written in the abstract (quick check below). A typo that should have been caught, but surely nothing more than that and certainly not substantive enough to qualify the claim below:
> This paper is very bad. The numbers in the abstract don’t even add up, which any reviewer should have caught.
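The quick check, assuming every count in the abstract is out of the stated n=41:

    for n in (19, 21, 11, 1):
        print(f"{n}/41 = {n/41:.1%}")
    # 19/41 = 46.3%  (matches "women" in the abstract)
    # 21/41 = 51.2%  (matches "men")
    # 11/41 = 26.8%  (does not match the reported 2.4%)
    # 1/41 = 2.4%    (matches, so n=11 is almost certainly a typo for n=1)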
> A typo that should have been caught, but surely nothing more than that and certainly not substantive enough to qualify the claim below:
Such an obvious error should have been caught by the authors proofreading their own work, to be honest. Any reviewer would also catch it when evaluating the quality of the sample size.
I find it strange that people are bending over backward to defend this paper and its obvious flaws and limitations.
It does seem to be cynicism: they're convinced the authors "gave people surveys with a lot of questions and then tried to find correlations in the data", but nothing indicates they did more than the 9 questions (plus one more for sex as a control) the paper includes, restricted to Mario/Yoshi players only. Ten questions is pretty short.
Do you not see the problem with drawing conclusions from a sample set that pre-selects for Mario/Yoshi players?
How do you think they’re determining that playing Mario/Yoshi prevents burnout if they only surveyed Mario/Yoshi players?
I really don’t understand all of the push to support this paper and disregard critiques as cynicism. The paper is not a serious study, or even a well-written paper. Is it a contrarian reflex to deny any observations about a paper that don’t feel positive or agreeable enough?
I've critiqued it plenty in other comments, including that exact issue. However, that doesn't mean they "gave people surveys with a lot of questions" to p-hack; it seems like a study designed (albeit not well designed) to test one specific hypothesis. I see no reason to doubt that they followed the methods described in the paper, which were built to test this very specific thing (they didn't even test "childlike wonder" in general, just self-reported Mario-induced childlike wonder); their conclusions just aren't supported by their data. If they were p-hacking, as you accuse them of, why not have more questions? Why not survey non-Mario players too, so there's a new variable to create significant results out of a null?
There is no server in Moscow, and I don't think there ever was. Muse Group left their original office in Kaliningrad for Cyprus pretty much the second the war started, and at this point has no offices or employees left in Russia. The servers have always been bog-standard cloud things: Cloudflare, DigitalOcean, AWS via Netlify, and such.
Not good to hear they're based in Moscow, but that ship has presumably already sailed and sunk if you're running the auto-update code in an existing Audacity installation.
What other concerns besides national origin exist with this code? Nothing seems to qualify as a "back door," certainly.
Set the system language and timezone, and the IP and originating ASN, to regions where APT28/APT29 are running active malware campaigns, and see whether you receive a sample. Pretty simple.
The real question is whether they have changed their C2 behaviors since Valentine's Day 2023, and whether the AstraL1nvx botnet operator images are still publicly available.
5 years ago we would've called it a Machine Learning algorithm. 5 years before that, a Big Data algorithm.