
I'd wager that's more likely due to Windows than the hardware. Sure, the hardware plays a part, but it's not the whole story or even most of it.

My C++ projects have a Python-heavy build system attached, and the main script that prepares everything and kicks off the build takes significantly longer to run on Windows than on Linux on the same hardware.
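A minimal sketch of how you might measure that per-file overhead yourself; the function name, file counts, and paths here are arbitrary, and the absolute numbers will of course vary by machine:

```python
import os
import shutil
import tempfile
import time

def time_small_file_ops(n=2000):
    """Create, stat, and delete n tiny files; return elapsed seconds.

    A rough proxy for the per-file cost a build script pays on each
    platform. Run the same script on Windows and Linux to compare.
    """
    root = tempfile.mkdtemp(prefix="fsbench-")
    start = time.perf_counter()
    for i in range(n):
        path = os.path.join(root, f"f{i}.tmp")
        with open(path, "w") as fh:
            fh.write("x")
        os.stat(path)
    shutil.rmtree(root)  # recursive delete is part of the workload too
    return time.perf_counter() - start

# Example: print(f"{time_small_file_ops():.3f}s for 2000 create/stat/delete ops")
```

On NT, each of those opens and stats passes through the filter driver stack discussed below, which is where much of the gap tends to come from.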




Afaik a lot of it is NTFS. It's just so slow with lots of small files. Compare unzipping a moderately large source repo on Windows vs. POSIX; it's night and day.
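The unzip comparison is easy to reproduce in a platform-neutral way. This is a hypothetical microbenchmark (member names and counts are made up) that builds an archive of many tiny files in memory and times only the extraction step:

```python
import io
import os
import tempfile
import time
import zipfile

def time_unzip_small_files(n=1000):
    # Build an in-memory archive of n one-byte members...
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for i in range(n):
            zf.writestr(f"src/file{i}.c", "x")
    # ...then time extracting it to a temp directory, so the
    # measurement isolates filesystem writes, not decompression.
    dest = tempfile.mkdtemp(prefix="unzip-")
    buf.seek(0)
    start = time.perf_counter()
    with zipfile.ZipFile(buf) as zf:
        zf.extractall(dest)
    return time.perf_counter() - start, dest
```

Running the same script on Windows and on a POSIX system keeps the archive contents identical, so any difference is down to the OS and filesystem.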

No, it’s not NTFS, it’s the file system filter architecture of the NT kernel.

I had internalised that it was Windows Defender hooking every file operation and checking it against a blacklist? I've had it forced off for years.

Windows Defender is a file system filter which you cannot disable. You may have others (but they're fortunately rare, now).

All that said, you cannot disable the architecture, i.e. bypass the file system filter code.


You can with Dev Drives now apparently, which don't use NTFS and disable ALL the filter drivers (including the Defender one)

I stopped using Windows just as these were added, so now I'm curious whether there's any actual performance benefit to using them.


No, they don't disable the Windows Defender filter, they put it in async mode.

This guy gets it. Yes, bingo. It's the VFS's filter/ACL support, afaik.

Just deleting 40,000 files from the node_modules of a modest Javascript project can thoroughly hammer NTFS.

I think part of that is Explorer, rather than NTFS. Try doing it from the console instead: rd /q /s <dir>.

It still takes a lot longer than Linux or Mac OS X.
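The bulk-delete case is also simple to measure in isolation. A sketch, with an arbitrary file count and layout standing in for a node_modules tree:

```python
import os
import shutil
import tempfile
import time

def time_bulk_delete(n=5000):
    """Lay out a shallow tree of n empty files, then time the
    recursive delete. Mimics clearing out a node_modules directory,
    minus any per-file content."""
    root = tempfile.mkdtemp(prefix="deltest-")
    for i in range(n):
        d = os.path.join(root, f"pkg{i % 50}")
        os.makedirs(d, exist_ok=True)
        open(os.path.join(d, f"f{i}.js"), "w").close()
    start = time.perf_counter()
    shutil.rmtree(root)
    return time.perf_counter() - start
```

Because this bypasses Explorer entirely, it separates the shell's overhead from the filesystem's, which is the distinction the comment above is making.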

NTFS is definitely slower to modify file system structures than ext4.

A big part of it is that NT has to check with the security manager service every time it does a file operation.

The original WSL, for instance, was a very NT answer to the problem of Linux compatibility: NT already had a personality that looked like Windows 95, so just make one that looks like Linux. It worked great, with the exception of slow file operations, which I think was seen as a crisis in Redmond because many software developers couldn't or wouldn't use WSL when the slow file operations affected their build systems. So we got the rather ugly WSL2, which uses a real Linux filesystem, so files perform like files on Linux.


I don't know about ugly. Virtualization seems like a more elegant solution to the problem, as I see it. Though it also makes WSL pointless; I don't get why people use it instead of just using Hyper-V.

Honestly, just because it's easier if you've never done any kind of container or virtual OS stuff before. It comes out of the box with Windows, it's like a three-click install, and it usually "just works". Most people just want to run Linux things and don't care too much about the rest of the process.


