Hacker News | new | past | comments | ask | show | jobs | submit | coldpie's comments | login

Clickbait title could use another pass. What is this about?

This was the title used when I came across the video. Apparently YouTube uses many different titles for A/B testing but this is the one I got. Can't edit it now, unfortunately.

EDIT: seems like dang or team took care of it, thanks!


It makes more sense when seen on YouTube, where the thumbnail shows one of M. C. Escher's famous drawings.

It's a drawing of a guy looking at a picture of a town with himself standing in the town, but it's all twirled and twisted so its self-repetition isn't obvious.


I clicked on the link and the video title is "Decoding Escher's most mind-bending piece", which is a lot better. I also had no idea what "3B1B video" meant, apparently it's a channel called "3Blue1Brown".

It's about examining the mathematical methods MC Escher used in one of his recursive drawings.

He probably didn't use these techniques explicitly: the video mentions, but doesn't emphasise, that he likely sketched out the map by feel instead of analytically, which is probably one reason why he didn't fill in the center.

> Examining the mathematical methods MC Escher used in one of his recursive drawings

This would be an excellent title :)


Depends how you define excellent. If the goal is to get more views then it's not all that great, and views are kind of the point of YouTube for many, especially if they are trying to make a living from it.

That's great for YouTube, but HN has some guidelines:

> please use the original title, unless it is misleading or linkbait


    1974: Intel 8080
    1978: Intel 8086
    1982: Intel 80286
    1985: Intel 80386
    1990: Intel 8010386
    1995: Intel 801040386
    2005: Intel 80107045386
    2025: Intel 8.010207659386e12

> What is it with people getting bitchy the moment a project starts asking for donations?

Being angry is easy and fun, and writing angry, misleading articles gets ad views.


> It is interesting that these are harder to get working than games.

Games are mostly just doing their own thing, only interacting with the system for input & output. MS Office uses every corner of Windows: every feature in the XML libraries, tons of .NET-type stuff, all the OLE, COM, typelib, and compound-storage features, tons of Explorer integrations, auto-updating via the Windows patching mechanisms... there's almost no part of the Windows OS that MS Office doesn't touch.


Yeah, people forget that MS Office, and Excel and Outlook in particular, are the real foundation of Microsoft's vendor lock-in on the desktop.

Outlook is now basically an Electron app; they've deprecated the old desktop Outlook in favor of a port of the web app to desktop, so really it's just Excel remaining.

Yeah, that's because Microsoft can see the writing on the wall. They don't want Windows to die, but they know the whole OS is at a point where that's probably inevitable.

Developers don't want to use Windows anymore. They all want to run Linux because servers do. Ballmer was right about one thing: It was about the developers.

Microsoft can't compete with Chrome at the K-12 level. A Chromebook is a fraction of the cost at twice the runtime, so nobody is going to learn Windows growing up. There won't be a generation of new ready-trained Microsoft consumers every year.

And the average consumer? Oh, they're running an iPhone and maybe an iPad, and that's it. If Apple were really smart they'd have released an iPhone screencast dock, but Apple still thinks the iPhone doesn't need multiple user profiles. Even with Apple's stupid behavior, though, Microsoft is losing its core consumer audience.

Valve is tired of Microsoft, too, so they're pushing for compatibility. Video games are either cross-platform, console exclusive, or easy enough to emulate. If Nvidia's graphics drivers weren't so proprietary, it wouldn't be nearly as difficult.

The big holdouts are the same people that kept COBOL a live programming language in the 21st century: The business office folks.

Microsoft has missed the boat on smartphones, tablets, budget laptops, smart TVs, video game consoles (which is a little surprising), server-side infrastructure, development, and now AI. Their market prospects right now are Millennials and older who don't want change, people who need exactly Excel or Outlook, and PC gamers who aren't interested in change. Their best product is VS Code and it's free, their second-best product (SQL Server) is pricing people out, and their third-best product (.NET) is also free.

At this point I think they're mainly hoping Adobe doesn't jump ship.


> Developers don't want to use Windows anymore.

I mean, did they ever?

I've been programming only since ~2010, but I've only ever seen the majority prefer Macs for the hardware (the late Intel Macs being the exception) and Linux on regular PCs.

With the exception of game devs, I've not seen a person who _happily_ defaults to Windows, rather than doing so because of company policy or because the company is too cheap for an Apple device.


Yes, developers used to like Microsoft. That was where all the money was, and Visual Studio was an extremely good IDE in the late 90s and early 2000s. At the time, Microsoft's documentation was the best. C++, VB, and then .NET development combined with SQL Server (then a budget option) was a very enticing stack. Using ASP instead of Perl, ColdFusion, or PHP was also attractive.

At the time Mac was still largely dominated by PowerPC and Classic OS. And Linux was still seen as an OS for hobbyists and universities. It was not taken seriously until well into the 00s and the 2.4 kernel. Sun was struggling with Java, and the unices were well into their decline from the 80s.

I would say the turning point was how much better Apache was than IIS when it came to operational and security issues.


> Outlook is now basically an Electron app

And it's horrible.


Electron apps also basically don’t work in Wine. I miss having Evernote on Linux.

Wouldn't it be possible to extract the files you need and sort of "repackage" it for Linux?

I have no idea how Electron apps look "internally", but it doesn't sound too bad.

Sort of like how you can unzip .deb files and use them somewhere else, if what I heard was correct (never tried it myself)


If the Electron app is pure JS with no native extensions, it can be doable. However, many Electron apps contain platform-specific JS code, since features like the Dock on Mac and taskbar icons on Windows differ. Electron apps like Notion also contain native extensions: compiled C/C++/Objective-C code that is platform specific. For example, in Notion we use SQLite via better-sqlite3 (potentially replaceable since it's open source, but it would need more work than "just" repackaging the JS), and we also write our own native support libraries to use OS-specific APIs for microphone recording in the meeting notes feature.
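The two obstacles above can be sketched in a few lines of Node-style JS (module and feature names here are hypothetical illustrations, not Notion's actual code):

```javascript
// Sketch of the two things that break a naive "unzip the asar and run it
// on Linux" port of an Electron app. All names are hypothetical examples.

// 1. Platform-branched JS: code paths keyed on process.platform.
function nativeFeaturesFor(platform) {
  const features = {
    darwin: ['dock-badge', 'media-access-status'], // backed by macOS-only code
    win32: ['taskbar-progress', 'toast-notifications'],
    linux: [], // a repackager must stub or reimplement what's missing
  };
  return features[platform] ?? [];
}

// 2. Native addons: .node binaries compiled per platform, so the copy
// shipped for Windows simply cannot load on Linux.
function loadNativeAddon(name) {
  try {
    return require(name); // e.g. a module like better-sqlite3
  } catch {
    return null; // missing or wrong-platform binary
  }
}
```

A pure-JS app sails through both checks; anything with per-OS features or compiled addons needs real porting work, not just repackaging.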

Thank you for the thorough reply :)

I really like the comments on HN; they are very often genuinely interesting, and you often learn something new that actually matters.


I have seen many people porting such apps; there are AUR packages for Notion and Figma, for example.

It's cool that Arch users give porting Notion a valiant effort. But as I predicted, custom native extensions will be [a problem](https://aur.archlinux.org/packages/notion-app-electron#comme...):

> Could it be possible to make the relatively new AI meeting notes feature to work?

> Right now I get the following error when I click the "start transcribing" button:

    Error occurred in handler for 'notion:get-media-access-status': TypeError: s.systemPreferences.getMediaAccessStatus is not a function
        at /usr/lib/notion-app/app.asar/.webpack/main/index.js:2:631015
        at WebContents.<anonymous> (node:electron/js2c/browser_init:2:87444)
        at WebContents.emit (node:events:524:28)
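That error is the platform-specific-API problem in miniature: per the Electron docs, `systemPreferences.getMediaAccessStatus` only exists on macOS and Windows. A sketch of the guard pattern that would degrade gracefully instead of throwing (hypothetical code, not Notion's actual implementation):

```javascript
// Feature-detect a platform-specific Electron API instead of calling it
// unconditionally. systemPreferences is passed in as a parameter so the
// pattern is testable outside Electron itself.
function mediaAccessStatus(systemPreferences, mediaType) {
  if (systemPreferences &&
      typeof systemPreferences.getMediaAccessStatus === 'function') {
    return systemPreferences.getMediaAccessStatus(mediaType);
  }
  return 'unknown'; // API absent (e.g. on Linux): report unknown, don't crash
}
```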

Not my area of expertise, so I could be wrong, but Electron apps just use Chromium underneath (which already works on Linux), so in theory it should be easier to get them running on Linux than a native Windows app.

How is that even possible? I'd expect you'd have to go out of your way to make it platform dependent. Are you sure?

Electron is basically just a GUI framework. The application itself can be arbitrarily complicated; nothing stops you from building a Java + .NET + C++/COM app that includes three Windows services and interfaces with the Electron runtime just for the UI.

Having worked in non-SWE enterprise for two decades, I would argue that this is less true today than it was 10 years ago. It used to be that new hires would come with a basic knowledge of Windows and Office, but that's no longer the case. At the same time, you have things like Smartsheet, which are more popular, at least with our employees, than Excel, and everyone seems to hate Outlook these days. I don't think it was ever really the case, though. What Microsoft sells to enterprise is governance, and they really don't have any competition in this area.

Being in the European energy sector, we're naturally looking into how we can replace every US tech product with an EU/FOSS one. It's actually relatively easy to buy the 365 experience through consultants who will set up Nextcloud, Libre/OnlyOffice, Proton, and a Teams replacement whose name I can't for the life of me remember. Beneath it there is a mix of identity-management systems, often based around Keycloak, at least for now. It works, and from what we've seen in Germany (specifically with their military) it's also possible to roll it out relatively quickly. It's all the "other" stuff that gets murky. There isn't a real alternative to AD/Entra yet, from a governance perspective. There are great tech solutions which do the same thing, but they require a lot of IT man-hours, something the public sector is always going to be more willing to deal with than the private sector. If we collectively decided that trains in Denmark should be free for passengers, that would happen. You can't do that in a private business, though security obviously does factor into it.

This is the general story, really. Microsoft's Copilot Studio is relatively new, and it's probably been flying under the radar in a lot of tech circles because it's basically what Power Automate always wished it could be. I've used it to build an HR flow where an AI model will receive the applications, read them, auto-reply to irrelevant ones, create a Teams site with the files and the relevant people for the relevant applications, and invite the applicant to their first appointment. Well... I gotta say that I'm not sure what we have that's an alternative to that. It took me a couple of hours to build, and it frankly works better than I thought it would. Granted, I did know the tool, because I had previously done a PoC where I built a Teams agent which "took over" my Teams interactions. Everyone noticed, because it spelled correctly and wasn't capable of posting Warhammer 40k ork memes of any quality, but it was frighteningly easy. What Microsoft sells in this area is again the governance of it all. You can do these things because of how Entra ID lets you connect services seamlessly with a few clicks, while behind the scenes all of those clicks are only available to you because your IT department controls them... again, without hundreds of man-hours.

I'm sure we'll eventually get there, but it'll likely come down to change management. Because even if you're willing to retrain your IT operations crew, it's not likely that they will want to leave the Microsoft world, where they are well paid and job-secure. Well, maybe I'm in a bubble, but I've never met an Azure/Microsoft IT person who would want to work with something else, and having been forced to work a little with it behind the scenes, I sort of get it... well, not really.

Which boils down to why Microsoft has always been good with enterprise customers. The decision makers in your organisation will listen to everything, but their own IT departments will often sort of automatically recommend Microsoft products, and at the end of the day it all comes down to risk. Which is what Microsoft really sells: risk mitigation. Sure, their licenses are expensive, but is that really more expensive than losing your entire IT staff? (This isn't an actual question I'm asking; it's what goes into the considerations.)


> ...an AI model will receive the applications, read them, auto-reply to irrelevant ones...

You're probably breaking EU law by building this nightmare.

https://artificialintelligenceact.eu/article/86/


All that law says is that the applicant 'shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.'

And even then, only if a job application rejection 'produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights'.

So as long as the company is recording the decisions taken and the reasons for those decisions, and providing those to candidates on request, they're in the clear.

I doubt that they are, but maybe!


If they're using an LLM to make those decisions, then they're fundamentally unable to provide the reasons for those decisions, because of how LLMs work.

Not to mention you can't trust that the AI is actually filtering out applications properly. I've run into that myself when I was responsible for hiring at my last role. The AI solution my boss insisted we use was awful. It highly rated completely unqualified applicants and ignored the few good ones.

> Which is what Microsoft really sells... risk-mitigation. Sure their licenses are expensive, but is it really more expensive than losing your entire IT staff?

There's an old saying in IT that was pretty popular in the 70s and 80s: "Nobody ever got fired for buying IBM."

You'll notice that nobody says it anymore.


This probably reflects my own prejudices, but it always struck me that MS-based IT people wouldn't work with anything else, basically because they couldn't.

That stack optimises for not really having to understand what you're doing, while also avoiding any major footguns (and providing the general arse-covering that buying IBM used to, but which MS does now). The price you pay is that everything is horrible to work with. But if the alternative is not really being able to get anything done at all, then so be it?


The Windows ecosystem does a lot of things that, to me, as a Linux/MacOS user, seem like a weird bunch of crazy decisions that are different just because.

Whether that's true or not, it does mean that a lot of people who came up on Windows IT don't have a mental framework for how to run or manage Linux systems. Conversely, when I'm trying to diagnose something on Windows, it just seems like the entire thing is a disaster. Where are event logs? In the Event Viewer! How do I filter them? It's a mess! Can I search them? Kind of! Do they have information to help me diagnose the problem? Almost never!

On Linux, I know all the tools I need to solve all the problems that come up; on Windows, I have only minimal concept of how things work, and very little way to diagnose or debug them when they go wrong, which is often.

For example, when my Windows gaming machine comes out of hibernation my ethernet controller insists that there's no connection. I can't convince it otherwise except by disabling the device and re-enabling it. I can't figure out where I might find information that tells me why this is happening, so I just wrote a powershell script to turn it off and then on again. I bet some Windows IT dork could figure it out in 30 seconds, but I'm a Linux IT dork and I have no clue.


> For example, when my Windows gaming machine comes out of hibernation my ethernet controller insists that there's no connection. I can't convince it otherwise except by disabling the device and re-enabling it. I can't figure out where I might find information that tells me why this is happening, so I just wrote a powershell script to turn it off and then on again. I bet some Windows IT dork could figure it out in 30 seconds

Windows and Linux dork here (heh). It has to do with how various computer manufacturers implemented the Sleep/Standby State (S3/S4), how they've resisted implementing a common standard at the hardware level, and how Microsoft eventually gave up arguing and patched around it with their own Modern Standby system in the S0 state.

https://learn.microsoft.com/en-us/windows-hardware/design/de...

Tbh, though, the only computers I've ever seen hibernate work well on are Macs. Every x86 computer usually has some sort of issue with it, except maybe business laptop models (e.g. HP's EliteBook line).


> Tbh, though, the only computers I've ever seen hibernate work well on are Macs. Every x86 computer usually has some sort of issue with it, except maybe business laptop models (e.g. HP's EliteBook line).

This has always been my experience, going back I'd say at least to the early 2000s on cheap laptops, and all the way back to the earliest days of sleep and hibernate on desktops, where sleep just doesn't matter that much.

When I started dabbling in boot code around 2006, I read a bunch of the specs and one of them was ACPI, which I only scratched the surface of.

I think until then it had just not occurred to me that a modern paged, protected OS would even want to call into code supplied with the computer, versus having it come from a driver disk or be built into the kernel where everyone can see it.

The whole idea of a bytecode interpreter running random code supplied by a fly-by-night system builder is a little unsettling.


You're onto something, but that's not entirely true for all games. There are plenty of vintage games, made before DirectX standardized everything in the late 90s, that don't work well under Wine because back in their day they would bypass Windows by "hacking" their way to the hardware via unsupported APIs and hooks, to squeeze every bit of performance out of it, and also because every hardware vendor back then, from graphics to sound, shipped their own APIs.

You mean DOS games? Just run them under a DOS emulator then.

Oh, no, before everything kind of converged to OpenGL and DirectX, there were oodles of different things trying to be the next graphics API.

There are the more obvious ones like 3DFX/Glide, but there was also stuff like the Diamond Edge 3D, which used Sega Saturn style "quads".


NO, I meant Windows games.

90s Windows ran inside of DOS, and you can run e.g. Windows 98 games (through Windows itself) in DOSBox. Look up exowin9x, where they're trying to compile all of the necessary configs for one-click launchers.

I didn't think regular DOSBox had support for stuff like 3dfx, does it? Or other weird APIs?

I had to use PCem to get support for that stuff.


I tried running The Elder Scrolls: Redguard on Wine, which launches the Windows version of DOSBox with Glide support. Redguard is a weird beast: it installs only via a Windows installer, but the actual game runs in DOS mode.

Everything works, but the frame rate isn't great.

If anyone knows a good Redguard setup for Linux, please mail me; you can guess my mail easily. For now I just run the GOG version.


I've had some success installing Windows in DOSBox-X, which has Glide support. It's faster and more lightweight than PCem/86Box.

Then you can use DOSBox-X, which can run any non-NT Windows version and has support for 3D acceleration.

Again, I meant windows games, not DOS games.

Windows before NT is just a DOS app, and DOSBox(-X) can play all Windows games up to those that require ME.


Office used to work well on WINE. It was the switch to a rolling release model that killed it.

For games, part of that mere "output" is 3D graphics, so replicating the internals of Direct3D exactly right and getting the Linux GPU drivers to cooperate. That's a hardcore task.

Fun fact: MS Office also uses Direct3D :) See "Graphics" requirement here: https://support.microsoft.com/en-us/office/system-requiremen... We put a ton of effort into D3D11 specifically to get MS Office running.

So that's what's keeping Microsoft from just running WINE on an MS-flavored Linux or perhaps a clean slate kernel as their next OS. I've been wondering for a while, this is by far the best explanation.

The Windows kernel (and arguably the Windows APIs) is the only good part of Windows; they should dump everything else and run Linux on top of it. Wait, they did do that, and then changed it to a boring VM.

If you rip Linux out of your Linux distribution, you usually end up with GNU.

So, that would make this GNU/Windows

They could brand it as “New Windows”

Out with the old, in with the GNU.

I was disappointed when Microsoft dropped original WSL.

I'll admit I wasn't a Windows user at the time, nor since for that matter. But I had been before.

I knew the history of the "Windows Services for UNIX" and thought that it was incredibly interesting to have the Windows kernel, full driver support, NTFS, and the ability to just use Windows normally, but also be able to just do UNIX-type stuff more or less normally.

Which is what I've been doing on my Mac since the early 2000s.

Then Microsoft had to make Windows a complete shit-show. Not like it hasn't happened before, but they really got themselves in deep this time.


> "running WINE on an MS-flavored Linux"

Like obsolete Longene project?

https://en.wikipedia.org/wiki/Longene


Parts of the OS were designed for Office (the Windows Installer service, for example).

> Games are mostly just doing their own thing, only interacting with the system for input & output.

They should be trivial to port then, no?


Yeah, but Windows is a more stable API to develop against than Linux (at least for the stuff games need to do). It doesn't feel "pure", but pragmatically it's much better as a game developer to make sure the Windows version works with Proton than to develop a native Linux version that's liable to break the second you stop maintaining it.

As someone once said it best, Win32 is the only stable ABI on Linux: https://blog.hiler.eu/win32-the-only-stable-abi/

Because free software doesn't need such a thing as a stable ABI.

There does exist Flatpak; everything that would benefit from a stable ABI could use that.


Yes, they are easy to port a lot of the time. Especially now because you can use DXVK to translate DirectX calls into Vulkan, so you don't need to write a Vulkan renderer. Input is sometimes a trickier one to deal with but a lot of the time games are using cross-platform libraries for that already!

Despite all this, the Unity engine has spotty Linux support. Some games run better under Wine than Unity's native Linux builds. Its Vulkan renderer has had a memory leak for a while now, and input has randomly decided to double keypresses on some distros.


The hard part of Linux ports isn't the first 90% (Using the Linux APIs). It's the second 90%.

Platform bugs, build issues, distro differences, implicitly relying on Windows behavior. It's not just "use the Linux API"; there's a lot of effort to ship properly, and all for a tiny user base. There are more users now, but Proton is probably a better target than native Linux for games.


It’s not really about OS differences - as the GP said, games don’t typically use a lot of OS features.

What they do tend to really put a strain on is GPU drivers. Many games and engines have workarounds and optimizations for specific vendors, and even driver versions.

If the GPU driver on Linux differs in behavior from the Windows version (and it is very, very difficult to port a driver in a way that doesn’t), those workarounds can become sources of bugs.


The killer for games tends to be the anti-cheat or anti-piracy layers.

I have a Windows game I can't run under CrossOver (aka Wine 11) or a VM, only because its anti-piracy layer doesn't accept those circumstances.


Meanwhile I had to pirate Dark Souls 1 because Microsoft's own DRM prevented the legitimately purchased game from saving on Windows, and download official no-cd patches for two other games because their DRM stopped working.

The problem with DRM is the DRM.


Is that USD? Fifty two thousand dollars for a watch? You can buy two Chevy Bolts for that.

One of the top stories on HN yesterday was about a company that paid 4-5 average people's wages per person for a team that sat on their butts 8 hours a day and wrote meeting scheduling software for a decade. This was done so they could then sell, not even the software, but... the right to their institutional knowledge for an additional few thousand years worth of average wages.

And of course they're permanently deleting the fruits of that decade's worth of work with 1 week's notice.

And this is the second time the team's leaders have run this play, with the same buyer paying each time: seemingly they can just leave again and keep doing this ad nauseam. (Clockwise)

If you put the value we assign to software engineering in terms of other things it really doesn't make sense either. At least what these people did is something mechanically interesting, unique, and enduring vs the average CRUD app.


I see where you're coming from and I'm glad you've had a successful career, but $52,000 for a watch is absolutely cartoonish money lol. It is definitely a cool piece though, no question.

Fun article, and it hits on the main trick: buy old, used watches. eBay, Watchuseek forums, wherever. You can get sweet old mechanical watches for like $20-200 all day long. And they come in reasonable sizes; modern watches are almost always way too big, IMO.

I snagged one of these on watchuseek 10+ years ago, remains my favorite watch: https://www.fratellowatches.com/citizen-homer-second-setting...


> I find it hard to empathise with people who can't get value out of AI. It feels like they must be in a completely different bubble to me.

I think it depends on why you do programming. I like programming for its own sake. I enjoy understanding a complex system, figuring out how to make change to it, how to express that change within the language and existing code structure, how to effectively test it, etc. I actively like doing these things. It's fun and that keeps me motivated.

With AI I just type in an English sentence, wait a few minutes, and it does the thing, and then I stare out the window and think about all the things I could be doing with my life that I enjoy more than what just happened. I find my productivity is way down this year since the AI push at work, because I'm just not motivated to work. This isn't the job I signed up for. It's boring now.

The money's nice, I guess. But the joy is gone. Maybe I should go find more joy in another career, even if it pays less.


Oh, I agree entirely. The new paradigm is entirely unsatisfying to me too. It's not the same work that I trained my entire life to get good at, and the new work is not as fun. I trained to get good at this work because I've loved it since I was first introduced to it at ~10. I would have done it for free for years, and in fact I did.

Unfortunately that doesn't change my outlook on where all this is headed.


Perhaps, then, you can actually empathize with people who don't get value from it :) I used to enjoy the work, now I don't, so I'm posting on HN and daydreaming about other careers, instead of doing something useful.

Yeah, maybe empathise was the wrong word. I certainly empathise with the feelings, I just struggle to see how people cannot use it to get more done.

I'm also daydreaming about other careers instead of doing something useful.


> I just struggle to see how people cannot use it to get more done.

To be blunt about it, there's a decent chance I'll be quitting this job later this year, largely because of the AI push. I just hate these tools and I do not want to work this way. Losing an employee is a pretty big cost to the company. I guess the AI stuff is probably worth it to them, but there's a downside to it, too.


Yeah I agree with you, and I think a lot of people feel the same. It's totally different now and it's not what I signed up for. Maybe I'll get used to it, idk.

I hope everything works out well for you.


> Instead of doing 1x your normal work, you can do 5x while still maintaining quality.

Yet my pay stays the same, all my coworkers get fired, and Sam Altman gets all of their paychecks. Hrm.


> I'd like to think that the top minds working on AI have a higher purpose than to get the next generation hooked to a digital morphine drip

The next 5 years are going to be very disappointing for you.


Yeah, I got duped by this. Did a CS degree because that's what you're "supposed" to do to get a programming job, and it was almost all theoretical junk I had no interest in. I hated it. I think I learned useful things in like, two of my classes. I knew more about programming than all but one of my instructors. It was awful and going through that degree program is one of the biggest regrets in my life. But hey, I get to stick "CS Degree from University" as the very last line on my resumes, I guess. Woo.

I was directly told by senior staff at a large org I worked for that I'd be eligible for a managerial position-- the only thing I was missing was a degree. Unfortunately, getting a degree while working full time for the income I needed was impossible for me at the time.

My entire career would've been different if I had that "very last line on my resumes" and I'd be better off financially. I just couldn't pull it off. I hope yours pays you back eventually, it seems like you worked hard to get it.


That sucks and is super unfair.

For my career path specifically I don't think it has made a difference. I've only had two software jobs in my 17 year career, the first definitely didn't need a degree and I think my current one would've let me in without a degree as I was referred by an employee. I doubt my next job will still be in software, so I'll probably have gotten largely nothing out of the time & money I blew on getting that useless degree.


Where exactly did this "supposed to" come from? I've never met anyone who expected (or needed) a CS degree to teach them programming.

From the post I was replying to:

> Industry demands specifically university degrees to gatekeep positions.

At the time (mid-2000s), people who wanted to get programming positions got CS degrees, so that's what I did. I didn't expect it to teach me anything, it was just the path I was told I was expected to take. In retrospect I should have done literally anything else, but like that same post said:

> And then we leave teenagers to figure out the puzzle by themselves. I think it's a disservice to the youth.

I was a teenager. I made a bad call and wasted 4 years on a degree program I hated because everyone said a degree is required to get a good job, and the degree that programmers get is CS. Sucks.


So do you think most people get into tens of thousands of debt to be “a better citizen of the world” or to learn what they need to know for some company to allow them to exchange labor for money to support their addictions to food and shelter?

What has that got to do with learning programming? Or not learning programming?

Really? If you don’t know how to program, why would a company hire you to program?
