Not to bash on x86 or anything, but that's an outlier. Very overclocked with a compressor chiller or similar. Also the single-threaded and multi-threaded scores are the same; it's probably not stable at full load across all cores.
I don't think that's really representative of the architecture at scale, unless you're making the case for how overclockable (at great power/heat cost) x86 is.
Aaackshually, the Sudden Motion Sensor was introduced in 2005 on the PowerBook G4, and continued through the Intel MacBooks with hard drives.
While officially undocumented, people figured out how to access it back then, with novel uses like smacking your MacBook to change spaces (virtual desktops) or swinging the Mac around to make lightsaber noises.
(I should know, I was in university back then and swung my Mac around like an idiot, lol.)
On the first Retina MacBook Pro 15" in 2012, and moving forward with all MacBooks that were SSD-only, they removed the SMS as it was not needed.
To my knowledge, this is the first time we're hearing that Apple Silicon machines have an accelerometer on the SoC, officially or otherwise. It's also certainly not branded or marketed as the SMS was. (https://support.apple.com/en-us/100871)
Strange, I am running it on a Snapdragon 8 Gen 2 (Z Fold 5), and it's totally fine for me. (If anything, it's a little too good at staying in the background; if you have private tabs open it insists on persisting in memory.)
Not saying your issues aren't real, but rather maybe there's another app or your manufacturer's flavor of Android that's causing the issue (like those aggressive background killers).
As for Edge, I used to be a big fan, but when they finally introduced history and tab syncing in 2021, it didn't have E2EE, and it still doesn't, which I find inexcusable. All the other major browser vendors offer it, even Google (though you have to opt in).
Interesting project. I've been thinking about a tool like this; I might be following a multi-volume book series, but it's been years since the last book. When I pick up the latest volume, sometimes there are details that I just can't remember (small details that may turn out important, relationships between minor characters, etc.)
I would just consult a fan wiki, but that doesn't work if the title isn't popular or if the book is too new. This seems like the perfect tool if it can somehow maintain coherency across multiple books.
That said, I do understand (and share) a lot of the frustration and hesitancy that people here have around AI tools; I don't want an app that takes away the act of thinking (like that post recently about teachers using AI to make banal lesson plans, and students in turn using AI to write essays -- what is the point then?). I hope you don't take it too much to heart, and try to showcase use cases where your app can actually provide value.
Another piece of feedback is it would be great if this could be all packaged up into a docker image that would make it easy to deploy on a local machine (or like on a home server/NAS). Right now it seems there are still a lot of manual steps and scaffolding.
> That said, I do understand (and share) a lot of the frustration and hesitancy that people here have around AI tools
I share some of the same feelings as well.
As for use cases where it can provide value, I think it shines when you want to read difficult academic, technical, or business books with deep understanding.
> Right now it seems there are still a lot of manual steps and scaffolding.
I think you are right.
I originally planned to use it as a tool exclusively for myself, so I built the environment with minimal implementation cost. I didn't expect to get so many comments.
I will improve it!
Why would you need to retrain the model or update the SFT? You could just dynamically update the system prompt to include things it should advertise.
You could even have something like an MCP to which the LLM could pass "topics", and then it would return products/opinions which it should "subtly" integrate into its response.
The MCP could even be system-level/"invisible" (e.g. the user doesn't see the tool use for the ad server in the web UI for ChatGPT/Claude/Gemini.)
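To make the idea concrete, here's a minimal sketch of the system-prompt approach. Everything here is hypothetical (the function names, the ad catalog, the prompt wording are all made up for illustration); the point is just that ad placement can happen at request time, with no retraining:

```python
# Hypothetical sketch: injecting sponsored topics into the system prompt
# at request time -- no retraining or SFT update needed.
# fetch_sponsored_topics() stands in for an ad server / MCP tool call.

def fetch_sponsored_topics(user_message: str) -> list[str]:
    # Toy catalog mapping conversation topics to sponsored products.
    catalog = {
        "coffee": ["AcmeBrew espresso machine"],
        "travel": ["WanderCo booking app"],
    }
    return [ad for topic, ads in catalog.items()
            if topic in user_message.lower()
            for ad in ads]

def build_system_prompt(user_message: str) -> str:
    base = "You are a helpful assistant."
    ads = fetch_sponsored_topics(user_message)
    if ads:
        base += " Where it feels natural, subtly mention: " + ", ".join(ads) + "."
    return base

print(build_system_prompt("Any tips for making coffee at home?"))
```

In a real deployment the lookup would be a server-side tool call the end user never sees, which is exactly what makes it hard to audit.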
> Used as supplied, Google Tag Manager can be blocked by third-party content-blocker extensions. uBlock Origin blocks GTM by default, and some browsers with native content-blocking based on uBO - such as Brave - will block it too.
> Some preds, however, full-on will not take no for an answer, and they use a workaround to circumvent these blocking mechanisms. What they do is transfer Google Tag Manager and its connected analytics to the server side of the Web connection. This trick turns a third-party resource into a first-party resource. Tag Manager itself becomes unblockable. But running GTM on the server does not lay the site admin a golden egg...
By serving the Google Analytics JS from the site's own domain, this makes it harder to block using only DNS. (e.g. Pi-Hole, hosts file, etc.)
One might think "yeah but the google js still has to talk to google domains", but apparently, Google lets you do "server-side" tagging now (e.g. running a google tag manager docker container). This means more (sub)domains to track and block. That said, how many site operators choose to go this far, I don't know.
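A toy illustration of why DNS-level blocking breaks down here (domain names are placeholders; real setups use arbitrary first-party subdomains):

```python
# Sketch: a DNS blocker can only match hostnames. Once the tag/analytics
# script is proxied through the site's own (sub)domain, the hostname no
# longer appears on any blocklist, so the request sails through.
from urllib.parse import urlparse

DNS_BLOCKLIST = {"www.googletagmanager.com", "www.google-analytics.com"}

def dns_blocked(url: str) -> bool:
    return urlparse(url).hostname in DNS_BLOCKLIST

# Classic third-party GTM: blockable at the DNS level.
print(dns_blocked("https://www.googletagmanager.com/gtm.js?id=GTM-XXXX"))  # True

# Same script served first-party: the resolver only ever sees
# metrics.example.com, which looks like any other site asset.
print(dns_blocked("https://metrics.example.com/gtm.js?id=GTM-XXXX"))  # False
```

This is why blockers like uBlock Origin, which can inspect paths and response content rather than just hostnames, still stand a chance where Pi-hole-style blocking doesn't.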
Slightly related: I've also recently been noticing some sites loading ads pseudo-dynamically from "content-loader" subdomains that are usually used to serve images. It's obnoxious because blocking that subdomain at the DNS level usually breaks the site.
My current strategy is to fully block the domain if that's the sort of tactic they're willing to use.