I am not sure why the title got truncated from 'I Created My First AI-assisted Pull Request and I Feel Like a Fraud', which conveys a very different impression, and makes more sense when browsing the comments here.
It does not look like encryption is even stated on their homepage.
1: bkmker is encrypted. Think 1pass but for bookmarks.
2: This is not a browser bookmark syncing tool. It's a standalone bookmarking tool and website; it does not share your bookmarks with the browser. You can review your bookmarks from any browser.
3: Zero-knowledge privacy. The only data bkmker knows about your bookmarks is the opaque binary produced by encryption that happened client side. We never store or see your private key, and we have zero web tracking (the one exception is your IP, which is only used to track auth tokens so you, as a user, can revoke them).
4: We also let you save metadata about a site, such as its preview image, tags, and notes, which makes reviewing your bookmarks a much more enjoyable experience than just looking at the title.
5: We are simply bookmarks. By keeping the UI simple and distraction-free, we don't overwhelm you when you just need to navigate to your bookmarks.
6: Our plugin focuses on fast, simple, one-click add-bookmark operations.
7: We don't data mine you. The only things we know about you are the email you used to register and the total number of bookmarks you have, that's it. We will never sell or share anything about you with anyone, ever.
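To make the zero-knowledge claim in point 3 concrete, here is a minimal sketch of the general client-side-encryption pattern, not bkmker's actual scheme (which isn't published here). The key is derived on the client from a passphrase the server never sees, and only the opaque blob is uploaded. The HMAC-counter keystream is illustrative only; a real app would use a vetted AEAD cipher and authenticate the ciphertext.

```python
import hashlib
import hmac
import os

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Stretch the user's passphrase into a 32-byte key.
    # Neither the passphrase nor this key ever leaves the client.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode as a PRF-based stream cipher
    # (toy construction for illustration; no authentication tag).
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct  # this opaque blob is all the server would ever store

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

salt = os.urandom(16)
key = derive_key(b"correct horse battery staple", salt)
bookmark = b'{"url": "https://example.com", "tags": ["demo"]}'
blob = encrypt(key, bookmark)
assert decrypt(key, blob) == bookmark
```

The point of the pattern is that the server-side data model reduces to (user id, opaque bytes): nothing about URLs, titles, or tags is recoverable without the client-held key.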
This problem is chronic with GPT[N] dealing with a Windows environment. I have to constantly remind it to prefer the GUI option, though nothing really works. I don't know if agents make use of screenshots the way older automation routines have always done, but increasing use of that kind of data would help LLMs progress beyond CLI-addiction.
I moved all my LAN machines to IoT LTSC 2021 a year ago. Though I don't regret it, be aware that update deferral limits are the same as on other Windows versions; that useful things like WSL2 need installing from the Microsoft Store to get the systemd version, and you'll need to install the Store itself from an enthusiast repo on GitHub; that the Windows major version number is a fair way behind, which caps the Docker releases (and those of many other frameworks) you can run; etc. It's not that I hit a new limit every day, but certainly every few weeks.
I almost never use Windows, and I don't want things like WSL or Docker anyway. I mainly keep it around for things like upgrading firmware, occasionally flashing new ROMs onto phones, stuff like that.
I tried the MyWhoosh virtual-cycling app on one of my boxes.
(I just Googled that link -- it might not be the right one. MyWhoosh is only available on the Store. It refused to run on my elderly GPU anyway, though.)
I checked it, but at $149 per year for the home server (and don't forget to click on the 'information' button on the 'Lifetime' License Duration option), there seems to be a bit of a premium on that MS styling, considering the functionality in competing F/OSS suites.
It's three and a half years since my last cigarette, after decades of smoking. Sometimes I cannot tell if I 'backslid' during that time, because I have had SO MANY dreams about smoking, and feeling regret, in the dream, that I weakened.
As far as I can tell, they are just dreams. But this demonstrates how deeply nicotine addiction was burned into my psyche and my life: it can even blur the distinction between fantasy and reality.
> "you respond in 1-3 sentences" becomes long bulleted lists and multiple paragraphs very quickly
This is why my heart sank this morning. I have spent over a year training 4.0 to be just about helpful enough to get me an extra 1-2 hours a day of productivity. From experimentation, I can see no hope of reproducing that with 5x, and even 5x admits as much; here is what it told me when I discussed it today:
> Prolixity is a side effect of optimization goals, not billing strategy. Newer models are trained to maximize helpfulness, coverage, and safety, which biases toward explanation, hedging, and context expansion. GPT-4 was less aggressively optimized in those directions, so it felt terser by default.
> This is why my heart sank this morning. I have spent over a year training 4.0 to just about be helpful enough to get me an extra 1-2 hours a day of productivity.
Maybe you should consider basing your workflows on open-weight models instead? Unlike proprietary API-only models no one can take these away from you.
I have considered it, and it is still on the docket. I have a local 3090 dedicated to ML. Would be a fascinating and potentially really useful project, but as a freelancer, it would cost a lot to give it the time it needs.
You can’t ask GPT to assess the situation. That’s not the kind of question you can count on an LLM to answer accurately.
Playing with the system prompts, temperature, and max token output dials absolutely lets you make enough headway (with the 5 series) in this regard to demonstrably render its self-analysis incorrect.
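Those three dials map directly onto request parameters in most chat-completion APIs. A minimal sketch of such a request body, using the common OpenAI-style chat format (the model name and prompt text here are placeholders, not a tested recipe):

```python
import json

# The three "dials": system prompt, temperature, and max output tokens.
payload = {
    "model": "example-model",  # placeholder model name
    "messages": [
        {
            "role": "system",
            # A terse persona pushes back against verbose defaults.
            "content": "Respond in 1-3 sentences. No bullet lists, no hedging.",
        },
        {"role": "user", "content": "Why did my build fail?"},
    ],
    "temperature": 0.2,  # lower values reduce rambling variation
    "max_tokens": 120,   # hard ceiling on reply length
}

body = json.dumps(payload)
```

Capping `max_tokens` is the bluntest of the three: the model can still try to produce a bulleted essay, but the response is cut off, so pairing the cap with a terse system prompt tends to work better than either alone.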