Friday, 17th May 2024
Programming mantras are proverbs (via) I like this idea from Luke Plant that the best way to think about mantras like "Don’t Repeat Yourself" is to think of them as proverbs that can be accompanied by an equal and opposite proverb.
DRY, "Don't Repeat Yourself" matches with WET, "Write Everything Twice".
Proverbs as tools for thinking, not laws to be followed.
PSF announces a new five year commitment from Fastly. Fastly have been donating CDN resources to Python—most notably to the PyPI package index—for ten years now.
The PSF just announced at PyCon US that Fastly have agreed to a new five year commitment. This is a really big deal, because it addresses the strategic risk of having a key sponsor like this who might change their support policy based on unexpected future conditions.
Thanks, Fastly. Very much appreciated!
I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars.
Commit: Add a shared credentials relationship from twitter.com to x.com
(via)
A commit to shared-credentials.json
in Apple's password-manager-resources
repository. Commit message: "Pour one out."
Understand errors and warnings better with Gemini (via) As part of Google's Gemini-in-everything strategy, Chrome DevTools now includes an opt-in feature for passing error messages in the JavaScript console to Gemini for an explanation, via a lightbulb icon.
Amusingly, this documentation page includes a warning about prompt injection:
Many of LLM applications are susceptible to a form of abuse known as prompt injection. This feature is no different. It is possible to trick the LLM into accepting instructions that are not intended by the developers.
They include a screenshot of a harmless example, but I'd be interested in hearing if anyone has a theoretical attack that could actually cause real damage here.