I know, I know, enough about AI already!
But lots of groups are starting to use these tools, and I’m regularly surprised by how little discussion there is of their extremely dodgy roots and practices. So I’m sharing this series from MIT Technology Review, which is the best thing I’ve seen on the topic, but I’d love any other recommendations…
In other news, the software that runs this forum is starting to integrate AI-based features and tools. Things like improved suggested discussions; moderation support through ‘toxicity detection’; sentiment analysis; and even composer help through suggested titles.
We probably won’t use any of these, but if we did, luckily the organisation behind the software is an open source company with a commitment to prioritising more ethical tooling, so almost all of these features use defaults from more ethical sources.