AI, ChatGPT and their associated tools fascinate me.
I had been jaded by yet another hype cycle (I must confess that Web3 leaves me cold), but the possibilities around AI feel genuinely different.
Still, I worry about how we're diving into this without a clear sense of where it leads. OpenAI's new statement of intent has its detractors (and some blogs I read are concerned about the impact of AI-rmageddon).
I recently read a fascinating book about longtermism: What We Owe the Future by William MacAskill. In it, AI takeover ranks among the leading existential risks to humanity (alongside favourites like engineered pathogens, nuclear war and climate change).
Setting aside any RoboCop-style dystopian future, one of the book's more interesting points was that our cultural values could be locked in at their 2023 state forever if we build AI on today's data. We think of ourselves as enlightened beings, but so did everyone in 1923, and the last 100 years have brought a lot of positive cultural change. We don't know how long humanity may last, so we must be careful about the institutions we set up: they may become part of our culture forever.
OpenAI is now taking a step back and adopting a more cautious approach to safeguarding the future of AI.
What are your feelings about this? How cautious should we be?