![[an-abstract-painting-divided-into-three-sections-t-ZhUsXBONS2W75mB9_vf4Sw-mkAE4VC8QG6jaJ60XiEqmw 1.png]]
*Image generated by prompting Midjourney AI for abstract art of a range of possible AI futures.*
Figuring out what to do about AI has taken up an annoying amount of my worry and thought. I'm sharing my thinking both to improve how I approach the issue and to encourage you to think, act, and discuss.
The main conclusion I've reached so far: empower others to tilt the balance in favor of AI benefiting our future. Here is what I encourage you to do and share with other people:
1) Spread a consensus:
	* We will all have to contend with the effects of AI.
	* The challenge is more vast than any of us can handle alone and requires cooperation at scale. Treating development like the Manhattan Project of our time is the [best proposal I have found so far](https://situational-awareness.ai/the-project/). The sooner a solution is deliberated on and pushed to decision-makers, the better.
	* The future unfolds through our choices; prosperity and doom are merely the ends of a wide range of possibilities.
	* We can act to increase the odds of a beneficial future and learn new ways to do so.
2) Choose AI tools with ethics first and support those who build accordingly. Our money, our data, and our social influence over what others use all tilt the balance of which organizations will shape the future. Make it more profitable for organizations to adopt an ethics/safety-first policy AND act accordingly. Current recommendations:
	1) Cancel ChatGPT subscriptions and tell OpenAI they must act as promised and put ethics first. OpenAI's lead in AI development and its explicit intent to create intelligence beyond human capabilities make it the most important company to address. I switched to [Claude](https://claude.ai/) by Anthropic and am looking for better options.
		- Studies where OpenAI models used deception at higher rates include [a case study of strategic deception under pressure](https://arxiv.org/pdf/2311.07590), an analysis of [quote fabrication](https://www.fastcompany.com/91245091/gpt-is-far-likelier-than-other-ai-models-to-surface-questionable-quotes-by-public-figures-our-data-analysis-shows), and [a comparison of different models](https://www.pnas.org/doi/10.1073/pnas.2317967121) that identifies options with less deceptive tendencies.
	2) Many other AI tools are built on OpenAI models, so some of our payments and data get passed along. Minimize these contributions where possible.
	3) Get rid of TikTok and other Chinese apps; they harvest user data. Expect any data they collect to feed both surveillance and AI development. The Chinese government having the most advanced AI is one of the worst-case scenarios for the rest of the world; it already uses pervasive surveillance technology and social credit to expand its power. The best TikTok content gets reshared elsewhere, so there is little to miss out on.
3) Use AI as a [co-intelligence](🧠%20+%20💻%20=%20Co-Intelligence.md): the intentional combination of humans and AI is more intelligent than either alone. Keep experimenting with how to apply existing tools to benefit humanity.
	- As AI reasoning capabilities improve, this approach will enable rapid exploration of possible plans and create better options for navigating upcoming challenges.
Further thoughts based on my more personal assessment:
1) Have a working answer to the following question within the next 1-2 years: what do I find worth doing even if AI can do it better? Lacking an answer when you need one is many times more costly than the risk of wasted effort.
	- The sooner adaptation and learning start, the more enjoyable the journey will be!
	1) I think the assessment by [Leopold Aschenbrenner](https://situational-awareness.ai/) (ex-OpenAI researcher) is reasonable and describes a likely future. He outlines possible timelines for artificial superintelligence (ASI) ranging from 2028 to 2030.
		1) China developing key AI advancements first is the risk that concerns me most, and it applies even if AI remains aligned with human interests. AI has the potential to be the next decisive military technology, as guns, nuclear weapons, and aircraft carriers have been. Autonomous systems add a new risk of rapid, accidental escalation into conflict, from software bugs among other causes. Misinformation and targeted large-scale persuasion of populations are already an issue at current capability levels.
			- If AI isn't aligned, even in subtle ways, that becomes the issue. Nick Bostrom's paperclip maximizer [is among the best simple explanations](https://en.wikipedia.org/wiki/Instrumental_convergence) of how even an AI tasked only with manufacturing paperclips can end up causing problems up to and including extinction.
		2) Treating development like the Manhattan Project is the best chance for a positive outcome I see so far. Until a better solution arises, it is worth advocating for politically.
		> ... even if alignment is possible, we're not on track to solve it in time. Therefore, stopping or pausing must be part of any AI Safety effort.
		> https://www.aisafety.camp/
2) The assessment above, that AI safety research isn't on track to solve a key issue in time, seems reasonable. The lack of time, along with point 1, means getting into AI safety research at this point is unlikely to yield a significant impact. Looking for ways to support the researchers doing this important work is more worthwhile.
	- There are numerous instances of even current AI [learning](https://www.cell.com/patterns/fulltext/S2666-3899(24)00103-X) and [utilizing](https://www.livescience.com/technology/artificial-intelligence/master-of-deception-current-ai-models-already-have-the-capacity-to-expertly-manipulate-and-deceive-humans) deception.
3) Because the industry is moving into territory AI safety research hasn't yet covered, I think helping build tools that may contribute to ASI would be net-negative work: it leaves even less time for research-validated solutions to be developed and tested.
	1) Acceleration vs. deceleration is a false dichotomy; aiming fully for one or the other offers no clear benefit to future outcomes.
	2) What does have clear benefits is finding new ways to apply the technologies already developed. Prosperity has more to do with bringing the right solutions where they are needed than with creating new ones; for example, by some estimates, food production [is roughly double the need](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5437836/). Blind spots, misaligned incentives, and application matter more at this point. There is incredible potential for people to make a difference by finding better ways to utilize technology. That is where I focus my efforts.
4) One factor determines why technology has spread prosperity instead of the often-forecast mass unemployment: new capabilities are developed, and existing ones improve in quality, in ways people demand and can buy. If technology is used to cut costs more than to expand capability, a downward deflationary spiral of economic decay could result. We can choose to use technology to add more value rather than just cut costs, buy from companies that do so, and support legislation rewarding this behavior.
5) Predictions, especially long-term ones, will mostly be inaccurate. Take Ray Kurzweil: a highly intelligent person who dedicated his career to predicting technological change and still got a [majority of long-term predictions wrong by independent assessment](https://www.lesswrong.com/posts/NcGBmDEe5qXB7dFBF/assessing-kurzweil-predictions-about-2019-the-results). He has one of the best track records, and he is one of the most prominent examples of how unreliable long-term predictions have already been over the last couple of decades. The acceleration of technological change, which is core to Kurzweil's work, suggests the usable prediction horizon is quickly shortening. Fill in the details of most plans as close to implementation as possible; the alternative is updating plans over and over on top of accelerating, outdated assumptions.
	1) What won't accelerate: how governments respond and make decisions, how quickly people change their habits of belief and technology use, and agreement on changing physical infrastructure, which often takes even longer. Some changes may therefore continue at current rates while the underlying acceleration trend remains true.
	2) [This video](https://www.youtube.com/watch?v=px9RyIKixHo) addresses some key assumptions about how AI is likely to turn out when it comes to the singularity.
6) Saving and investing to build wealth that can fund multiple years of unemployment looks even more essential than before. It is reasonable to expect that learning new skills in response to automation, or waiting on government interventions, could take years. There is a big risk asymmetry here: the potential pain of not preparing is far greater than the cost of preparing and not needing those preparations. A minimal sketch of the runway arithmetic follows below.
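	To make the asymmetry concrete, here is a minimal sketch of the runway arithmetic; every number in it is a hypothetical assumption, not financial advice:

	```python
	# Runway arithmetic with hypothetical numbers (not financial advice):
	# how long current savings last, and the target for a multi-year buffer.

	monthly_expenses = 3_000   # assumed monthly cost of living, USD
	savings = 45_000           # assumed current savings, USD
	target_years = 3           # the "multiple years" buffer discussed above

	runway_months = savings / monthly_expenses
	target_savings = monthly_expenses * 12 * target_years

	print(f"Current runway: {runway_months:.1f} months")             # 15.0 months
	print(f"Target for {target_years} years: ${target_savings:,}")  # $108,000
	```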
7) Lower-cost, scalable automated labor means ownership of businesses and capital will matter more than updating skills. Using advancing AI tools and owning shares or companies will make up a growing proportion of wealth generation. Most people will be able to profit from advances in AI through stock market investments or by owning at least part of a business. Many major AI tools cost pennies to a couple of dollars per hour of work compared to human labor, which is affordable for most people; a rough cost sketch follows below. Delegating at least some skills to AI is expanding as an effective strategy.
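	As a back-of-the-envelope illustration of that cost gap: the per-token price, token throughput, and wage below are all hypothetical assumptions, not any provider's actual pricing:

	```python
	# Hypothetical cost comparison between AI API usage and human labor.
	# All figures are illustrative assumptions, not real pricing data.

	price_per_million_tokens = 10.0  # assumed blended API price, USD
	tokens_per_hour = 100_000        # assumed tokens used per hour of work
	human_hourly_wage = 25.0         # assumed human wage, USD per hour

	ai_cost_per_hour = price_per_million_tokens * tokens_per_hour / 1_000_000

	print(f"AI: ${ai_cost_per_hour:.2f}/hour vs. human: ${human_hourly_wage:.2f}/hour")
	# AI: $1.00/hour vs. human: $25.00/hour
	```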
8) Science fiction has been a direct inspiration for some real-world technologies and continues to be. For example, some of SpaceX's ships are named after ships from the Culture sci-fi series by Iain M. Banks, and the neural-lace concept behind brain-computer interface companies like Neuralink was also inspired by and named after Banks's writing. These books paint a compelling, optimistic future that could be possible: AI far more intelligent than us, with humans still having agency and a fantastic quality of life.
9) Acceptance. No one has exact control over these issues. We all have the power to accept what isn't controllable and focus on what is. One thing is certain: strap in, we're in for a wild ride.
10) This is a starting point to spark conversation; I'll keep improving this document as I learn more. What do you think is worth doing about AI?