Recommendation systems (e.g., Twitter's) optimize for your attention and indulge you, often to the detriment of your well-being. Their objective is fundamentally misaligned with yours.
We are starting an open source initiative, RecAlign (short for Recommendation Alignment), to address this misalignment. We use large language models (LLMs) to vet and remove recommendations according to your explicitly stated preferences, in a transparent and editable way.
We are developing a Chrome extension. You specify your preference for viewing recommendations (e.g., Tweets) in plain language, such as "I like reading about AI research". We then use an LLM to filter out recommendations you do not wish to see.
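The filtering step can be sketched as follows. This is a minimal illustration, not the extension's actual code: the `ask_llm` callable stands in for whatever LLM API the extension uses, and the prompt wording and function names are hypothetical. Only the prompt construction and yes/no verdict parsing are shown concretely.

```python
# Sketch of RecAlign-style filtering. `ask_llm` is a placeholder for a
# real LLM client; the prompt text below is an illustrative assumption.

def build_filter_prompt(preference: str, post_text: str) -> str:
    """Ask the model for a strict YES/NO verdict on whether a post
    matches the user's stated preference."""
    return (
        "A user stated the following preference for their feed:\n"
        f'"{preference}"\n\n'
        "Does the post below match that preference? Answer only YES or NO.\n\n"
        f"Post: {post_text}"
    )

def parse_verdict(llm_response: str) -> bool:
    """Interpret the model's reply; default to showing the post (True)
    when the answer is ambiguous."""
    return not llm_response.strip().upper().startswith("NO")

def filter_feed(preference, posts, ask_llm):
    """Keep only posts the model judges consistent with the preference.
    `ask_llm` is any callable that takes a prompt string and returns text."""
    return [
        post for post in posts
        if parse_verdict(ask_llm(build_filter_prompt(preference, post)))
    ]
```

Because the preference and the prompt are plain text, a user can read and edit exactly what the model is being asked, which is what makes this kind of filtering transparent and editable.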
Perhaps you signed up for Twitter to keep up with research, but a single click on a funny meme will flood your timeline with similar content.
Perhaps you are recovering from alcohol addiction, but the recommendation system knows your weakness all too well and fills your feed with alcohol ads.
The recurring problem is that recommendation systems are skilled at catering to who you are but contribute nothing toward who you aspire to be. As a practical first remedy, we believe you, as a user, have the right not to see what you do not wish to see, and we want to share this superpower with you via RecAlign, an open source initiative for recommendation alignment.
Further reading: The Alignment Problem, by Brian Christian.