
Thought per Actor

Here we investigate what the actors (LLMs themselves and Big Tech) think regarding two of our questions:


Question-03 of 17: What do the Humans developing and promoting AI think about what it will do to other Humans?


Question-04 of 17: Does what the Humans developing and promoting AI say correspond with what they are actually doing?


So, we try to understand their philosophy rather than their business goals: not their vision of "what cool features we can implement", but their attitude toward people, and the difference (if any) between what they say about this attitude and what they actually do. Interestingly, there can be no such thing as "no attitude": if an actor says nothing about how its work or actions affect people, it means it ignores them, which is an attitude in itself.

Failing Together

While all the major "LLM actors" claim to understand the risks and to "talk openly" about them, their actual behavior tends to increase those risks rather than mitigate them. While OpenAI (the LLM pioneer) made its way from an open-source scientific project to a giant, greedy corporation that promises investors more than a 100x return, its illegitimate child Anthropic, born out of the wish to be (unlike its "parent") "genuinely safe", has prioritized staying competitive over the existential and catastrophic risks it originally warned about.

Actors
