Sam Altman is so adept at manipulating the media that they aren't conscious of it at all. He's in 'em like cordyceps - not since Trump has there been someone who so effortlessly injects their narratives and has them so dutifully rebroadcast. Because OpenAI is clearly in the wrong - because they fed their botchkin stuff they didn't own, in this case from one of the most respected manufacturers of news ever - they have had to change tack somewhat. I boiled down a few of the industry's arguments before, but this is just a recitation of Meta's position. And even then, it's stupid. His case is that they don't really need the New York Times' stuff, because they have so much other stuff. Well, first, it sounds like you kinda did need the New York Times' stuff, because you took it. Second, what is the copyright status of the rest of the corpus you just gestured at, Sam? There are types of idiocy to which only the very intelligent may aspire. Ordinarily it would be a great relief to have doofus argumentation like this as your foe, but losing to it is going to feel very, very bad.
It's easy to tell what a person is actually thinking: listen to what they say. It's in there, either by constant repetition or by conspicuous absence. It gets easier and easier to do the more you practice it. So, no - I don't believe that the masters of AI truly think they're in the right about their buffet-style approach to feeding their models with the works of living people. They talk about it too much. That doesn't mean they won't win. For you and me, the American legal system is a graveyard. For our masters, it's merely a toll gate - fat with their stolen wealth, they can pay those at the fore and those behind the scenes with the money they stole from… Well, everyone. It's a recognized dynamic, fractal in its duplication and precision. I am not telling you this because I am attempting to entertain you with content. I am telling you this because we're living through history, and you need to understand that.
I was watching some kind of studio podcast thing with the two necromancers who dug up George Carlin just to fuck him, and they were talking about how the current large language models have this problem - the problem of licensing and attribution. They say a future version of these tools won't be akin to an LLM at all - it will be a machine that learns on its own. That's… what? That's still a product ingesting other people's products. How stupid are you? How stupid do you think I am? They share the same condition as so many of their ideological kin: they are either pretending not to know what a person is, or they genuinely don't know. Neither mindset belongs anywhere near policy.
Now: #Fridabe! Returnal! Then, Endless Dungeon with Lord Balvin! Right here!
(CW)TB out.