Yes, I did say "Apple has paid for ALL the text that it learned from," but you showed a quote saying Apple admits to licensing only SOME of the text it learned from. I guess I misunderstood Apple's AI keynote. Regardless, your quote shows that Apple paid for SOME of its text, and I'm unaware of any other AI provider that does that. So thanks for the evidence.
From the NY Times website:
"The technology giant has floated multiyear deals worth at least $50 million to license the archives of news articles, said the people with knowledge of talks, who spoke on the condition of anonymity to discuss sensitive negotiations. The news organizations contacted by Apple include Condé Nast, publisher of Vogue and The New Yorker; NBC News; and IAC, which owns People, The Daily Beast and Better Homes and Gardens."
That came from the NY Times prior to the deal being finalized. But I heard later from Apple that the deal was made. And last year the NY Times sued Microsoft and OpenAI for not paying them for the same data that Apple paid for.
The thought behind the smiley: a lot of "communication" is stimulus-response based on learned models, many of which are linguistic. Abstract thought is intimately connected to the expression of thought through language (symbol manipulation).
Quick assimilation and assessment of visual input is required. Humans should currently win due to experience and heuristics but, when it comes to driving, I would expect deep "understanding" about the human condition to offer little competition to lightning-fast responses based on large, learned models—which machines are better at. That's in theory... The thought of automated cars is still unnerving but maybe in part because we see and even experience the horrors inflicted by human drivers that are not fully conscious. A machine will not be half asleep, drunk, drugged up, arrogant, aggressive...
Rarely for lack of conscious responses though.
It doesn't totally make sense to me yet because it's not clear what you mean by "AI", or what the criteria for it are, but I think you are indicating that they show creativity. I understand (i.e. I read... ) that AlphaFold and AlphaZero are in the machine learning (ML) tradition but do they add a new dimension?
If something like creativity or innovation emerges, we then have the question of how such a trait relates to "intelligence".
It's sort of funny that being able to beat humans at chess (AlphaZero and predecessors) was for a long time held to be a test of artificial intelligence until it was possible, and then the goalposts had to be moved.
They are "weak AI". I gather then that you consider AlphaFold and AlphaZero to indicate traits of consciousness..?
There's irony in engaging in an inquiry about the nature and value of AI from a post about AI facilitating the use of a tool to facilitate understanding between people.
Maybe split this AI thread off into a separate AI discussion?
The dimension missing from this conversation is being. We bring our lack of distinction around being to our attempts to distinguish the beingness of AI, and we also misapply the agency possible for beings to non-being entities like corporations when we say things like "Apple did this, that, or the other thing."
Rigorously speaking, Apple doesn't do anything. Neither, from the perspective of agency, does an AI do anything. Meat-grinder or other tool analogies are apt here.
If you've ever watched YouTube videos of elaborate domino falls that incorporate all sorts of pathways that activate a myriad of other pathways, you could consider AI to possess all the intelligence of a very clever arrangement of falling dominoes. I don't think anyone will imbue the output of falling objects with intelligence.
The pivotal distinction needed for clarity around intelligence is the distinction “thingNESS.” In getting clear about what constitutes a thing versus not a thing you can then distinguish thing from self. When you do this you see that self is not a thing. In other words I (universal I not personal I) am not a thing.
Until that is clear we continue to collapse self and thing and consider our self, even The Self, to be a thing among things. We then take attributes of self and apply them to other things. Self then is no different from other things. There is no unique dimension to self.
In order to get uniqueness of self, you have to let go of all things, all notions, and pass through nothingness. It is only through encounters with nothing that thingness and no-thing-ness is distinguished.
Lightning reflexes don't help in a million common scenarios. For example:
If someone waves at you to ask to be let in, your self-driving car won't notice them doing that (they don't know you are in a self-driving car, which is important!) and won't let them in, potentially causing road rage and possibly resulting in you getting hurt.
Or if someone comes running at you with a gun to carjack you, will your AI understand that? Will your AI take evasive action?
Any human can deal with these situations safely, but no AI can.
If cars using AI came with some sort of visual indicator for the world to see, it could help with scenario #1, but it would make scenario #2 worse.
Intelligence. The ability to reason and solve problems.
LLMs solve the problem of language, but not that of meaning.
They model the internal statistics of token streams,
but meaning is the interaction of those streams with their context,
and that interaction is not modelled at all in LLMs.
It is nowhere present in their training data.
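To make "the internal statistics of token streams" concrete, here is a toy sketch (Python, purely illustrative; a real LLM is a neural network trained on a vast corpus, not a bigram table, but the point stands) of a model that knows only which token tends to follow which, and nothing about what any token refers to:

```python
# A minimal sketch of learning token-stream statistics: count which token
# follows which, then generate by sampling from those counts. The corpus
# and names are made up for illustration.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Bigram table: how often each token follows each preceding token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Sample a continuation purely from observed token-to-token statistics."""
    out = [start]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break
        tokens, counts = zip(*options.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

Nothing in that table says what a cat or a mat is; the model only ever sees which tokens co-occur in the stream.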
This is very different from AlphaZero, for example, which experiments to find sequences of choices which win games.
An LLM doesn't even know what a token stream is, just as a bluffer can sound more or less plausible, but doesn't know what they are talking about, and can't solve problems in the domain with which they are faking familiarity.
Artificial in some sense, perhaps, but in no sense intelligent.
The approximate retrieval which LLMs conduct involves no understanding of the meaning of what they are retrieving – no model of its role or function in any non-linguistic context.
As Yann LeCun puts it:
Don't confuse the approximate retrieval abilities of LLMs for actual reasoning abilities.
Training on the archive – the sole content of LLMs – is precisely the part of AlphaGo that had to be discarded to get the stronger play, and greater generality, of AlphaZero.
Experiment is the only source of insight. A toddler can pick things up and play around with them to see what happens. AlphaZero, within a very limited domain, does the same.
Causality can't be derived from correlation – you have to fool around with the system, and measure the extent to which changes in one element lead to changes in another.
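As a toy illustration of that point (Python, purely illustrative, with made-up variables): two quantities driven by a hidden common cause correlate almost perfectly under passive observation, yet setting one of them yourself shows it has no effect on the other.

```python
# Correlation vs. intervention: x and y share a hidden common cause z.
# Observing them suggests a strong link; intervening on x reveals none.
import random

random.seed(0)

def observe(n=10_000):
    """Observational data: a hidden common cause drives both x and y."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)           # hidden common cause
        x = z + random.gauss(0, 0.1)     # x tracks z
        y = z + random.gauss(0, 0.1)     # y also tracks z (but not x)
        data.append((x, y))
    return data

def intervene(n=10_000):
    """Experimental data: we set x ourselves, cutting its tie to z."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = random.gauss(0, 1)           # x chosen by the experimenter
        y = z + random.gauss(0, 0.1)
        data.append((x, y))
    return data

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(corr(observe()))    # close to 1: strongly correlated
print(corr(intervene()))  # close to 0: changing x does nothing to y
```

Only the second run, the intervention, tells you anything about cause and effect.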
LLMs involve no experimentation, and no reasoning, no model of the contexts and roles of token streams – they have no intelligence.
PS I wonder if @BernSh's ideological issues even arise?
It's just a question of capacity – LLMs don't have it.
I think... you mean strong versus weak AI, although even examples of the latter can meet the criteria implied by "reason" and "solve".
Reason: "think, understand, and form judgements logically √"
Solve: "find an answer to, explanation for, or means of effectively dealing with (a problem or mystery)" √
We use words that imply agency (even when talking about objects that we know don't have it—humans are funny like that) so when we think about thinking in machines we have to be precise about our terms, or at least remember that they are not as precise as we might have assumed.
Thanks, I shall look into that further. (Context... I am now thinking again about embodied AI, embodied intelligence...)
On solving problems – let's put it more concretely:
whereas AlphaZero generates innovative Go playing and chess playing, at superhuman level, without relying on game archives, and
AlphaFold achieves a good rate of successful predictions for the 3D structure of proteins, even when these structures have not yet been experimentally investigated,
LLMs just offer approximate retrievals, and pastiche recombinations, without understanding, of existing material.
They can (give or take some stochastic hit and miss) often retrieve existing solutions, but they can't produce new ones.
A piece of pastiche code generated by such a system may look plausible to an innocent, but if it works at all (literally half the time, it doesn't) it's only by accident, and even code which seems to work, at least on first run, is often just accumulating technical debt – further accidents – downstream.
Better for users to just describe a problem and ask for help,
than to start by proudly offering things that the cat brought in.
True enough. If you want ChatGPT to be banned from this website completely, I'd consider supporting that idea, but even if such a rule passed, it would still be hidden behind many of the posts. I don't think it can ever be effectively banned.
I wouldn't think banning should be necessary, but transparency should be expected. I think it would be helpful for people with little-to-no programming experience to be able to post openly that they used machine learning to generate their code and ask more experienced users for the benefit of their feedback on it.
My experience is that any statement of a problem, however minor, harvests generous help.
Even if we were determined to find a role for LLMs in the solution of problems, they would clearly serve the solution provider better than the problem provider. Used at the wrong end, they can only amplify the XY Problem.
Agreed. And even though @BernSh didn't ask for help or feedback on this code or macro, several veteran members still offered the benefit of their knowledge and experience, sharing not just significant improvements but also explanations on why their suggested changes are better and/or safer/more reliable than what ChatGPT haphazardly cobbled together.
As air is to birds and water is to fish, so too are we to ourselves.
You, my dear man, are BEING stingy. It’s tough to hear when it doesn’t agree with how you think you are being. Who you are being lives in the listener and not the speaker.
The space you and I have to be in life lives in the listening of others. If you are interested in who you are being, you need to ask them (the listener) and not ask yourself.
Your dead and smelling things comments were stones thrown to inflict pain, and clearly thrown with the intent to do harm. Calling it stingy was being generous and gentle. More pointedly it was both mean and nasty.
To be clear, I don’t think YOU are mean and nasty. I think you were BEING mean and nasty in speaking those comments.
While it may at first appear to be a small and insignificant difference between BEING mean and nasty and saying you ARE mean and nasty, the distinction is vast. It’s not unlike code that requires straight quotes and not curly quotes, or it simply will not run. Very little difference in the occurrence of what is seen, but all the difference in working or not.
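(For what it's worth, the quotes analogy is literal. A tiny, purely illustrative Python example:)

```python
# Straight quotes: valid Python, runs fine.
print("hello")

# Curly ("smart") quotes look almost identical on screen, but the parser
# rejects them; uncommenting the next line raises a SyntaxError.
# print(“hello”)
```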