Machines can't think. My argument is as follows:
There is no thought without a subject (what would a thought without a subject even be?).
Machines never deal with subjects and could never deal with them.
Three supporting observations:
1. Neural networks don't deal with subjects, which explains their behavior (yes, we know NNs are still inadequate, but they're good practical examples to look at).
2. A thought experiment illustrating the nature of an algorithm (this is my variation of the Chinese Room Argument, and it avoids certain problems by not using a third person as a rhetorical device):
You memorize a whole bunch of shapes. Then you memorize the order the shapes are supposed to go in, so that if you see a bunch of shapes in a certain order, you “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language? (A code sketch of this procedure follows the three observations.)
https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46
3. (Related to #2) Machines work by matching internal states (pattern matching). All that a machine "deals with" is its own internal states; there's no such thing as an "external world" to a machine. Machines are epistemically landlocked. The power to actually "refer to" anything external is intentionality, "the power of minds and mental states to be about, to represent, or to stand for, things, properties and states of affairs." What do a machine's sensors do? They generate signals, which are then matched to other signals: pattern matching again (see #2 above, and the second sketch below).
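To make the thought experiment in #2 concrete, here is a minimal sketch. The shape tokens and the rulebook entries are invented for illustration; the point is only that producing a "correct" answer is a pure lookup over memorized sequences, with no grasp of what any shape means anywhere in the procedure.

```python
# Observation #2 as code: a memorized rulebook mapping input shape sequences
# to prescribed output shape sequences. The shape names are opaque tokens;
# nothing in this program associates them with any meaning.
RULEBOOK = {
    ("square", "circle", "triangle"): ("star", "star", "circle"),
    ("circle", "circle"): ("square",),
    ("triangle", "star", "square"): ("circle", "triangle"),
}

def answer(shapes_seen):
    """Return the prescribed output sequence for a memorized input sequence.

    This is nothing but pattern matching on internal tokens; neither the
    function nor a person following it by hand consults what any shape means.
    """
    return RULEBOOK.get(tuple(shapes_seen), ("no memorized rule",))

if __name__ == "__main__":
    print(answer(["square", "circle", "triangle"]))  # ('star', 'star', 'circle')
    print(answer(["circle", "circle"]))              # ('square',)
```

Whether those shape sequences happen to encode questions and answers in some language is entirely external to the procedure; whoever (or whatever) runs it learns no meaning by running it.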
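In the same spirit, here is a sketch of #3; the template names, vectors, and the pretend sensor reading are all invented. A "sensor reading" enters the machine as nothing but a signal (here, a vector of numbers), and the machine's only move is to compare it against other stored signals. The label that comes out is itself just another internal token.

```python
# Observation #3 as code: signals matched against stored internal signals.
# No real sensor API is depicted; the reading is just a hard-coded vector.
import math

# Internal templates: stored signal patterns paired with internal labels.
TEMPLATES = {
    "pattern_A": [0.9, 0.1, 0.0],
    "pattern_B": [0.1, 0.8, 0.2],
    "pattern_C": [0.0, 0.2, 0.9],
}

def classify(signal):
    """Return the label of the stored template nearest to the incoming signal.

    The comparison is between one internal state (the signal vector) and other
    internal states (the templates); nothing here refers to, or is about,
    anything outside the machine.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: distance(TEMPLATES[label], signal))

if __name__ == "__main__":
    reading = [0.85, 0.15, 0.05]  # pretend this arrived from a sensor
    print(classify(reading))      # "pattern_A": a signal matched to signals
```

A trained neural network does this kind of signal-to-signal matching at a much larger scale (roughly, weighted sums against learned weight vectors), which is the sense in which the NNs in #1 serve as practical examples of the same point.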
Therefore, machines could never possess thoughts, and thus couldn't think. Q.E.D.