The opposition between symbolic and connectionist systems has existed since the very start of AI. Still, this opposition has always proved constructive, and renowned experts like Gary Marcus and Yoshua Bengio keep arguing it out for the better.
For my part, I am inclined to side with Gary Marcus because, to put it as simply as possible, I believe no idea exists without words (symbols), which makes text the main component of cognitive systems.
Where do you stand?
Perceptions, on the other hand, are more intuitively bound to reflexes, meaning predefined, unconscious reactions that do not need symbols to function properly.
The dispute also touches on who is responsible for the limits of extended learning, that is, the capacity of a trained system to achieve decent generalisation outside of its training dataset.
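To make this limitation concrete, here is a minimal, purely illustrative sketch (my own toy example, not anyone's actual argument): a linear model fitted to a quadratic function on a narrow range does fine inside that range, but fails badly when asked to extrapolate far outside it.

```python
import numpy as np

# Toy illustration of poor generalisation outside the training range:
# fit a straight line to y = x^2 sampled only on [0, 1].
x_train = np.linspace(0.0, 1.0, 50)
y_train = x_train ** 2

coeffs = np.polyfit(x_train, y_train, deg=1)  # least-squares linear fit
predict = np.poly1d(coeffs)

# Error inside the training range vs far outside it.
in_range_error = abs(predict(0.5) - 0.5 ** 2)
out_of_range_error = abs(predict(10.0) - 10.0 ** 2)

print(in_range_error, out_of_range_error)
```

Inside [0, 1] the fit is close, but at x = 10 the prediction is off by roughly two orders of magnitude: the model interpolates, it does not abstract the underlying rule.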
To me, the question here is more about meta-learning: the capacity to abstract a learned concept and link it to symbolic embeddings.
Last but not least, common sense is still a mystery: it seems to be innate, and it achieves a kind of prioritization of knowledge that allows thoughts to take shortcuts.
I have the intuition that common sense is intimately bound to perceptions.
I truly believe that common sense is, in fact, the cognitive link between perceptions and ideas.
That being said, common sense is hardly a machine feature, and it will hardly become one until machines can experience the world. Experience implies consciousness in some way. But autonomous vehicles, robots, and sensor-loaded smartphones are beginning to get a sense of their world. It is a start.
"No knowledge precedes experience."
Kant, Critique of Pure Reason.