This paper introduced convolutional (weight-sharing) networks, now a core component of what are popularly known as Deep Neural Networks, and showed they could be applied to real-world problems. It has been cited 24,100 times, according to Google Scholar (2020-01-29): over 1,000 citations per year on average.
Oh, and (psychologists take note) it was published in conference proceedings.
Not a one-off. How about this conference paper? It's by Simonyan & Zisserman, it builds on the LeCun paper, it was published in 2014, and it has averaged 5,500 citations per year.
Back in November 2018, a few of my colleagues and I read a recently published article in Psychonomic Bulletin & Review. The article concerned the evidence for dissociable learning processes in comparative and cognitive psychology. We had all previously critiqued, in print, some part of the evidence presented. We had no particular reason to assume that the authors would agree with our critiques, and that's fine: it's all part of the continuing debate and dialogue of science. What was perturbing was that the review had largely been written as if no such critiques existed.
In our response (now accepted by PB&R), we coined the term testimonial review for this type of article. The term refers to a well-known technique in advertising: promoting a product by highlighting only the cases that put it in a good light. Of course, you can't scientifically evidence a claim simply by reporting the data that supports it. One has to consider the evidence both for and against, weigh it, and come to a conclusion. Good science involves showing your working, so one would expect this weighing of evidence to be part of any scientific review paper. We call this a balanced review.
Testimonial reviews are not good science. They are potentially misleading, and may lead others to base their own work on the incorrect assumption that a particular issue is resolved. Science isn't advertising ... or, at least, it shouldn't be.