First, we prove that in order to make the nonzero difference between two perfect squares as small as possible, their square roots must differ by exactly 1. The argument is as follows:
Let the two squares be $a^2$ and $b^2$, with $a$ and $b$ nonzero. Without loss of generality, assume $b \ge a$, so we can write $b = a + k$ for some integer $k \ge 0$. Computing the difference, we get $b^2 - a^2 = (a+k)^2 - a^2 = a^2 + 2ak + k^2 - a^2 = 2ak + k^2$. For the difference to be nonzero we need $k \ne 0$, and since $2ak + k^2$ only grows as $k$ increases, the difference is minimized by taking $k$ as small as possible, namely $k = 1$; that is, the square roots must differ by 1.
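As a quick numerical check of this identity, with the arbitrarily chosen values $a = 3$ and $k = 2$ (so $b = 5$):
\[
b^2 - a^2 = 25 - 9 = 16 = 2 \cdot 3 \cdot 2 + 2^2 = 2ak + k^2.
\]
With the minimal choice $k = 1$, the difference reduces to $2a + 1$, the familiar gap between consecutive squares, e.g. $4^2 - 3^2 = 2 \cdot 3 + 1 = 7$.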
To see whether this difference can actually equal 1, suppose $2ak + k^2 = 1$. Factoring the left-hand side gives $k(2a + k) = 1$, and since the only ways to write 1 as a product of two integers are $1 \cdot 1$ and $(-1) \cdot (-1)$, we must have either $k = 2a + k = -1$ or $k = 2a + k = 1$.
If $k = 2a + k = -1$, then $2a - 1 = -1$, so $a = 0$, contradicting our assumption that $a$ is nonzero.
If $k = 2a + k = 1$, then $2a + 1 = 1$, so $a = 0$, again a contradiction.
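Both cases collapse to the same computation, which can be summarized in one line:
\[
k(2a + k) = 1 \;\Longrightarrow\; k = 2a + k = \pm 1 \;\Longrightarrow\; 2a = 0 \;\Longrightarrow\; a = 0.
\]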
Thus, two nonzero squares $a^2$ and $b^2$ cannot differ by 1.
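Concretely, the gap between consecutive nonzero squares is $2a + 1 \ge 3$; the first few such gaps, shown purely for illustration, are
\[
2^2 - 1^2 = 3, \qquad 3^2 - 2^2 = 5, \qquad 4^2 - 3^2 = 7,
\]
so the smallest possible nonzero difference between two nonzero perfect squares is 3, never 1.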