The Goal of This Piece of Writing:
What I want to do here is to walk you through the similarities and differences between two intimately related concepts: the "statistic" and the "estimator". However, I do not want to go through the difference between a parameter and a statistic, which I assume is already clear to anyone who is struggling with the difference between a statistic and an estimator. If that is not the case for you, study the earlier posts first, and then come back to this one.
Relationship:
Basically, any real-valued function of the observable random variables in a sample is called a statistic. Some statistics, if they are well designed and have certain good properties (e.g., consistency), can be used to estimate the parameters of the underlying distribution of the population. Therefore, statistics form a large set, and estimators are a subset of that set. Hence, every estimator is a statistic, but not every statistic is an estimator.
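To make the subset relationship concrete, here is a minimal sketch in Python. The population distribution and the numbers are illustrative assumptions of mine, not anything from the post: I draw a sample from a normal population with mean 10, then compute a few statistics, only one of which is naturally an estimator.

```python
import random

random.seed(0)
# A sample of observable random variables (here, assumed draws from a
# normal population with mean 10 and standard deviation 2).
sample = [random.gauss(10, 2) for _ in range(100)]

# Any real-valued function of the sample is a statistic:
sample_mean = sum(sample) / len(sample)    # a statistic...
sample_max = max(sample)                   # ...and so is this
sample_range = sample_max - min(sample)    # ...and this

# The sample mean is also an estimator, because it is used to estimate
# a population parameter (the mean, 10 in this setup). The sample range
# on its own is a statistic that is not aimed at any parameter here.
print(sample_mean)
```

All three quantities are statistics by the definition above; only the sample mean is additionally playing the role of an estimator in this setup.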
Similarities:
Speaking of the similarities, as mentioned earlier, both are functions of the random variables in a sample. In addition, because a sample is random, both are themselves random variables with distributions of their own, called "sampling distributions."
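A sampling distribution can be traced out by simulation: draw many samples, compute the statistic on each one, and look at the resulting collection of values. A minimal sketch (the population, a normal with mean 5 and standard deviation 3, is an assumption for illustration):

```python
import random

random.seed(1)

def one_sample_mean(n):
    """Draw one sample of size n and return its mean, i.e. one draw
    from the sampling distribution of the sample mean."""
    sample = [random.gauss(5, 3) for _ in range(n)]
    return sum(sample) / n

# Repeating the sampling process many times traces out the sampling
# distribution of the statistic.
means = [one_sample_mean(30) for _ in range(2000)]
grand_mean = sum(means) / len(means)
print(grand_mean)  # the simulated values cluster around the population mean, 5
```

The same simulation idea works for any statistic, whether or not it is an estimator; that is exactly why "sampling distribution" is a shared concept.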
Differences:
Speaking of the differences, the two differ in their goals and tasks. The tasks of a statistic include summarizing the information in a sample (via sufficient statistics) and, sometimes, performing hypothesis tests. In contrast, the primary task of an estimator, as its name implies, is to estimate a parameter of the population being studied. It is worth mentioning that there is a wide variety of estimators, each with its own computational logic behind it, such as MOMEs, MLEs, OLS estimators, and so on. Another difference between these two concepts has to do with their desired properties. While one of the most desired properties of a statistic is "sufficiency", the desired properties of an estimator are things like "consistency", "unbiasedness", and "precision".
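To illustrate that different computational logics give different estimators of the same parameter, here is a small sketch. Under an assumed normal population, the MLE of the variance divides by n, while the familiar unbiased sample variance divides by n - 1; both are estimators built from the very same sample (the population parameters below are my own illustrative choices):

```python
import random

random.seed(2)
# Assumed population: normal with mean 0 and variance 4.
sample = [random.gauss(0, 2) for _ in range(50)]
n = len(sample)
mean = sum(sample) / n
sum_sq = sum((x - mean) ** 2 for x in sample)

# Two estimators of the population variance from the same sample:
var_mle = sum_sq / n            # the MLE under normality (biased)
var_unbiased = sum_sq / (n - 1) # the usual sample variance (unbiased)

print(var_mle, var_unbiased)
```

The two estimators disagree slightly on every sample, and which one you prefer depends on which desired property (e.g., unbiasedness vs. maximum likelihood) you prioritize.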
Caution:
Therefore, you need to be careful to use the terminology correctly when dealing with statistics and estimators. For instance, it does not make much sense to talk about the bias of a mere statistic that is by no means an estimator, because in that context there is no target parameter with respect to which a bias could be calculated. So, once again: be careful with the terminology!
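This caution can be made concrete: bias is defined as E[estimator] - parameter, so it only exists once a target parameter is named. A minimal simulation sketch, assuming a normal population with known variance 9, approximates the bias of the divide-by-n variance estimator:

```python
import random

random.seed(3)

def var_divide_by_n(sample):
    """Variance estimator that divides by n (biased for the
    population variance)."""
    n = len(sample)
    m = sum(sample) / n
    return sum((x - m) ** 2 for x in sample) / n

# Bias = E[estimator] - parameter, so it requires a named parameter.
# Here the target is the known population variance, 9.
true_var = 9.0
estimates = [var_divide_by_n([random.gauss(0, 3) for _ in range(10)])
             for _ in range(5000)]
approx_bias = sum(estimates) / len(estimates) - true_var
print(approx_bias)  # theory predicts a bias of -true_var / 10 = -0.9
```

For a statistic with no associated parameter (say, the sample range computed for its own sake), the subtraction in the last step has nothing to subtract, which is precisely why "bias" is not a meaningful label for it.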
The Bottom Line:
To sum up, any real-valued function of the observable random variables in a sample is a statistic. If a statistic is designed to estimate a parameter of a population, then we call it an estimator (of the parameter of interest). However, some statistics are not designed to estimate any parameter; these are not estimators, and here we call them "mere statistics".
What I offered above is the way I look at and think of these two concepts, and I tried my best to put it in simple words. I hope it helps!