This is just my personal opinion. It is not uncommon to use standard errors in lab graphs, but most of the time I do not feel it is genuine. In fact I cannot really see a scenario in which it is genuine, although I may be wrong.

Standard deviation vs standard error

Standard deviation is a description of the distribution of data. If you repeat an experiment 100 times for some result, or recruit 100 people and collect information on their characteristics, the values will not all be exactly the same; they will form a distribution. In many situations the data are described by means and standard deviations (SDs). With the same mean of 170cm, for example, an SD of 7 and an SD of 20 would give very different pictures of the distribution. A larger SD means the data are more spread out.

Standard error (SE) is normally short for standard error of the mean (SEM), and is essentially the standard deviation of the mean. This is not about how the 100 people are distributed. It is a what-if scenario: if you could recruit 100 people at random from your background population many times over, and compute the mean of their heights each time, the means would not all be exactly the same; they would form a distribution of means. The standard error is the hypothetical standard deviation of that distribution of means.
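The what-if scenario above can be simulated directly. A minimal sketch, assuming a normal population with mean 170cm and SD 7cm (the numbers from the example below): recruit 100 people many times, take the mean each time, and look at the spread of those means.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: heights with mean 170 cm, SD 7 cm (assumed normal).
POP_MEAN, POP_SD, N = 170, 7, 100

# Recruit N = 100 people many times and record the mean height each time.
means = []
for _ in range(5000):
    sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(N)]
    means.append(statistics.mean(sample))

# The SD of this distribution of means is the standard error of the mean,
# and it matches the textbook value SD / sqrt(N) = 7 / 10 = 0.7.
sd_of_means = statistics.stdev(means)
theoretical_se = POP_SD / N ** 0.5
print(round(sd_of_means, 2), round(theoretical_se, 2))
```

The simulated SD of the means lands very close to 0.7cm, an order of magnitude smaller than the 7cm SD of the people themselves.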

Standard errors will always be smaller than standard deviations

For a reason that is very obvious, which I am not explaining here but which you can google (you will need to recruit your high-school math sense), standard errors are always smaller than standard deviations. Plotting your graph with standard errors always makes the graph look more reliable, but it does not make a lot of modern sense. This is because what we almost always want to know is the distribution of the data, not the hypothetical distribution of the mean. What else we would need to know is the confidence interval of the difference between groups, not the standard error of the mean of either group.
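The high-school fact being alluded to is the relation SE = SD / √n: for any sample of more than one person, dividing by √n can only shrink the number. A one-line sketch:

```python
def standard_error(sd: float, n: int) -> float:
    """Standard error of the mean: the SD shrunk by a factor of sqrt(n)."""
    return sd / n ** 0.5

# With the height example: SD 7 cm and n = 100 people.
se = standard_error(7, 100)
print(se)  # 0.7 -- ten times smaller than the SD of 7
```

So error bars drawn with SEs are always tighter than the spread of the underlying data, which is exactly why they flatter a graph.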

As an example, when we simply compare height between men and women, what we would like to know in the modern sense is the distribution of the height of men (say mean 170cm, SD 7cm), of women (say mean 164cm, SD 6cm), and the confidence interval of the difference (say a difference of 6cm with SE 2cm, although we could use the word SD here with the same meaning).
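A rough sketch of where such a confidence interval comes from, using the standard formula for the SE of a difference of two independent means. The group sizes of 25 each are my assumption for illustration, not part of the example, so the resulting SE lands near but not exactly at 2cm:

```python
# Summary statistics from the example; the group sizes (n = 25 each)
# are assumptions for illustration only.
mean_men, sd_men, n_men = 170, 7, 25
mean_women, sd_women, n_women = 164, 6, 25

diff = mean_men - mean_women  # 6 cm
# SE of a difference of two independent means.
se_diff = (sd_men ** 2 / n_men + sd_women ** 2 / n_women) ** 0.5
# Approximate 95% confidence interval for the difference.
ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)
print(round(se_diff, 2), tuple(round(x, 1) for x in ci))
```

Note that the two SDs feed into one SE for the difference; the separate SEs of the two group means never need to be drawn on the graph.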

What we might not want to know

In this scenario I cannot see how the SEs of the mean in men and women are important. Of note, we will probably move beyond caring about the p value, which nominally tells you ‘whether there is a difference between men and women.’ A numerate person should be able to see this from the confidence interval of the difference.

But be careful with the sentences below if you would like to be more accurate. Here you have the estimated difference of 6cm with SE 2cm, and so the chance of observing a number like 6cm when there is no true difference (null hypothesis of 0cm) would be low. You might also say that, given the data observed, the chance of no difference (0cm) is low. Can you tell the difference between the two sentences?
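The first sentence is the one a p value actually answers, and it can be simulated. A sketch, assuming the estimate is normally distributed with the SE of 2cm from the example: in a world where the true difference is 0cm, how often would we see an estimate at least as extreme as 6cm? (The second sentence asks about the probability of the hypothesis given the data, which this simulation does not compute and which would need extra assumptions.)

```python
import random

random.seed(0)
SE, OBSERVED = 2.0, 6.0

# Null world: the true difference is 0 cm, and the estimated difference
# scatters around 0 with standard error 2 cm (normal model assumed).
null_estimates = [random.gauss(0, SE) for _ in range(100_000)]

# Two-sided: how often does a null world produce an estimate at least
# as extreme as the observed 6 cm?
p_sim = sum(abs(d) >= OBSERVED for d in null_estimates) / len(null_estimates)
print(round(p_sim, 4))
```

The simulated proportion comes out well under 1% (6cm is three SEs from zero), which is why we would call the observed 6cm ‘unlikely under the null.’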