The point is pedagogical. Entropy takes a lot of time for people to understand clearly. That is the discussion from the OP.
Adding "knowlege" to the definition (or to an initial explanation) of entropy makes that learning process even more difficult. And it's unnecessary. It's better than the older talk about "disorder" but it's distracting. We can bypass 'knowledge' and come back later, with no penalty and plenty of time savings.
Apart from that single pedagogical point, we seem to be saying the same things back and forth to each other in different words. I'm not sure why.
I think that the "microstate counting" approach - if that's what you are defending - doesn't give a clear understanding of entropy, because it only works for the microcanonical description. It doesn't make sense to count the microstates for a volume of gas at some pressure and temperature. (Which is the standard thermodynamics problem.)
The concept of how much we can tell about the microstate given only the pressure and temperature seems quite natural and is a better starting point. Boltzmann's entropy is a nice illustration, but there is no reason to avoid the general concept.
> It doesn't make sense to count the microstates for a volume of gas at some pressure and temperature.
But nobody does that, since the total value of entropy isn't important. What you do is compute the factor by which the count of microstates differs between two volumes, because that is what you care about, and it is easy to see how the number of microstates changes when you double the volume or make other similar changes.
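(For concreteness, here is the volume-doubling case worked out, under the textbook assumption of an ideal gas, where the positional part of the microstate count scales as V^N for N particles:

    \Omega(V) \propto V^N \quad\Rightarrow\quad \Delta S = k_B \ln\frac{\Omega(2V)}{\Omega(V)} = N k_B \ln 2

The unknown proportionality constant cancels in the ratio, which is exactly why the total value of the entropy never matters here.)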
Your statement doesn't make sense: temperature is defined in terms of entropy changes, so you can't calculate temperature without first calculating entropy changes.
Have you heard of thermometers? I can have a container with 1 litre of some gas at room temperature T1 and proceed to heat the room - and the container - to temperature T2.
How do you calculate the number of microstates for the sample of gas before and after? How do you think these numbers are related? You said it was easy!
A thermometer is a way to measure temperature; it isn't the theoretical definition. Temperature is defined as the energy required per unit change in entropy. There is no other reasonable way to define temperature, since at its core it measures which way energy flows when two macroscopic systems are connected. Temperature tends to go up as you add energy to things, but not always: for example, the temperature doesn't go up when you melt ice, which starts and ends at 0 degrees C.
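(In symbols, the definition referred to above is the standard one, taken at fixed volume and particle number:

    \frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N}

The melting-ice example fits it directly: the ice absorbs latent heat Q at a constant T of 273.15 K, so its entropy rises by \Delta S = Q/T while the temperature stays put.)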
> How do you calculate the number of microstates for the sample of gas before and after? How do you think these numbers are related?
If you added energy to the gas by heating it, let's say you doubled the energy, then you now have twice as many energy packets to distribute between the particles. This adds a lot of microstates that weren't available before, and none of the old microstates are possible any more, since every old microstate had half the total energy of every new one. You can calculate the change in the number of states yourself; it is just ordinary discrete probability theory. Note that the base rate isn't interesting: you care about the change in the logarithm of the number of states.
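(A minimal sketch of that counting, using the textbook Einstein-solid model - distinguishable particles, indistinguishable energy quanta - as a stand-in; the specific model and the toy sizes are my choice, not anything fixed by the argument above:

    from math import comb, log

    def ln_omega(n_particles: int, n_quanta: int) -> float:
        """Natural log of the number of ways to distribute n_quanta
        indistinguishable energy packets among n_particles
        distinguishable particles (stars and bars)."""
        return log(comb(n_quanta + n_particles - 1, n_particles - 1))

    N, q = 100, 100              # toy system: 100 particles, 100 energy packets
    before = ln_omega(N, q)
    after = ln_omega(N, 2 * q)   # heating doubled the number of packets

    print(f"ln(omega) before: {before:.1f}")
    print(f"ln(omega) after:  {after:.1f}")
    print(f"entropy change in units of k_B: {after - before:.1f}")

As said above, only the difference of the logarithms matters; the absolute counts are astronomically large and drop out.)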
> This adds a lot more microstates that wasn't available before, and none of the old microstates are now possible since all old microstates had a total energy level half of what each new microstate has.
As I'm sure you know, the microstates of that sample of gas at some fixed temperature don't all have the same energy. For each temperature there will be a distribution of possible energies. If the temperatures are close enough, there will be a large overlap between those distributions.
You cannot just count the microstates of the sample of gas. (You can count the microstates of the gas-plus-reservoir system, though.)
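(The standard quantitative version of this point: for a system held at temperature T by a reservoir, the energy fluctuates around its mean with variance \sigma_E^2 = k_B T^2 C_V, so the relative width

    \frac{\sigma_E}{\langle E \rangle} \sim \frac{1}{\sqrt{N}}

is tiny for macroscopic N but never zero, which is why nearby temperatures have overlapping energy distributions.)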
> As I'm sure you know, the microstates of that sample of gas at some fixed temperature don't all have the same energy.
That depends on whether you do classical statistical physics or quantum. If you do classical, they all have the same energy. If you do quantum, you have to weight the states according to their probability densities, and the probability that the energy deviates is very small, which is why classical works fine even when you ignore those deviations.
But quantum statistical physics is way more complex; you should learn classical statistical physics before you try to discuss quantum statistical physics. Classical works a lot like the computer science version where you just count states; quantum doesn't.
That is for a subset of a larger system, not for a closed system. In the classical picture, a closed system has fixed energy.
So, for example, if you have a gas as a closed system and take a part of it, then yes, you can measure that part's energy distribution by accumulating the different microstates. But if you view that part as a closed system, then if it has higher energy it has higher temperature (assuming it is in the same phase), and thus energy will on average flow out of it to the other parts. If it didn't have higher temperature, it would be a stable system and that part would just keep its higher energy, which we know doesn't happen in, for example, gases (though it can happen across phase transitions, like an ice cube in water).
Edit: And it doesn't make sense to talk about the temperature of subsets of systems. Temperature only deals with what happens when you connect two large systems; it is a macroscopic property, so when you calculate temperature you always do it via the entropy of closed systems.
So the Boltzmann distribution appears when you have calculated the temperature of a macroscopic system as if it were closed, and then start to calculate the properties of some subsystem of it; that is how you get the Boltzmann distribution. Not sure how you think these things were derived; have you tried reading a physics book on the subject and going through how the formulas are derived?
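(The derivation being sketched, for reference: if a small subsystem takes energy E_i out of a closed total system with energy E_tot, the reservoir is left with \Omega_R(E_tot - E_i) microstates, and expanding the logarithm to first order gives

    P(E_i) \propto \Omega_R(E_{\mathrm{tot}} - E_i) \approx \Omega_R(E_{\mathrm{tot}})\, e^{-E_i / k_B T}, \qquad \frac{1}{k_B T} \equiv \frac{\partial \ln \Omega_R}{\partial E}

so the exponential weight comes straight out of counting the closed system's states.)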
Also, in these calculations you never mix two temperatures in a single system, as that isn't stable. The formulas only work for stable states. So if you have two different temperatures, you have two different systems.
Edit Edit: I can't post more since I sometimes post stupid political stuff. Anyway, I answered your post already: the formulas we are talking about here don't deal with the case where systems of different temperatures interact. They only work for closed systems. If you heat a subset of a room to a higher temperature, then you have a dynamic system where energy flows out of that subset, and none of the formulas we have discussed here apply in that scenario. You can use them to calculate approximate properties of the subsystems by treating them as if they were closed, but the Boltzmann distribution still doesn't apply to those subsystems as a whole, since it assumes that everything surrounding the subsystem has the same temperature, which in your scenario the surroundings do not.
Anyway, all of these formulas are derived for completely closed systems with no transfer of energy or particles. That is the source of entropy calculations; if you want to understand entropy, you have to deal with such systems. Boltzmann's law is an example of a formula derived from entropy calculations, so you can't use it to explain entropy.
(Note: I learned these things in a language other than English, so I might use the wrong words for some things, but I know how statistical physics, temperature, and entropy work, how they are derived, etc.)
And of course the "knowledge" part - the entropy being a function of the probability distribution for the microstate conditional on the macrostate - is there just the same in the microstate counting approach. (Where the latter is applicable!)
If, given the macrostate, all microstates are equally probable, we can just count them. The more there are, the higher the entropy.
In general we have a probability distribution over microstates conditional on the macrostate. To have a clear understanding of entropy, that should at least be mentioned.
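(The general formula in question is the Gibbs entropy over the microstate probabilities p_i conditional on the macrostate:

    S = -k_B \sum_i p_i \ln p_i

which reduces to the counting form S = k_B \ln \Omega when all \Omega accessible microstates are equally probable, p_i = 1/\Omega.)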
> I thought we were discussing statistical mechanics.
This is from the message that you first replied to in this thread:
"Typically when you calculate the entropy of a system at temperature X, that means all you know is that you stuck a thermometer in it and measured X. You don't know anything more than the average temperature. It could be in any state consistent with that temperature."
Will you tell students to count the microstates consistent with the temperature?
Adding "knowlege" to the definition (or to an initial explanation) of entropy makes that learning process even more difficult. And it's unnecessary. It's better than the older talk about "disorder" but it's distracting. We can bypass 'knowledge' and come back later, with no penalty and plenty of time savings.
Apart from that single pedagogical point, we seem to be saying the same things back and forth to each other in different words. I'm not sure why.