
Mixed up engine

@jomega

Thanks again, tremendously informative. I guess I should stop feeling insulted by Stockfish when it calls my move an inaccuracy if the local analysis agrees with it. I've long been wondering about this discrepancy issue. I noticed as well that sometimes the Fishnet analysis recommends a move or a line, but when you play it out you end up with a lower evaluation. How can it be that you are, say, +2.1 as white, and after you make the RECOMMENDED move you are at +1.9?
The other sign here is the evaluation shown after the move.

I opened your game. It goes from 0.9 to 0.5. Losing 0.4 points is pretty significant and definitely an inaccuracy...if that were accurate.

I let the local Stockfish crank for a little while as I looked at your position. It didn't take long to settle on 0.9 for Ne5.

As others have said, the auto annotation is using a lower depth of analysis before writing its opinion in red crayon on your scoresheet and moving on to the next move.

Never trust what an engine tells you based on only one line of analysis; look at multiple lines. Even better, if you don't understand, open up a different engine. I like comparing Stockfish with Lc0.
@Zubbubu said in #32:
> @jomega
>
> Thanks again, tremendously informative. I guess I should stop feeling insulted by Stockfish when it calls my move an inaccuracy if the local analysis agrees with it. I've long been wondering about this discrepancy issue. I noticed as well that sometimes the Fishnet analysis recommends a move or a line, but when you play it out you end up with a lower evaluation. How can it be that you are, say, +2.1 as white, and after you make the RECOMMENDED move you are at +1.9?

Definitely stop feeling insulted by these inaccuracy/mistake/blunder markings that are actually written by Lichess code. The more you learn about how Stockfish works, and how Lichess is using Stockfish at various points in the Lichess interface, the better.

Not only is the difference between 2.1 and 1.9 not necessarily enough to be concerned about, but Stockfish's recommendations are what *it* would play, not what *you* should play. The evaluations change as you play out the moves because Stockfish is now searching more (and different) positions than it did before you played them.
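A toy minimax search can illustrate why the score of the recommended move can drop after you play it: searching the child position to the same fixed depth reaches one ply deeper than the root search did. The tree and numbers below are made up for illustration; real engines search far deeper and prune heavily.

```python
# Toy minimax: all scores are from White's point of view, numbers are invented.
# Each node is (static_eval, {move: child_node}).
tree = (
    0.0,
    {
        "a": (2.1, {                       # the engine's recommended move
            "x": (2.1, {"p": (1.9, {})}),  # Black's refutation, one ply too deep
            "y": (2.5, {"r": (2.4, {})}),
        }),
        "b": (1.5, {}),
    },
)

def minimax(node, depth, white_to_move):
    """Return the minimax value of `node` searched to `depth` plies."""
    static, children = node
    if depth == 0 or not children:
        return static
    scores = [minimax(c, depth - 1, not white_to_move) for c in children.values()]
    return max(scores) if white_to_move else min(scores)

# Depth-2 search from the root: move "a" looks like +2.1.
root_eval = max(minimax(c, 1, False) for c in tree[1].values())
print(root_eval)  # 2.1

# Play "a", then search the new position to the same depth 2: the search
# now sees Black's refutation one ply beyond the old horizon, and the
# score drops to +1.9.
after_a = minimax(tree[1]["a"], 2, False)
print(after_a)  # 1.9
```

The point is not that the engine was wrong at the root; it simply had not yet seen the reply that the deeper (post-move) search finds.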

You might be interested in this study I wrote. In Lichess, there is no user control over the code that does the inaccuracy/mistake/blunder markings. There are programs that do offer that kind of control. In the study, I present a fictional game between two novices. Lichess analysis reports a total of 4 blunders, while an analysis that defines a blunder as a 2-point drop reveals a total of 40! This is due not only to the point-drop threshold for a blunder, but also to the Lichess decision not to mark certain point drops as blunders.
See:
How the game analysis works:
lichess.org/blog/WFvLpiQAACMA8e9D/learn-from-your-mistakes
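To make the contrast concrete, here is a sketch comparing a fixed pawn-drop rule (as in the study) with a Lichess-style rule based on winning chances. The constant 0.00368208 and the 0.1/0.2/0.3 thresholds are taken from my reading of Lichess's open-source code (lila), so treat them as an approximation, not a specification.

```python
import math

def winning_chances(cp):
    """Map a centipawn score to a winning chance on a [-1, 1] scale."""
    return 2 / (1 + math.exp(-0.00368208 * cp)) - 1

def label_by_pawn_drop(cp_before, cp_after, blunder_drop=200):
    """User-defined rule from the study: any 2-pawn drop is a blunder."""
    return "blunder" if cp_before - cp_after >= blunder_drop else "ok"

def label_lichess_style(cp_before, cp_after):
    """Classify by the drop in winning chances, not raw centipawns."""
    delta = winning_chances(cp_before) - winning_chances(cp_after)
    if delta >= 0.3:
        return "blunder"
    if delta >= 0.2:
        return "mistake"
    if delta >= 0.1:
        return "inaccuracy"
    return "ok"

# Dropping from +8.0 to +5.0 is a 3-pawn loss, but both scores are already
# winning, so the winning-chances rule only shrugs at it.
print(label_by_pawn_drop(800, 500))    # blunder
print(label_lichess_style(800, 500))   # inaccuracy

# The same 3-pawn loss from a roughly equal position flips the game.
print(label_lichess_style(50, -250))   # blunder
```

This is why the two analyses disagree so wildly on the fictional novice game: the same centipawn drop is scored very differently depending on how far the position already is from equality.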

Lichess Analysis Issue and Request
@ThunderClap Yeah but can you explain why?

Also peculiar is that the opponent's last move, 20...f5, on which he lost the game for good, didn't get any negative evaluation, although it was clearly a monumental blunder. I suppose Stockfish considered the game already lost.
A negative evaluation? He is at minus 7.6 and was at minus 6. The only peculiar thing around here is the complete disrespect for the 3150-rated player... the engine.
> There are programs that do offer that kind of control. In the study, I present a fictional game between two novices. Lichess analysis reports a total of 4 blunders, while an analysis that defines a blunder as a 2-point drop reveals a total of 40! This is due not only to the point-drop threshold for a blunder, but also to the Lichess decision not to mark certain point drops as blunders.

Wouldn't that assume that SF always produces a credible score, whatever the position? Or that all 2-point drops according to SF (wherever they occur, even with mates lurking in some branches being searched, for example) are equally important to teach?

Perhaps there is a pedagogical reason for the formula to take its inspiration from a winning-chances correction for its blunder scale. I am just wondering why Lichess would not be a direct transmission of SF, the god of engines (and of chess, for many). Or perhaps there are certain types of position where SF's score differentials are credible enough to use verbatim; endgames might not be among them, and not just endgames. Anything with huge values lurking in the search is what I keep coming back to. But maybe I am assuming two huge values differing by more than 2 points, and that case can be spotted. Or is there some middle zone where this could happen? I don't know; I just have doubts.
@dboing
@jomega

I would have thought that on pedagogical grounds any move far from best should be indicated with a negative label, regardless of the overall advantage/disadvantage.
@Zubbubu said in #39:
> @dboing
> @jomega
>
> I would have thought that on pedagogical grounds any move far from best should be indicated with a negative label, regardless of the overall advantage/disadvantage.

I agree that, depending on the person's goals, on pedagogical grounds, errors, as defined by that person and not by Lichess, should be marked. That was really my point in the study. Currently, Lichess decides what constitutes an error, what class the error is, and whether to mark it at all.

@dboing
"Wouldn't that assume that SF always produces a credible score, whatever the position? Or that all 2-point drops according to SF (wherever they occur, even with mates lurking in some branches being searched, for example) are equally important to teach?"

Obviously, SF is sometimes going to get the wrong answer. If the analysis program does not set the search parameters so that SF can search enough of the tree, that easily happens. And the idea is not that every 2-point drop is equally important to teach, but that the user has decided that every 2-point drop is what they want to investigate.
