Starting parse
Entering state 0
Reading a token: Next token is token ID ()
Shifting token ID ()
Entering state 2
Reducing stack by rule 6 (line 19), ID -> f
Stack now 0
Entering state 5
Reducing stack by rule 4 (line 15), f -> t
Stack now 0
Entering state 4
Reading a token: Next token is token '*' ()
Shifting token '*' ()
Entering state 9
Reading a token: Next token is token ID ()
Shifting token ID ()
Entering state 2
Reducing stack by rule 6 (line 19), ID -> f
Stack now 0 4 9
Entering state 12
Reducing stack by rule 3 (line 13), t '*' f -> t
Stack now 0
Entering state 4
Reading a token: Next token is token '+' ()
Reducing stack by rule 2 (line 11), t -> e
Stack now 0
Entering state 3
Next token is token '+' ()
Shifting token '+' ()
Entering state 8
Reading a token: Next token is token ID ()
Shifting token ID ()
Entering state 2
Reducing stack by rule 6 (line 19), ID -> f
Stack now 0 3 8
Entering state 5
Reducing stack by rule 4 (line 15), f -> t
Stack now 0 3 8
Entering state 11
Reading a token: Now at end of input.
Reducing stack by rule 1 (line 9), e '+' t -> e
Stack now 0
Entering state 3
Now at end of input.
Cleanup: popping nterm e ()
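For reference, the reductions named in this trace (rules 1–4 and 6) correspond to the classic three-level expression grammar over e, t, and f. The sketch below is a reconstruction from those rule names, not the original source: the file name expr.y, the parenthesized-expression alternative assumed for rule 5, and the toy yylex, yyerror, and main are additions made only so the example is self-contained.

```
/* expr.y -- minimal sketch of a grammar consistent with the trace above.
   Rules 1-4 and 6 come from the reductions shown in the trace; rule 5
   and all helper code are assumptions. */
%{
#include <stdio.h>
#include <ctype.h>
int yylex(void);
void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
%}
%debug
%token ID
%%
e : e '+' t         /* rule 1: e '+' t -> e */
  | t               /* rule 2: t -> e       */
  ;
t : t '*' f         /* rule 3: t '*' f -> t */
  | f               /* rule 4: f -> t       */
  ;
f : '(' e ')'       /* rule 5 (assumed)     */
  | ID              /* rule 6: ID -> f      */
  ;
%%
/* Toy lexer: a run of letters is an ID; '+', '*', '(', ')' are returned
   as themselves; newline or EOF ends the input. */
int yylex(void) {
    int c;
    while ((c = getchar()) == ' ' || c == '\t')
        ;
    if (c == EOF || c == '\n')
        return 0;               /* report end of input to the parser */
    if (isalpha(c)) {
        while (isalpha(c = getchar()))
            ;
        ungetc(c, stdin);
        return ID;
    }
    return c;
}

int main(void) {
    yydebug = 1;                /* enable the run-time parse trace */
    return yyparse();
}
```

Building this with something like `bison expr.y && cc expr.tab.c -o expr` and feeding it a line such as `a * b + c` should produce a reduction sequence of the same shape as the trace above, though exact state numbers and message wording vary between Bison versions.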