When the model is trained via reinforcement learning, is the model architecture still the same? Like, you first train the LLM as a next-token predictor with a certain architecture and it ends up with certain weights. Then you apply RL to that same model, which modifies the weights in such a way that it considers whole responses?
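
To make concrete what I'm picturing, here is a minimal, hedged sketch (not a real RLHF setup): the same pretrained causal LM is loaded, a whole response is sampled from it, and a REINFORCE-style policy-gradient step updates the very same weights based on a reward for the full response. The model name `gpt2` is just a stand-in, and `reward_fn` is a dummy placeholder for a learned reward model.

```python
# Sketch only: same architecture/weights as pretraining, fine-tuned with a
# policy-gradient update computed over the whole sampled response.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for the pretrained next-token predictor
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # same architecture, pretrained weights
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def reward_fn(text: str) -> float:
    # Placeholder reward; in RLHF this would be a learned reward model.
    return float(len(text.split()))

prompt = "Explain reinforcement learning in one sentence:"
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

# Sample a whole response from the current policy (the same LLM).
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=30, do_sample=True)
response_text = tokenizer.decode(generated[0, prompt_len:], skip_special_tokens=True)
reward = reward_fn(response_text)

# Re-run the forward pass with gradients and collect per-token log-probs.
logits = model(generated).logits[:, :-1, :]
targets = generated[:, 1:]
log_probs = torch.log_softmax(logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)

# Only the response tokens count; the reward for the whole response weights
# the log-probability of every token in it (REINFORCE, no baseline).
response_log_prob = log_probs[:, prompt_len - 1:].sum()
loss = -reward * response_log_prob

optimizer.zero_grad()
loss.backward()
optimizer.step()  # modifies the *same* weights that pretraining produced
```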