It reasoned through its choices. It told me about the vineyard and what most people say about it. It gave me 3 food options originally, and I said go by best value. It's the same as if a friend or a server recommended something to you. How do I know the AI is giving me good advice? I don't, but I've lived on earth long enough to know it's not bad advice. At the end of the day, the human makes the decision. We don't lose agency unless we just say OK to everything. In this case it was reasonable; in other cases maybe not. We still make the decisions even if we use AI.
So, in the spirit of not losing agency: out of the 3 items, you told it "best value," which it apparently interpreted as "lowest cost." But best value can also mean portion size relative to cost, and further, value relative to your dietary restrictions and to the economic activity of the area. I guarantee none of those things were involved in the LLM's "reasoning." It's okay to be wowed by sleight-of-hand LLM tricks, but expect everyone to be skeptical when you describe the situation.
The above criteria are how I would evaluate the best value among menu items. If an LLM just handed me the cheapest of 3 dishes, I wouldn't use it, because it isn't adding anything beyond the agency I already had.
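To make that concrete, here's a minimal sketch of the kind of evaluation I mean. The menu data, prices, portion sizes, and scoring rule (portion per dollar, filtered by dietary restrictions) are all invented for illustration:

```python
# Hypothetical "best value" scoring: portion size per dollar,
# filtered by the diner's dietary restrictions. Menu data is made up.

def best_value(menu, restrictions=()):
    # Drop dishes that conflict with the diner's restrictions.
    eligible = [d for d in menu if not set(d["tags"]) & set(restrictions)]
    # Score by portion (oz) per dollar, not by lowest price alone.
    return max(eligible, key=lambda d: d["portion_oz"] / d["price"])

menu = [
    {"name": "chicken marsala", "price": 18.0, "portion_oz": 14, "tags": []},
    {"name": "side salad",      "price": 6.0,  "portion_oz": 4,  "tags": []},
    {"name": "pork ragu",       "price": 22.0, "portion_oz": 16, "tags": ["pork"]},
]

print(best_value(menu, restrictions=["pork"])["name"])  # -> chicken marsala
```

Notice the cheapest dish doesn't win: the salad is $6 but only scores about 0.67 oz per dollar, while the marsala scores about 0.78, and the ragu is excluded by the restriction entirely. "Cheapest" and "best value" are different questions.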
Here’s another possible scenario:
The LLM retrieved "best value" (or some synonym) from an online review of the restaurant. The review says, "I think the chicken marsala is the best value." But the review was posted 5 years ago, and the portion sizes changed when the new owner decided to shrinkflate the food. Is the chicken marsala still the best value? Was it ever?
I think it's clear you don't understand LLMs and how they operate. The model is trained on statistical patterns in the data. Your description is the opposite of how real-world LLMs work.