The rapid advancement of LLMs has generated growing interest in their potential role in physics education and assessment, yet their performance on multi-faceted, free-response physics problems remains underexplored. In this study, we systematically evaluate four widely accessible AI systems (ChatGPT 4.1 mini, Gemini 2.5 Flash, Claude 4.0 Sonnet, and DeepSeek R1) on AP Physics 1 and 2 free-response questions administered between 2015 and 2025. Model-generated solutions were produced under standardized exam-style prompting and evaluated by three independent physics experts using official College Board scoring guidelines. All models achieved relatively high mean scores (82-92%), indicating strong capability in structured algebraic problem solving. However, substantial year-to-year variability was observed, particularly for AP Physics 1, where statistical testing revealed no consistent performance hierarchy among the models. In contrast, AP Physics 2 results showed statistically significant differences, with Gemini and DeepSeek demonstrating more consistent performance than Claude. A qualitative analysis revealed recurring error patterns across all models, including misinterpretation of diagrams and graphs, incorrect graph construction, incorrect reasoning about vector direction, circuit topology errors, partial or misleading qualitative explanations, and difficulties applying three-dimensional concepts such as the right-hand rule. These findings suggest that while contemporary AI systems can effectively support routine physics problem solving, they remain limited in tasks requiring spatial reasoning, visual interpretation, and conceptual integration. The results highlight both the instructional potential and the current pedagogical limitations of AI-assisted learning tools in physics education.