Increasing the model complexity can make things worse (this should be clear when you observe a big discrepancy between your training score and your test score).
Let's say your data is pretty complex. You overfit so much that you get an error rate of 0. You can imagine that your model is now "perfect" for that specific set of data. However, when you try it on real-world data (that's the goal, after all), suddenly your model is not so well tailored, and it actually performs badly.
What likely happened is that, as you increased complexity, your model started describing the noise and the outliers instead of the underlying pattern.
Here you can see that the black decision boundary is the desirable solution. An overly complex model will produce a curve similar to the green one: a lower training error rate, but no ability to generalize to other data.
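You can see this effect numerically with a quick sketch (the setup is my own illustration, not from any specific dataset): fit polynomials of low and very high degree to noisy samples of a simple function, then compare training error against error on fresh points from the same function. The degrees and noise level here are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy training samples from a simple underlying function,
# standing in for "the specific set of data" we overfit to.
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)

# Fresh noise-free points from the same function, standing in
# for the "real world data" the model must generalize to.
x_test = np.linspace(0.02, 0.98, 200)
y_test = np.sin(2 * np.pi * x_test)

def rms_errors(degree):
    """Fit a polynomial of the given degree; return (train, test) RMS error."""
    coefs = np.polyfit(x_train, y_train, degree)
    train_err = np.sqrt(np.mean((np.polyval(coefs, x_train) - y_train) ** 2))
    test_err = np.sqrt(np.mean((np.polyval(coefs, x_test) - y_test) ** 2))
    return train_err, test_err

simple_train, simple_test = rms_errors(3)     # modest complexity
complex_train, complex_test = rms_errors(14)  # degree 14 can interpolate all 15 points

print(f"degree  3: train={simple_train:.4f}  test={simple_test:.4f}")
print(f"degree 14: train={complex_train:.4f}  test={complex_test:.4f}")
```

The degree-14 fit drives the training error toward zero by chasing the noise, while its error on the fresh test points is worse than the modest degree-3 fit, which is exactly the green-curve-versus-black-curve situation described above.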