There's still skill involved in using an LLM for coding. In this case, o4-mini-high might do the trick, but the easier answer that works with other models is to include the high-level library documentation yourself as context, and the model will then use that API.
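A minimal sketch of that "paste the docs into the prompt" approach. Everything here is hypothetical illustration: `build_prompt`, the doc snippet, and the instruction wording are assumptions, not any particular model's or library's API.

```python
def build_prompt(library_docs: str, task: str) -> str:
    """Prepend real library documentation so the model is steered
    toward the actual API instead of a hallucinated one.
    (Hypothetical helper for illustration only.)"""
    return (
        "You are a coding assistant. Use only the API described in the "
        "documentation below.\n\n"
        "=== LIBRARY DOCUMENTATION ===\n"
        f"{library_docs}\n"
        "=== END DOCUMENTATION ===\n\n"
        f"Task: {task}\n"
    )

# Hypothetical doc snippet for some plotting library.
docs = "plot(x, y, style='-') -> Figure  # draws a line chart"
prompt = build_prompt(docs, "Plot y = x**2 for x in 0..10")
print(prompt)
```

The point is only that the documentation travels with the request; whatever chat client you use, the assembled string goes in as (part of) the user or system message.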
What, besides anecdote, makes you think a different model will be anything more than marginally better?