anon373839 6 days ago

The Berkeley Function Calling Leaderboard [1] might be of interest to you. As of now, Hammer2.1-3b looks like the strongest model under 7 billion parameters; its overall score is ~82% of GPT-4o's. There is also Hammer2.1-1.5b at 1.5 billion parameters, which scores ~76% of GPT-4o's.

[1] https://gorilla.cs.berkeley.edu/leaderboard.html

refulgentis 6 days ago

Worth noting:

- Those are single-turn scores: on multi-turn, 4o is 3x as good as the 3B

- BFCL is mostly about turning natural language into a single API call; the multi-turn tests then involve making further API calls in sequence

- I hope to inspire work towards an open model that can eat the paid models sooner rather than later

- one trained quite specifically on an agent loop with tools like read_files and edit_file (you'll probably also want at least read_directory and get_shared_directories; search_filenames and search_files_text are good too), bonus points for cli_command (rough sketch after this list)

- IMHO, this is much lower-hanging fruit than e.g. training an open computer-vision model, so I beseech thee, intrepid ML-understander, to fill this gap and hear your name resound throughout the ages
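
For concreteness, here's a minimal Python sketch of the tool surface I mean. The tool names come from the list above; the schemas, the model callable, and the loop itself are just illustrative assumptions, not any particular model's training format:

    import json
    import subprocess
    from pathlib import Path

    # Hypothetical tool registry matching the names mentioned above.
    TOOLS = {
        "read_files": lambda paths: {p: Path(p).read_text() for p in paths},
        "edit_file": lambda path, content: Path(path).write_text(content),
        "read_directory": lambda path: sorted(p.name for p in Path(path).iterdir()),
        "get_shared_directories": lambda: [str(Path.cwd())],  # assumption: roots the agent may touch
        "search_filenames": lambda root, pattern: [str(p) for p in Path(root).rglob(pattern)],
        "search_files_text": lambda root, needle: [
            str(p) for p in Path(root).rglob("*")
            if p.is_file() and needle in p.read_text(errors="ignore")
        ],
        "cli_command": lambda command: subprocess.run(
            command, shell=True, capture_output=True, text=True
        ).stdout,
    }

    def run_agent_loop(model, task, max_turns=10):
        """Multi-turn loop: the model returns either a tool call
        ({"tool": ..., "arguments": {...}}) or a final answer; tool
        results are serialized and fed back until it answers directly."""
        messages = [{"role": "user", "content": task}]
        for _ in range(max_turns):
            reply = model(messages)  # any callable standing in for the LLM
            if reply.get("tool") is None:
                return reply["content"]  # final natural-language answer
            result = TOOLS[reply["tool"]](**reply["arguments"])
            messages.append({"role": "tool", "content": json.dumps(result, default=str)})
        return "max turns exceeded"

The point being: the whole contract is a handful of filesystem tools plus a shell escape, which is why this feels like low-hanging fruit for a small open model.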