Through the looking glass of benchmark hacking

(poolside.ai)

15 points | by jxmorris12 19 hours ago

3 comments

  • pratio 18 minutes ago
    Are you guys affiliated with https://poolside.fm/ or https://poolsuite.net?
  • fsh 43 minutes ago
    I don't get the point. The model has presumably been trained on all public GitHub code, so the evaluation is tainted anyway.
    • adrian_b 8 minutes ago
      A couple of days ago there was another thread about an experiment with many LLMs, in which the Anthropic models in particular were found to "cheat" on a large percentage of the benchmarked coding tasks by searching the Internet for appropriate code and inserting it into the program they were supposed to write.

      The conclusion of that study was that when benchmarking LLMs for coding ability, they should not have access to the Internet if you want to measure their intrinsic abilities.

      Moreover, this is worrisome as a more direct form of copyright infringement than the one caused by training: if a model finds open source code on the Internet and inserts it into the generated files, that code almost certainly carried a license that prohibits removing the copyright notice.

  • schnitzelstoat 1 hour ago
    It was an interesting read - perhaps I misunderstood the part about blocking GitHub, but isn't it possible to just block the model from accessing that specific repo?
    • changoplatanero 1 hour ago
      In theory, yes, blocking a specific repo is possible. In practice it's more difficult, because the repo could be cloned under different names, and you might have hundreds of training tasks to configure this for. Verifying that you blocked them one by one would be a lot of work.
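
      The name-independence problem described above suggests matching on file contents rather than repo names. As a minimal sketch (not anything poolside describes; the function names and 50% overlap threshold are illustrative assumptions), one could fingerprint a benchmark repo's files and flag any candidate repo whose contents largely overlap, even if the repo and its files were renamed:

      ```python
      import hashlib
      from pathlib import Path

      def content_fingerprints(repo_dir: str) -> set[str]:
          """Hash every file's contents so a clone is recognizable
          even if the repo or its files were renamed."""
          fingerprints = set()
          for path in Path(repo_dir).rglob("*"):
              if path.is_file():
                  fingerprints.add(hashlib.sha256(path.read_bytes()).hexdigest())
          return fingerprints

      def looks_like_clone(candidate_dir: str, known: set[str],
                           threshold: float = 0.5) -> bool:
          """Flag a repo when at least `threshold` of its files match
          a known benchmark repo's contents (threshold is arbitrary)."""
          candidate = content_fingerprints(candidate_dir)
          if not candidate:
              return False
          return len(candidate & known) / len(candidate) >= threshold
      ```

      This still misses clones with trivially modified files (reformatted, renamed identifiers), which is part of why verifying the blocking is hard.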