eval-callback : add support for saving logits #18281
base: master
Conversation
This commit adds support for saving logits to files in the evaluation callback example. Two files are stored: a binary file and a text file for manual inspection.

Two options have been added to this example:

```console
----- example-specific params -----

--save-logits                 save final logits to files for verification (default: false)
--logits-output-dir PATH      directory for saving logits output files (default: data)
```

The motivation for this change (and follow-up changes) is to replace llama-logits in examples/model-conversion, which stores logits so they can be compared to the original model's logits.

Future commits will add more of the features that are currently in llama-logits, like printing the prompt and token ids, and will also enhance this example to store the token ids alongside the logits so that they too can be compared as part of the verification process.
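For context, below is a minimal sketch of what writing the final logits to the two files could look like. It is not the PR's actual code; the helper name `save_logits` and the `logits.bin`/`logits.txt` file names are assumptions for illustration only.

```cpp
// Sketch only: dump a vector of logits to a raw binary file (for programmatic
// comparison) and to a text file (for manual inspection). File names and the
// helper name are illustrative, not taken from the PR.
#include <cstdio>
#include <string>
#include <vector>

static bool save_logits(const std::vector<float> & logits, const std::string & out_dir) {
    const std::string bin_path = out_dir + "/logits.bin"; // raw float32 values
    const std::string txt_path = out_dir + "/logits.txt"; // one "index: value" line per logit

    FILE * f_bin = std::fopen(bin_path.c_str(), "wb");
    if (!f_bin) {
        return false;
    }
    std::fwrite(logits.data(), sizeof(float), logits.size(), f_bin);
    std::fclose(f_bin);

    FILE * f_txt = std::fopen(txt_path.c_str(), "w");
    if (!f_txt) {
        return false;
    }
    for (size_t i = 0; i < logits.size(); ++i) {
        std::fprintf(f_txt, "%zu: %.6f\n", i, logits[i]);
    }
    std::fclose(f_txt);

    return true;
}
```

In the eval-callback example the final logits would presumably come from something like `llama_get_logits_ith()` after decoding the prompt, and with the new flags an invocation would look roughly like `llama-eval-callback -m model.gguf -p "Hello" --save-logits --logits-output-dir data` (binary name assumed from the usual example naming).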
I didn't commit it, but I had a patch that did this for I think that now
Also, I'll prioritize #17914 since it's relevant to this as well.
My feeling is that llama-completion is already doing a lot, and having something separate and more focused would be nice for things like model verification. But obviously if the majority think we should remove eval-callback then we should. I'll leave this open for a bit to allow others to chime in.
@danbev I mean, I feel like What I mean is that if we want some debugging functionality, then I feel it really must address mechanisms such as chunking and the autoregressive pass, which are already supported under
I think the main problem with What I'm thinking is that we can re-group
I like the sound of this and I think this would be useful for model verification. @pwilkin I see your point here, and perhaps we should have some similar functionality, or a subset of it, for llama-completion as well. Let's leave this open over the holidays to get some more input and then proceed.