
[Feature Request]: Request to Include Pair-wise Evaluation Prompts for Metrics Mentioned in the Paper #1475

Open
baptvit opened this issue Dec 6, 2024 · 0 comments
Labels
enhancement New feature or request

baptvit commented Dec 6, 2024

Do you need to file an issue?

  • I have searched the existing issues and this feature is not already filed.
  • My model is hosted on OpenAI or Azure. If not, please look at the "model providers" issue and don't file a new one here.
  • I believe this is a legitimate feature request, not just a question. If this is a question, please use the Discussions area.

Is your feature request related to a problem? Please describe.

Hi everyone,

I explored the repository but couldn't locate the prompts used for the pair-wise evaluations described in the paper, specifically for the metrics Comprehensiveness, Diversity, Empowerment, and Directness.

These prompts would be incredibly valuable for adapting and implementing these metrics in a separate GraphRAG project I’m working on.

Could you kindly add the evaluation prompts used for these metrics to the repository or provide guidance on where to find them?
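To illustrate what I have in mind, here is a rough sketch of the kind of pair-wise judge prompt I would expect for these metrics. The criterion descriptions are my own paraphrase and the prompt wording is hypothetical, not the actual prompts from the paper:

```python
# Hypothetical sketch of a pair-wise (LLM-as-judge) evaluation prompt.
# Criterion descriptions are paraphrased; this is NOT the paper's actual prompt.

CRITERIA = {
    "Comprehensiveness": "How much detail does the answer provide to cover all aspects of the question?",
    "Diversity": "How varied and rich is the answer in providing different perspectives on the question?",
    "Empowerment": "How well does the answer help the reader understand and make informed judgements about the topic?",
    "Directness": "How specifically and clearly does the answer address the question?",
}

def build_pairwise_prompt(question: str, answer_1: str, answer_2: str, criterion: str) -> str:
    """Build a judge prompt comparing two answers on a single criterion."""
    description = CRITERIA[criterion]
    return (
        f"You are comparing two answers to the same question on the criterion of "
        f"{criterion}: {description}\n\n"
        f"Question: {question}\n\n"
        f"Answer 1: {answer_1}\n\n"
        f"Answer 2: {answer_2}\n\n"
        'Which answer is better on this criterion? Reply with exactly '
        '"Answer 1", "Answer 2", or "Tie".'
    )

# Example: the returned string would be sent to a judge model, once per criterion.
prompt = build_pairwise_prompt(
    "What are the main themes in the dataset?",
    "First candidate answer...",
    "Second candidate answer...",
    "Comprehensiveness",
)
```

The actual prompts presumably also pin down tie-breaking and output formatting; having the exact wording in the repository is what this request is about.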

Thank you for your attention to this request!

Best regards,

Describe the solution you'd like

No response

Additional context

No response

@baptvit baptvit added the enhancement New feature or request label Dec 6, 2024