I tried both OpenAI Deep Research against a GitHub repo (their new feature) and Gemini (I dumped all of OpenHands' code in there; it's about 500,000 tokens, so it fits in context). IMO, Gemini reasoned slightly better and was much faster at helping me make sense of the codebase.
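For reference, a minimal sketch of the "dump the repo into context" step I used. The `dump_repo` helper and the file suffixes are my own illustration, and tiktoken's `cl100k_base` encoding is only a rough proxy for Gemini's tokenizer (an assumption), but it lands in the same ballpark:

```python
# Concatenate a repo's source into one blob and estimate its token count.
# tiktoken's cl100k_base is a proxy tokenizer (assumption); Gemini's actual
# tokenizer will count somewhat differently.
from pathlib import Path
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def dump_repo(root: str, suffixes=(".py", ".md", ".toml")) -> str:
    """Concatenate source files with path headers so the model can cite them."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            parts.append(f"### {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

blob = dump_repo("OpenHands")  # local checkout path (assumption)
print(f"~{len(enc.encode(blob)):,} tokens")  # came out around 500k for me
```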
I think "Ask OpenHands" would need to be at least as fast and accurate as Gemini for it to be worthwhile(?) Wondering what a *-bench for this use-case might be
Might also be an incentive to keep the OpenHands codebase small(er)(?)
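To make the *-bench musing concrete, here's a hypothetical sketch of what a codebase-Q&A micro-bench could measure: answer latency plus whether answers mention the right symbols. The `ask()` callable is a stand-in (assumption) for whatever backs "Ask OpenHands", Gemini, or Deep Research, and the test cases are illustrative, not a real question set:

```python
# Hypothetical "codebase Q&A" micro-bench: time each answer and check it
# mentions the identifiers a grounded answer should cite.
import time
from typing import Callable

CASES = [
    # (question, identifiers a grounded answer should mention) - illustrative
    ("Where is the agent loop implemented?", ["AgentController"]),
    ("How are runtime sandboxes launched?", ["Runtime", "docker"]),
]

def run_bench(ask: Callable[[str], str]) -> None:
    hits, latencies = 0, []
    for question, expected in CASES:
        start = time.perf_counter()
        answer = ask(question)
        latencies.append(time.perf_counter() - start)
        hits += all(token.lower() in answer.lower() for token in expected)
    print(f"accuracy {hits}/{len(CASES)}, "
          f"median latency {sorted(latencies)[len(latencies) // 2]:.1f}s")
```

Keyword matching is crude; a real bench would probably want a grader model or reference answers, but even this would let you compare tools on the same questions.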
I think there are two use cases:
• discoverability
• targeted inquiry
There's a bootstrapping challenge with the whole search/prompt paradigm: you don't know where to begin. Laying it all out the way DeepWiki does provides a kind of high-level map for starting to make sense of things.
My newbie impression is that DeepWiki currently feels more credible than the user docs, per the UX honeycomb (https://medium.com/mytake/the-ux-honeycomb-seven-essential-considerations-for-developers-accc372a398c). Nonetheless, there are additional dimensions to consider here that would help make OpenHands feel more valuable to end users and developers.