# success-stories
l
I was curious about how well the “DeepWiki” documentation linked on our page worked, so I asked its “deep research” mode a question:

> When running on Windows with the local runtime and no WSL/Docker, I get the following error when I open a conversation. Please debug the issue and propose a solution:
> [stacktrace]

Then I decided to ask OpenHands the same question. DeepWiki has been thinking for 5 minutes and still hasn’t returned an answer, while OpenHands has already debugged the problem and updated my PR 🙂 I’m thinking we just need a way to add an “Ask OpenHands” link and we can use that instead of DeepWiki.
💯 5
p
I tried both OpenAI Deep Research against a GitHub repo (their new feature) and Gemini (I dumped all of OpenHands' code in there; it's about 500,000 tokens, so Gemini can handle it), and IMO Gemini reasoned slightly better and was much faster at helping me make sense of the codebase. I think "Ask OpenHands" would need to be at least as fast and accurate as Gemini for it to be worthwhile(?) Wondering what a *-bench for this use-case might look like. Might also be an incentive to keep the OpenHands codebase small(er)(?)

I think there are 2 use-cases:
• discoverability
• targeted inquiry

There's a bootstrapping challenge with the whole search/prompt paradigm: you don't know where to begin. Laying everything out the way DeepWiki does provides a kind of high-level map for starting to make sense of things. My newbie impression is that DeepWiki currently feels more credible than the user docs, re: https://medium.com/mytake/the-ux-honeycomb-seven-essential-considerations-for-developers-accc372a398c. Nonetheless, there are additional dimensions to consider here that would help make OpenHands feel more valuable to end-users and developers.
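(FWIW, here's roughly how I got that 500k figure; a minimal sketch assuming tiktoken's cl100k_base encoding as a proxy, since Gemini's actual tokenizer differs, and a hypothetical checkout path:)

```python
# Rough token estimate for a repo before pasting it into a long-context model.
# Assumes tiktoken is installed; cl100k_base is a proxy, not Gemini's tokenizer.
from pathlib import Path

import tiktoken


def estimate_repo_tokens(root: str, exts: tuple[str, ...] = (".py", ".md")) -> int:
    enc = tiktoken.get_encoding("cl100k_base")
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            try:
                total += len(enc.encode(path.read_text(encoding="utf-8")))
            except UnicodeDecodeError:
                continue  # skip binary or oddly encoded files
    return total


print(estimate_repo_tokens("openhands"))  # hypothetical local checkout path
```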
l
Yeah, I think speed is essential for something like this. OpenHands is already pretty fast, but we could make it faster by giving it access to code search.
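Something like this, maybe; a minimal sketch assuming ripgrep (`rg`) is on PATH, with a hypothetical `search_code` function rather than OpenHands' actual tool interface:

```python
# Hypothetical code-search tool an agent could call instead of reading files
# one by one. Shells out to ripgrep (rg), which must be on PATH.
import subprocess


def search_code(pattern: str, repo_dir: str, per_file_limit: int = 50) -> str:
    result = subprocess.run(
        # --max-count caps matches per file, not the total across files
        ["rg", "--line-number", "--max-count", str(per_file_limit), pattern, repo_dir],
        capture_output=True,
        text=True,
    )
    # rg exits with code 1 when there are no matches; that's not an error here
    return result.stdout or "no matches"


print(search_code("LocalRuntime", "openhands"))  # hypothetical usage
```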