Interesting. I think OpenAI here uses sparse autoencoders to map out sparse activation patterns in networks, comparing them to how a real person reasons about a situation.
Inspectus, on the other hand, is a general tool to visualize how transformer models pay attention to different parts of the data they process.
I'm not a primary user; I just cleaned up the existing codebase to make it open source. But you could use it to visualise attention and debug the model.
For example, if you're working on a Q&A model, you can check which tokens in the prompt contributed to the output. That makes it possible to spot issues like the output not attending to any important part of the prompt.
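Roughly something like this (a sketch from memory, so check the Inspectus README for the exact call signature; it assumes a HuggingFace model that returns attention weights):

```python
# Sketch: visualize which prompt tokens a GPT-2 style model attends to.
# The inspectus.attention call is from memory and may differ slightly.
import torch
import inspectus
from transformers import AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "The capital of France is"
inputs = tokenizer(text, return_tensors="pt")

# Ask the model to return attention matrices alongside the logits.
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Renders an interactive attention map (token-to-token) in a notebook.
inspectus.attention(outputs.attentions, tokens)
```

If the output mostly attends to filler tokens instead of the part of the prompt that actually carries the answer, that's usually a sign something is off.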
Ambiguity around usage plagues lots of OSS projects. Guides/tutorials always help drive adoption much more; just look at the usage of GPT-3 vs ChatGPT (which is essentially GPT-3.5 with a web UI slapped on top of it).