I wish I could; unfortunately, this is as much as the author of the experiment shared publicly. I'd bet some interesting prompt engineering went into the setup. I believe he loaded the OpenAI API documentation into the prompt: https://platform.openai.com/docs/api-reference/authentication?lang=python. I also believe the author withheld the full code to prevent an actual backdoor breach of the system. That said, we should take all of this with a grain of salt. When I ran similar tests, I didn't get nearly this much information — either the loophole has been closed or something else happened. I think OpenAI is aware that people are constantly trying to break in… Sam Altman has said that nobody will be able to get into the core via the API — he has yet to be proven wrong.
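For context, the authentication docs linked above boil down to sending a Bearer token in the `Authorization` header. Here's a minimal stdlib-only sketch of that (the `/v1/models` endpoint and the `OPENAI_API_KEY` environment variable are just illustrative assumptions — the experiment's actual setup wasn't shared):

```python
import os
import urllib.request

# Sketch of what the linked authentication docs describe: a Bearer token
# in the Authorization header. The key is assumed to live in OPENAI_API_KEY;
# a placeholder is used here since this example never sends the request.
api_key = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

req = urllib.request.Request(
    "https://api.openai.com/v1/models",  # illustrative endpoint choice
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
)

# We only build the request; actually dispatching it needs a valid key.
print(req.get_header("Authorization").startswith("Bearer "))
```

Nothing about this grants any special access, of course — it's the same authenticated front door everyone uses, which is part of why the original claim deserves skepticism.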