Hi JM,
You have a point.
GenAI apps may differ in their security capabilities and guardrails, depending on the nature of the app and the version released.
For example, in earlier versions of ChatGPT, prompt security/guardrails weren't fully enforced compared to the latest releases. Different versions of ChatGPT may therefore offer different levels of prompt security/guardrail capability.
To better secure GenAI usage, a prompt page would be useful to warn users that a certain level of risk may be present when accessing a particular GenAI app/webpage, along with a block page to stop users from accessing high-risk GenAI apps/webpages.
Alternatively, we could approach GenAI app security at the prompt level: inspecting the context of each prompt and checking it for compliance against a defined set of regulations/guardrails.
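To make the idea concrete, here is a rough, purely illustrative sketch of prompt-level inspection. The pattern list and function name are my own placeholders; a real deployment would load its rules from your compliance/guardrail policy set:

```python
import re

# Illustrative guardrail rules only -- real rules would come from a
# maintained policy set, not a hard-coded list.
GUARDRAIL_PATTERNS = [
    r"\b(password|api[_ ]?key|secret)\b",        # possible credential leakage
    r"\bignore (all|previous) instructions\b",   # prompt-injection attempt
]

def inspect_prompt(prompt: str) -> str:
    """Return 'block' if the prompt matches any guardrail rule, else 'allow'."""
    for pattern in GUARDRAIL_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            return "block"
    return "allow"
```

A gateway sitting in front of the GenAI app could run a check like this on each prompt and show the warning or block page when a rule matches.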
Would this work for you and benefit your organization?
Cheers