The purpose of this document is to provide essential information about the optional AI features available in the Gnatta product. These tools use GPT models provided through the Azure OpenAI Service. This policy outlines acceptable use, data storage, limitations, and frequently asked questions. By enabling any of these AI features, you acknowledge that you have read, understood, and agree to this document.
Acceptable Use
- Azure OpenAI Service Logging: To monitor for and prevent harmful use, Microsoft stores all input data, encrypted, for up to 30 days. If prompts are flagged by Microsoft’s abuse monitoring system, authorised Microsoft human reviewers may access the relevant prompts and completions.
- Code of Conduct: Microsoft’s code of conduct for the Azure OpenAI Service outlines the terms of service and acceptable usage. It can be viewed here: Code of Conduct for the Azure OpenAI Service.
- Right to Disable: We reserve the right to disable access to the AI features in the event of a breach of acceptable use or of operational issues.
Overuse Management
- Usage Monitoring: We monitor usage levels to ensure fair access and service sustainability.
- Suspension: AI services will be suspended when usage exceeds a reasonable threshold, determined by the cost of provision relative to the client’s payment.
- Optional Overuse Provision: Clients can opt into an overuse provision, allowing continued access beyond the threshold, with additional charges applied in arrears.
- Fair Usage Compliance: Excessive or unreasonable overuse beyond agreed terms may result in further service restrictions or termination.
Workmate Fair Use Policy
The Workmate product is designed to enhance agent productivity by automating and accelerating common customer service tasks. To maintain service quality and ensure fair access for all users, the following fair use policy applies:
Daily Usage Limit:
Each licensed seat includes an allowance of up to 800 actions per day, per seat or user (depending on the commercial arrangement in place). An “action” is a discrete interaction with the Workmate system, such as generating a reply, summarising a thread, or performing a task through an AI-powered tool.
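Purely for illustration, the daily allowance described above can be thought of as a per-seat counter that resets each day. The sketch below is a hypothetical model of that rule, not Gnatta’s actual implementation; the class and method names are assumptions.

```python
from datetime import date

DAILY_ACTION_LIMIT = 800  # fair-use allowance per seat, per day


class SeatUsage:
    """Hypothetical model of one seat's daily Workmate action allowance."""

    def __init__(self) -> None:
        self.day = date.today()
        self.count = 0

    def record_action(self) -> bool:
        """Record one action; return False once the daily allowance is exhausted."""
        today = date.today()
        if today != self.day:  # a new day resets the counter
            self.day, self.count = today, 0
        if self.count >= DAILY_ACTION_LIMIT:
            return False  # over the fair-use threshold for today
        self.count += 1
        return True
```

Under this model, the 801st action on a given day would be refused (or, in practice, flagged for the usage-monitoring and commercial discussions described below).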
Usage Monitoring:
We actively monitor the number of actions performed per seat. If usage consistently exceeds the fair use threshold, we will engage with you to understand the use case and, where appropriate, propose a revised commercial agreement or additional seat licences.
Automated Abuse Protection:
To protect system integrity and performance, we reserve the right to throttle or temporarily disable Workmate features for users exceeding fair usage in a way that impacts the performance of the system for others.
Right to Review:
We may review usage patterns to detect excessive or automated use beyond the scope of normal operations. Such reviews may result in recommendations, commercial discussions, or service restrictions to prevent abuse.
Optional High-Usage Tiers:
If your team requires higher daily action volumes, enhanced tiers of usage can be enabled subject to commercial agreement. Please contact your Service Delivery Manager to explore these options.
Supplier-Driven Pricing Changes:
The Workmate platform integrates with and depends upon third-party AI and automation technologies. In the event that a downstream supplier materially changes their pricing model, we reserve the right to revise our pricing accordingly. We will provide advance notice of any such changes and will work with affected customers to ensure transparency and continuity wherever possible.
Data Storage
Conversation context is stored in Azure SQL in the North Europe region for a period of up to 7 days. All context and input data is encrypted, both in transit and at rest.
Limitations
Like any advanced language model, the AI features have limitations. They aim to provide reliable outputs, but some data biases may still be present, and the models may occasionally generate responses that sound plausible but are incorrect or misleading. Common-sense reasoning and domain-specific knowledge can be challenging areas, though the system is continually improving.
Important Notice: The content generated by AI features is not owned by Gnatta, and we are not responsible for the output. Clients should use the generated content at their own risk. If clients choose to send AI-generated content directly to customers (without review by an agent), Gnatta is not liable for any outcomes resulting from such actions. Additionally, clients acknowledge that liability for errors, omissions, or other content-related issues rests solely with them.
Beta Products: Any beta AI features may be modified or disabled without prior notice. Clients should be aware that access to beta products is provided on an as-is basis and may be subject to change or discontinuation.
Frequently Asked Questions
- How do I enable or disable the AI features? These tools are optional add-ons to your Gnatta account. To request access, please contact your account manager or our support team.
- Is any of our data used in training or available to other Gnatta clients? No. Data used to generate AI outputs is not accessible outside of the conversation in which it is used. Microsoft’s full policy on how this data is handled can be found here: Data, privacy, and security for Azure OpenAI Service.