Overview:
The widget currently supports three AI providers:
* Google's Gemini [Online]
* OpenAI's GPT-4 [Online]
* Ollama [Local LLMs]
🔹 Analyse This Request
- Examines request details, including requester sentiment, for a detailed breakdown.
- Identifies key information, priorities, and potential concerns.
- Helps technicians quickly understand complex requests.

🔹 Come Up With a Resolution Plan
- Generates a structured, step-by-step resolution strategy based on request context and best practices.
- Provides time estimates and resource requirements when applicable.
- Helps standardize resolution approaches across teams.

🔹 Ask AI Anything About This Request
- Enables natural language queries for contextual answers based on request data.
- Clarifies complex aspects or technical details.
- Supports decision-making with AI-powered insights.


🔹 Analyse an Image (Supported with Gemini only)
- Users can upload images and request AI assistance.
- AI analyzes the image in the context of the request details.
- Combines visual and textual data for a more complete understanding, helping technicians assess issues quickly.


🔹 Check for Relevant Solutions within ServiceDesk Plus (Supported with Gemini only)
- Searches ServiceDesk Plus (SDP) for existing solutions accessible to the technician.
- Identifies relevant knowledge articles, reducing resolution time.

🔹 Generate a Post-Incident Review (PIR) for This Request
- Creates detailed post-incident reports with:
  - Incident timeline
  - Impact analysis
  - Resolution steps
- Captures key metrics and lessons learned.
- Export as PDF option for easy report download.

Configuring the AI Providers
1️⃣ Download and Extract
2️⃣ Gemini Configuration
- The Gemini provider is configured in the config.json file, with a dedicated configuration block for each provider.
3️⃣ Obtaining the Gemini API Key
- Visit AI Studio and sign in with your Google credentials.
- Click Get API Key (top-left corner).
- Click Create API Key to generate your unique key.

After generating the API key:
- Unzip the widget files.
- Open config.json in a text editor.
- Paste the API key as the value for Gemini's API_KEY parameter.
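
As an illustration, the Gemini block in config.json might look like the sketch below. Only the API_KEY parameter is named in this guide; the surrounding key names and structure are placeholders, so match them to the actual file shipped with the widget:

```json
{
  "GEMINI": {
    "API_KEY": "<your-gemini-api-key>"
  }
}
```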

After updating the API key in config.json:
- Zip all the widget files again.
- Upload the ZIP file under Admin → Developer Space → Custom Widget.
This will enable Gemini as the AI provider for the widget.
Implementation Details
Model Used: Gemini 1.5 Flash
Capabilities:
✅ Text analysis
✅ Image analysis
✅ Direct REST API calls
✅ Supports multimodal inputs (text + images)
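
The direct REST call with multimodal input can be sketched as follows. This is a minimal Python illustration of the request shape, based on Google's published Generative Language REST API rather than on code shipped with the widget; the function name and dummy key are illustrative.

```python
import base64
import json


def build_gemini_request(api_key, prompt, image_bytes=None, mime_type="image/png"):
    """Build the URL and JSON body for a Gemini 1.5 Flash generateContent call.

    The endpoint and payload shape follow Google's public Generative
    Language REST API; adapt them if the widget's implementation differs.
    """
    url = (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/gemini-1.5-flash:generateContent?key={api_key}"
    )
    parts = [{"text": prompt}]
    if image_bytes is not None:
        # Images are sent inline as base64 alongside the text prompt --
        # this is what makes the request multimodal (text + image).
        parts.append({
            "inline_data": {
                "mime_type": mime_type,
                "data": base64.b64encode(image_bytes).decode("ascii"),
            }
        })
    body = {"contents": [{"parts": parts}]}
    return url, json.dumps(body)
```

POSTing the returned body to the returned URL (with a Content-Type of application/json) yields the model's analysis in the response's candidates array.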
OpenAI Configuration
The OpenAI provider is configured using ServiceDesk Plus’s DRE connections feature.
Steps to Configure the OpenAI Connection:
1️⃣ Go to: Admin → Developer Space → Connections and click Create Connection.
2️⃣ Create a new service for this connection:
  - Set Service Name and Service Link Name as openAI.
  - Choose Basic Authentication as the authentication type.
3️⃣ Enter Connection Name and Link Name as openAI, then click Create and Connect.
4️⃣ When prompted:
- Leave Username blank ("").
- Enter your OpenAI API Key as the Password.
- To get your API Key, visit: OpenAI API Keys.
- Use an existing key or generate a new secret key.
5️⃣ Click Connect to authorize the connection.
6️⃣ Once authorized, copy the JSON of the connection for further use.

Once you've copied the JSON for the OpenAI connection:
- Go to the widget directory.
- Open the plugin-manifest.json file.
- Paste the copied JSON under the connections array.
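
For orientation, the result in plugin-manifest.json might resemble the sketch below. The field names inside the connection object are placeholders only; use the exact JSON you copied from the connection page, pasted as an element of the connections array:

```json
{
  "connections": [
    {
      "connectionLinkName": "openAI",
      "connectionName": "openAI"
    }
  ]
}
```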

After pasting the JSON in the plugin-manifest.json file:
- Save the file.
- Select and zip all the widget files again.
- Upload the ZIP file under Admin → Developer Space → Custom Widget to enable OpenAI as your AI provider for the widget.
Implementation Details:
- Model used: GPT-4
- Capabilities: Text analysis
- API Endpoint: https://api.openai.com/v1/chat/completions
- System prompt sets context as "helpful assistant that analyzes user requests"
- Messages are structured in chat format with system and user roles
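
The chat-format request described above can be sketched as follows. This is an illustrative Python construction of the payload, not the widget's actual code; the exact system-prompt wording is an assumption based on the description above, and in the widget the request is sent through the DRE connection rather than with a raw API key.

```python
import json

# Endpoint named in this guide.
API_ENDPOINT = "https://api.openai.com/v1/chat/completions"


def build_openai_request(user_prompt):
    """Build the JSON body for the chat completions call described above."""
    body = {
        "model": "gpt-4",
        "messages": [
            # The system role sets the assistant's context...
            {
                "role": "system",
                "content": "You are a helpful assistant that analyzes user requests.",
            },
            # ...and the user role carries the request data or question.
            {"role": "user", "content": user_prompt},
        ],
    }
    return json.dumps(body)
```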
Local LLMs with Ollama:
To run and use LLMs entirely on your local server machine, check out this document.
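
As a taste of what a local call looks like, the sketch below builds a request for Ollama's default local REST endpoint. The model name is illustrative (use whichever model you have pulled with Ollama); refer to the linked document for the full setup:

```python
import json

# Ollama's default local API endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_ollama_request(prompt, model="llama3"):
    """Build the JSON body for a local Ollama generate call.

    The model name here is a placeholder -- substitute the model you have
    installed locally.
    """
    body = {
        "model": model,
        "prompt": prompt,
        # Request a single JSON response instead of a streamed one.
        "stream": False,
    }
    return json.dumps(body)
```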