Azure OpenAI
Overview
Arnica utilizes Azure OpenAI to provide mitigation code samples for code risks, such as SAST and IaC vulnerabilities.
The integration with Azure OpenAI provides the following benefits:
- Enterprise-Grade Infrastructure: As part of Microsoft Azure, Azure OpenAI offers robust cloud infrastructure with the scalability, security, and compliance standards enterprises require.
- Extended Support and SLAs: Azure provides extended support and service level agreements (SLAs) that are crucial for businesses and large-scale applications.
- Customization and Control: Azure OpenAI offers customization and control options tailored for business applications, including private deployments and specific compliance needs.
- Pricing and Billing: Azure consolidates billing for all Azure services, including Azure OpenAI, which simplifies financial management.
Prerequisites
Deploy service resource
Service resources are required to connect to the models they host. Follow Microsoft's guidelines to create and deploy an Azure OpenAI service resource.
IP allowlist
In some cases, customers may want to restrict who can access the deployed resources. If needed, use Arnica's IP addresses, as documented in the Ingress traffic section of the On-Premises integrations page.
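If you manage the allowlist in your own tooling rather than the Azure portal, the check itself reduces to CIDR membership. The sketch below is a minimal illustration using Python's standard `ipaddress` module; the ranges shown are documentation placeholders (TEST-NET), not Arnica's actual egress addresses — substitute the ones from the Ingress traffic section.

```python
import ipaddress

# Placeholder ranges (TEST-NET); substitute Arnica's documented egress IPs.
ALLOWED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_allowed(source_ip: str) -> bool:
    """Return True if source_ip falls inside any allowlisted CIDR range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_RANGES)

print(is_allowed("203.0.113.42"))  # inside the first range -> True
print(is_allowed("192.0.2.1"))     # not allowlisted -> False
```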
Installation process
Set up Azure AI Foundry Project
1. Log in to Azure AI Foundry and click `+ Create new`, then select `Azure AI Foundry Resource`.

2. Name your project. The name must be globally unique; the default is your username followed by a random number. We recommend replacing your username with something such as `arnica-ai-models-` followed by the random number to ensure uniqueness. For example, if the suggested name is `yourusername-1234`, change it to `arnica-ai-models-1234`.

3. Review the project configuration settings. Microsoft configures new Foundry projects with defaults optimized for functionality. We recommend keeping these defaults, though you can modify them if your organization has specific requirements.
4. Click `Create`. Azure will take a few minutes to set up your project.
5. Navigate to the `Azure OpenAI` tab under Libraries and copy the `Azure OpenAI endpoint` and `API Key`. Store these securely. Make sure you've copied the correct endpoint: using the default Azure AI Foundry endpoint will cause configuration issues.
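Because picking the wrong endpoint is an easy mistake, a quick sanity check can help. The sketch below is a hypothetical Python heuristic assuming Azure OpenAI endpoints follow the usual `https://<resource>.openai.azure.com/` pattern, while default Foundry project endpoints are hosted under `services.ai.azure.com`; verify the pattern against your own resource.

```python
from urllib.parse import urlparse

def looks_like_azure_openai_endpoint(url: str) -> bool:
    """Heuristic: Azure OpenAI endpoints are hosted on *.openai.azure.com,
    while default Foundry project endpoints use *.services.ai.azure.com."""
    host = urlparse(url).hostname or ""
    return host.endswith(".openai.azure.com")

print(looks_like_azure_openai_endpoint(
    "https://arnica-ai-models-1234.openai.azure.com/"))                      # True
print(looks_like_azure_openai_endpoint(
    "https://arnica-ai-models-1234.services.ai.azure.com/api/projects/p1"))  # False
```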

Deploy AI Models
1. Navigate to `Models + Endpoints` on the sidebar.

2. Click on `+ Deploy Model`

3. Deploy the models you wish to use:
   1. Search for and select a model you wish to use.
   2. Click `Deploy`.
   3. Repeat this for each model you want to add. We recommend the following models:
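After deploying, you can smoke-test a deployment directly against the Azure OpenAI REST API before wiring it into Arnica. This is a hedged sketch: the endpoint, deployment name, key, and `api-version` are placeholders you should replace with your own values, and `build_chat_request` is a hypothetical helper that just assembles the documented `deployments/{name}/chat/completions` route.

```python
import json
import urllib.request

def build_chat_request(endpoint: str, deployment: str, api_key: str,
                       api_version: str = "2024-02-01") -> urllib.request.Request:
    """Assemble a chat-completions request for one Azure OpenAI deployment."""
    url = (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = json.dumps({
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 5,
    }).encode()
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )

req = build_chat_request("https://arnica-ai-models-1234.openai.azure.com",
                         "gpt-4.1-mini", "YOUR_API_KEY")
print(req.full_url)
# To actually send it: urllib.request.urlopen(req)
```

A non-2xx response here (for example, a 404 on the deployment name) usually means the deployment name or endpoint is wrong, which is worth catching before the Arnica validation step.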

Integrate with Arnica
1. Navigate to the Integrations page in Arnica. Scroll down and, under `Artificial Intelligence`, locate `Azure OpenAI` and click `Connect`.
2. Fill in the endpoint URL and `API key` you copied in the setup above.
3. Under deployment names, make sure that only the models you deployed are selected. For example, if you deployed `gpt-4.1`, `gpt-5-mini`, `gpt-4.1-mini`, and `gpt-5-nano`, remove (click the x next to) `gpt-4o`. Then click `Validate`.
4. If the process was successful, the validation will succeed. Click `OK`.
5. Select the primary AI model and click `Save`.
6. Scroll down and ensure the new integration appears under the `Existing Integrations` list.
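The deployment-name matching during validation boils down to a set comparison: any name selected in Arnica with no matching Azure deployment will fail. A minimal illustration (the model names here are just the examples above, not a recommendation):

```python
deployed = {"gpt-4.1", "gpt-5-mini", "gpt-4.1-mini", "gpt-5-nano"}
configured = {"gpt-4.1", "gpt-5-mini", "gpt-4.1-mini", "gpt-5-nano", "gpt-4o"}

# Names selected in Arnica with no matching Azure deployment would cause
# validation to fail, so remove them before clicking Validate.
missing = configured - deployed
print(sorted(missing))  # ['gpt-4o']
```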
User Experience
Arnica lets users choose when to trigger the Azure OpenAI recommendation request, keeping costs relatively low compared to running it on every finding.
To see a recommendation, navigate to the Code Risks page and click one of the SAST / IaC findings. Click the AI icon in the top-right corner of the details pane; it will spin while the recommendation is generated and validated by Arnica.

The code example recommendation is dynamically generated in the details pane, followed by an explanation of the generated code so that the solution is as clear as possible for the developer or Arnica operator.