Configuring the LLM provider settings

After you install the Konveyor extension in Visual Studio (VS) Code, you must provide your large language model (LLM) credentials to activate the Konveyor AI settings.

Konveyor AI settings are applied to all AI-assisted analysis that you perform by using the Konveyor extension. The extension settings can be broadly categorized into debugging and logging, Konveyor AI settings, analysis settings, and Solution Server settings.

Prerequisites

In addition to the overall prerequisites, you have configured the following:

  • If you opt to use the Solution Server, you have completed the Solution Server configuration in the Tackle custom resource.

Procedure

  1. Go to the Konveyor AI settings in one of the following ways:

    1. Click Extensions > Konveyor Extension for VSCode > Settings

    2. Press Ctrl+Shift+P (Windows/Linux) or Cmd+Shift+P (macOS) to open the Command Palette and enter Preferences: Open Settings (UI). Go to Extensions > Konveyor to open the settings page.

  2. Configure the following settings:

    • Log level: Sets the log level for the Konveyor binary. The default log level is debug. The log level increases or decreases the verbosity of the logs.

    • Analyzer path: Specifies a custom path to the Konveyor binary. If you do not provide a path, the Konveyor extension uses the default binary path.

    • Auto Accept on Save: Enabled by default. When you accept the changes suggested by the LLM, the updated code is saved automatically in a new file. Disable this option if you want to save the new file manually after accepting the suggested code changes.

    • Gen AI: Enabled: Enabled by default. Enables you to get code fixes by using Konveyor AI with a large language model.

    • Gen AI: Agent mode: Enables the experimental Agentic AI flow for analysis. Konveyor runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, Konveyor AI applies the changes to the code and re-analyzes the file.

    • Gen AI: Excluded diagnostic sources: Adds diagnostic sources to the settings.json file. Issues generated by these diagnostic sources are excluded from the automated Agentic AI analysis.

    • Cache directory: Specifies the path to a directory in your file system for storing cached responses from the LLM.

    • Trace directory: Specifies the absolute path to the directory that contains the saved LLM interactions.

    • Trace enabled: Enables tracing of Konveyor communication with the LLM. Traces are stored in the trace directory that you configured.

    • Demo mode: Runs Konveyor AI in demo mode, which uses the LLM responses saved in the cache directory for analysis.

    • Solution Server: URL: Edits the Solution Server configuration.

    • Debug: Webview: Enables debug-level logging for Webview message handling in VS Code.
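
Several of these options can also be set directly in settings.json (VS Code allows comments in this file). The setting keys below are assumptions shown for illustration only; check the Konveyor extension's settings UI for the authoritative names:

```json
{
  // Hypothetical key names for illustration; verify the exact
  // identifiers in the Konveyor extension settings page.
  "konveyor.logLevel": "debug",
  "konveyor.genai.enabled": true,
  "konveyor.genai.agentMode": false,
  "konveyor.genai.excludedDiagnosticSources": ["eslint"],
  "konveyor.genai.cacheDir": "/home/user/.konveyor/cache",
  "konveyor.genai.traceDir": "/home/user/.konveyor/traces",
  "konveyor.genai.traceEnabled": true
}
```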

Solution Server configuration:

  • “enabled”: Enter a boolean value. Set to true to connect the Solution Server client (the Konveyor extension) to the Solution Server.

  • “url”: Configure the URL of the Solution Server endpoint.

  • “auth”: The authentication settings allow you to configure a list of options for authenticating to the Solution Server.

    • “enabled”: Set to true to enable authentication. If you enable authentication, you must configure the Solution Server realm.

    • “insecure”: Set to true to skip SSL certificate verification when clients connect to the Solution Server. Set to false to require verified SSL certificates for connections to the Solution Server.

    • “realm”: Enter the name of the Keycloak realm for the Solution Server. If you enabled authentication for the Solution Server, you must configure a Keycloak realm so that clients can connect to the Solution Server. An administrator can configure SSL for the realm.
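
Putting the fields above together, a Solution Server block in settings.json might look like the following sketch. The field names (“enabled”, “url”, “auth”, “insecure”, “realm”) come from the list above; the top-level key, URL, and realm values are placeholder assumptions:

```json
{
  // "konveyor.solutionServer" is a hypothetical top-level key;
  // the URL and realm values are placeholders for your environment.
  "konveyor.solutionServer": {
    "enabled": true,
    "url": "https://solution-server.example.com",
    "auth": {
      "enabled": true,
      "insecure": false,
      "realm": "konveyor"
    }
  }
}
```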