azure-ai-language-conversations
Microsoft Azure Conversational Language Understanding Client Library for Python
Conversational Language Understanding (CLU for short) is a cloud-based conversational AI service that provides language understanding capabilities such as extracting intents and entities from conversations, orchestrating requests across your language apps, and summarizing conversations.
Source code | Package (PyPI) | Package (Conda) | API reference documentation | Samples | Product documentation | REST API documentation
Install the Azure Conversations client library for Python with pip:
pip install azure-ai-language-conversations
Note: This version of the client library defaults to the 2023-04-01 version of the service.
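If you need to pin the service version explicitly rather than rely on that default, the client constructors accept an api_version keyword argument. A minimal sketch (the value shown is simply the default named above):

from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

# Pin the service API version instead of relying on the library default.
client = ConversationAnalysisClient(
    "https://<my-custom-subdomain>.cognitiveservices.azure.com/",
    AzureKeyCredential("<api-key>"),
    api_version="2023-04-01",
)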
In order to interact with the CLU service, you'll need to create an instance of the ConversationAnalysisClient class or the ConversationAuthoringClient class. You will need an endpoint and an API key to instantiate a client object. For more information regarding authenticating with Cognitive Services, see Authenticate requests to Azure Cognitive Services.
You can get the endpoint and an API key from the Cognitive Services resource in the Azure Portal.
Alternatively, use the Azure CLI command shown below to get the API key from the Cognitive Service resource.
az cognitiveservices account keys list --resource-group <resource-group-name> --name <resource-name>
Once you've determined your endpoint and API key you can instantiate a ConversationAnalysisClient:
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api-key>")
client = ConversationAnalysisClient(endpoint, credential)
Once you've determined your endpoint and API key you can instantiate a ConversationAuthoringClient:
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations.authoring import ConversationAuthoringClient
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api-key>")
client = ConversationAuthoringClient(endpoint, credential)
To use an Azure Active Directory (AAD) token credential, provide an instance of the desired credential type obtained from the azure-identity library. Note that regional endpoints do not support AAD authentication. Create a custom subdomain name for your resource in order to use this type of authentication.
Authentication with AAD requires some initial setup: install the azure-identity package, register a new AAD application, and grant that application access to the Language service by assigning an appropriate role to your service principal.
After setup, you can choose which type of credential from azure.identity to use. As an example, DefaultAzureCredential can be used to authenticate the client:
Set the values of the client ID, tenant ID, and client secret of the AAD application as environment variables: AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET.
Use the returned token credential to authenticate the client:
from azure.ai.language.conversations import ConversationAnalysisClient
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = ConversationAnalysisClient(endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/", credential=credential)
The ConversationAnalysisClient is the primary interface for making predictions using your deployed Conversations models. For asynchronous operations, an async ConversationAnalysisClient is in the azure.ai.language.conversations.aio namespace.
You can use the ConversationAuthoringClient to interface with the Azure Language Portal to carry out authoring operations on your language resource/project. For example, you can use it to create a project, populate it with training data, train, test, and deploy. For asynchronous operations, an async ConversationAuthoringClient is in the azure.ai.language.conversations.authoring.aio namespace.
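As a quick sketch of the asynchronous API (reusing the environment variables from the synchronous examples below; the async client mirrors the synchronous API, but its operations are awaitable and the client should be closed, for example via an async context manager):

import os
import asyncio
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations.aio import ConversationAnalysisClient

async def main():
    endpoint = os.environ["AZURE_CONVERSATIONS_ENDPOINT"]
    key = os.environ["AZURE_CONVERSATIONS_KEY"]
    # Close the client automatically when the block exits.
    async with ConversationAnalysisClient(endpoint, AzureKeyCredential(key)) as client:
        result = await client.analyze_conversation(
            task={
                "kind": "Conversation",
                "analysisInput": {
                    "conversationItem": {
                        "id": "1",
                        "participantId": "1",
                        "text": "Send an email to Carol about the tomorrow's demo",
                    }
                },
                "parameters": {
                    "projectName": os.environ["AZURE_CONVERSATIONS_PROJECT_NAME"],
                    "deploymentName": os.environ["AZURE_CONVERSATIONS_DEPLOYMENT_NAME"],
                },
            }
        )
        print(result["result"]["prediction"]["topIntent"])

asyncio.run(main())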
The azure-ai-language-conversations client library provides both synchronous and asynchronous APIs.
The following examples show common scenarios using the client created above.
If you would like to extract custom intents and entities from a user utterance, you can call the client.analyze_conversation() method with your conversation's project name as follows:
# import libraries
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient
# get secrets
clu_endpoint = os.environ["AZURE_CONVERSATIONS_ENDPOINT"]
clu_key = os.environ["AZURE_CONVERSATIONS_KEY"]
project_name = os.environ["AZURE_CONVERSATIONS_PROJECT_NAME"]
deployment_name = os.environ["AZURE_CONVERSATIONS_DEPLOYMENT_NAME"]
# analyze query
client = ConversationAnalysisClient(clu_endpoint, AzureKeyCredential(clu_key))
with client:
query = "Send an email to Carol about the tomorrow's demo"
result = client.analyze_conversation(
task={
"kind": "Conversation",
"analysisInput": {
"conversationItem": {
"participantId": "1",
"id": "1",
"modality": "text",
"language": "en",
"text": query
},
"isLoggingEnabled": False
},
"parameters": {
"projectName": project_name,
"deploymentName": deployment_name,
"verbose": True
}
}
)
# view result
print("query: {}".format(result["result"]["query"]))
print("project kind: {}\n".format(result["result"]["prediction"]["projectKind"]))
print("top intent: {}".format(result["result"]["prediction"]["topIntent"]))
print("category: {}".format(result["result"]["prediction"]["intents"][0]["category"]))
print("confidence score: {}\n".format(result["result"]["prediction"]["intents"][0]["confidenceScore"]))
print("entities:")
for entity in result["result"]["prediction"]["entities"]:
print("\ncategory: {}".format(entity["category"]))
print("text: {}".format(entity["text"]))
print("confidence score: {}".format(entity["confidenceScore"]))
if "resolutions" in entity:
print("resolutions")
for resolution in entity["resolutions"]:
print("kind: {}".format(resolution["resolutionKind"]))
print("value: {}".format(resolution["value"]))
if "extraInformation" in entity:
print("extra info")
for data in entity["extraInformation"]:
print("kind: {}".format(data["extraInformationKind"]))
if data["extraInformationKind"] == "ListKey":
print("key: {}".format(data["key"]))
if data["extraInformationKind"] == "EntitySubtype":
print("value: {}".format(data["value"]))
If you would like to pass the user utterance to your orchestrator (workflow) app, you can call the client.analyze_conversation() method with your orchestration's project name. The orchestration project simply routes the submitted user utterance between your language apps (LUIS, Conversation, and Question Answering) to get the best response according to the user intent. See the next example:
# import libraries
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient
# get secrets
clu_endpoint = os.environ["AZURE_CONVERSATIONS_ENDPOINT"]
clu_key = os.environ["AZURE_CONVERSATIONS_KEY"]
project_name = os.environ["AZURE_CONVERSATIONS_WORKFLOW_PROJECT_NAME"]
deployment_name = os.environ["AZURE_CONVERSATIONS_WORKFLOW_DEPLOYMENT_NAME"]
# analyze query
client = ConversationAnalysisClient(clu_endpoint, AzureKeyCredential(clu_key))
with client:
query = "Reserve a table for 2 at the Italian restaurant"
result = client.analyze_conversation(
task={
"kind": "Conversation",
"analysisInput": {
"conversationItem": {
"participantId": "1",
"id": "1",
"modality": "text",
"language": "en",
"text": query
},
"isLoggingEnabled": False
},
"parameters": {
"projectName": project_name,
"deploymentName": deployment_name,
"verbose": True
}
}
)
# view result
print("query: {}".format(result["result"]["query"]))
print("project kind: {}\n".format(result["result"]["prediction"]["projectKind"]))
# top intent
top_intent = result["result"]["prediction"]["topIntent"]
print("top intent: {}".format(top_intent))
top_intent_object = result["result"]["prediction"]["intents"][top_intent]
print("confidence score: {}".format(top_intent_object["confidenceScore"]))
print("project kind: {}".format(top_intent_object["targetProjectKind"]))
if top_intent_object["targetProjectKind"] == "Luis":
print("\nluis response:")
luis_response = top_intent_object["result"]["prediction"]
print("top intent: {}".format(luis_response["topIntent"]))
print("\nentities:")
for entity in luis_response["entities"]:
print("\n{}".format(entity))
You can use this sample if you need to summarize a conversation in the form of an issue and final resolution. For example, a dialog from tech support:
# import libraries
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient
# get secrets
endpoint = os.environ["AZURE_CONVERSATIONS_ENDPOINT"]
key = os.environ["AZURE_CONVERSATIONS_KEY"]
# analyze query
client = ConversationAnalysisClient(endpoint, AzureKeyCredential(key))
with client:
poller = client.begin_conversation_analysis(
task={
"displayName": "Analyze conversations from xxx",
"analysisInput": {
"conversations": [
{
"conversationItems": [
{
"text": "Hello, how can I help you?",
"modality": "text",
"id": "1",
"participantId": "Agent"
},
{
"text": "How to upgrade Office? I am getting error messages the whole day.",
"modality": "text",
"id": "2",
"participantId": "Customer"
},
{
"text": "Press the upgrade button please. Then sign in and follow the instructions.",
"modality": "text",
"id": "3",
"participantId": "Agent"
}
],
"modality": "text",
"id": "conversation1",
"language": "en"
},
]
},
"tasks": [
{
"taskName": "Issue task",
"kind": "ConversationalSummarizationTask",
"parameters": {
"summaryAspects": ["issue"]
}
},
{
"taskName": "Resolution task",
"kind": "ConversationalSummarizationTask",
"parameters": {
"summaryAspects": ["resolution"]
}
},
]
}
)
# view result
result = poller.result()
task_results = result["tasks"]["items"]
for task in task_results:
print(f"\n{task['taskName']} status: {task['status']}")
task_result = task["results"]
if task_result["errors"]:
print("... errors occurred ...")
for error in task_result["errors"]:
print(error)
else:
conversation_result = task_result["conversations"][0]
if conversation_result["warnings"]:
print("... view warnings ...")
for warning in conversation_result["warnings"]:
print(warning)
else:
summaries = conversation_result["summaries"]
print("... view task result ...")
for summary in summaries:
print(f"{summary['aspect']}: {summary['text']}")
This sample shows a common scenario for the authoring part of the SDK:
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations.authoring import ConversationAuthoringClient
clu_endpoint = os.environ["AZURE_CONVERSATIONS_ENDPOINT"]
clu_key = os.environ["AZURE_CONVERSATIONS_KEY"]
project_name = "test_project"
exported_project_assets = {
"projectKind": "Conversation",
"intents": [{"category": "Read"}, {"category": "Delete"}],
"entities": [{"category": "Sender"}],
"utterances": [
{
"text": "Open Blake's email",
"dataset": "Train",
"intent": "Read",
"entities": [{"category": "Sender", "offset": 5, "length": 5}],
},
{
"text": "Delete last email",
"language": "en-gb",
"dataset": "Test",
"intent": "Delete",
"entities": [],
},
],
}
client = ConversationAuthoringClient(
clu_endpoint, AzureKeyCredential(clu_key)
)
poller = client.begin_import_project(
project_name=project_name,
project={
"assets": exported_project_assets,
"metadata": {
"projectKind": "Conversation",
"settings": {"confidenceThreshold": 0.7},
"projectName": "EmailApp",
"multilingual": True,
"description": "Trying out CLU",
"language": "en-us",
},
"projectFileVersion": "2022-05-01",
},
)
response = poller.result()
print(response)
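After the import succeeds, the authoring client can also be used to train and deploy the project. A minimal sketch, assuming the begin_train and begin_deploy_project operations and an illustrative deployment name (continuing from the client and project_name above; consult the authoring samples for the exact payload shapes):

# Hypothetical continuation: train the imported project, then deploy the trained model.
train_poller = client.begin_train(
    project_name=project_name,
    configuration={"modelLabel": "sample-model", "trainingMode": "standard"},
)
print(train_poller.result())

deploy_poller = client.begin_deploy_project(
    project_name=project_name,
    deployment_name="production",  # illustrative deployment name, not from the original sample
    deployment={"trainedModelLabel": "sample-model"},
)
print(deploy_poller.result())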
Optional keyword arguments can be passed in at the client and per-operation level. The azure-core reference documentation describes available configurations for retries, logging, transport protocols, and more.
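For example, a retry setting such as retry_total (a standard azure-core option) can be supplied when constructing a client or on an individual call. A minimal sketch:

from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

# Client-level keyword arguments apply to every request made by this client.
client = ConversationAnalysisClient(
    "https://<my-custom-subdomain>.cognitiveservices.azure.com/",
    AzureKeyCredential("<api-key>"),
    retry_total=5,
)

# Per-operation keyword arguments override the client settings for that call only
# (task is the request payload shown in the examples above).
result = client.analyze_conversation(task=task, retry_total=0)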
The Conversations client will raise exceptions defined in Azure Core.
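In practice this means a failed request raises an error type such as HttpResponseError from azure.core.exceptions. A minimal sketch, continuing with a client and task payload as in the examples above:

from azure.core.exceptions import HttpResponseError

try:
    result = client.analyze_conversation(task=task)
except HttpResponseError as error:
    # The exception carries the HTTP status code and the service's error message.
    print(f"Request failed ({error.status_code}): {error.message}")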
This library uses the standard logging library for logging. Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO level.
Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on a client with the logging_enable argument.
See full SDK logging documentation with examples here.
import sys
import logging
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient
# Create a logger for the 'azure' SDK
logger = logging.getLogger('azure')
logger.setLevel(logging.DEBUG)
# Configure a console output
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<my-api-key>")
# This client will log detailed information about its HTTP sessions, at DEBUG level
client = ConversationAnalysisClient(endpoint, credential, logging_enable=True)
result = client.analyze_conversation(...)
Similarly, logging_enable can enable detailed logging for a single operation, even when it isn't enabled for the client:
result = client.analyze_conversation(..., logging_enable=True)
See the Sample README for several code snippets illustrating common patterns used in the CLU Python API.
See the CONTRIBUTING.md for details on building, testing, and contributing to this library.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Note: The following changes are only breaking from the previous beta. They are not breaking since version 1.0.0 when those types and members did not exist.

ConversationAnalysisClient:
begin_conversation_analysis
summaryAspects options for ConversationalSummarizationTasks

ConversationAuthoringClient methods to manage deployment resources:
begin_assign_deployment_resources
get_assign_deployment_resources_status
begin_unassign_deployment_resources
get_unassign_deployment_resources_status
begin_delete_deployment_from_resources
get_deployment_delete_from_resources_status
begin_load_snapshot
get_load_snapshot_status
list_assigned_resource_deployments
list_deployment_resources

Other ConversationAuthoringClient changes:
trained_model_label keyword argument to begin_export_project
ConversationAuthoringClient under the azure.ai.language.conversations.authoring namespace

Changes to the analyze_conversation() method:
The ConversationAnalysisOptions model used as input to the analyze_conversation operation is now wrapped in a CustomConversationalTask, which combines the analysis options with the project parameters into a single model.
The query within the ConversationAnalysisOptions is now further qualified as a TextConversationItem with additional properties.
The AnalyzeConversationResult is now wrapped in a CustomConversationalTaskResult according to the input model.