Microsoft Azure Monitor Query Client Library for Python
The Azure Monitor Query client library is used to execute read-only queries against Azure Monitor's two data platforms:

- Logs - Collects and organizes log and performance data from monitored resources.
- Metrics - Collects numeric data from monitored resources into a time series database.
Install the Azure Monitor Query client library for Python with pip:
pip install azure-monitor-query
An authenticated client is required to query Logs or Metrics. The library includes both synchronous and asynchronous forms of the clients. To authenticate, create an instance of a token credential. Use that instance when creating a LogsQueryClient, MetricsQueryClient, or MetricsClient. The following examples use DefaultAzureCredential from the azure-identity package.
Consider the following example, which creates synchronous clients for both Logs and Metrics querying:
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, MetricsQueryClient, MetricsClient
credential = DefaultAzureCredential()
logs_query_client = LogsQueryClient(credential)
metrics_query_client = MetricsQueryClient(credential)
metrics_client = MetricsClient("https://<regional endpoint>", credential)
The asynchronous forms of the query client APIs are found in the .aio-suffixed namespace. For example:
from azure.identity.aio import DefaultAzureCredential
from azure.monitor.query.aio import LogsQueryClient, MetricsQueryClient, MetricsClient
credential = DefaultAzureCredential()
async_logs_query_client = LogsQueryClient(credential)
async_metrics_query_client = MetricsQueryClient(credential)
async_metrics_client = MetricsClient("https://<regional endpoint>", credential)
To use the asynchronous clients, you must also install an async transport, such as aiohttp.
pip install aiohttp
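As a minimal sketch (the workspace ID and query are placeholders, and a partial result would not expose the tables attribute), an asynchronous logs query could look like this:

import asyncio
from datetime import timedelta

from azure.identity.aio import DefaultAzureCredential
from azure.monitor.query.aio import LogsQueryClient

async def main():
    credential = DefaultAzureCredential()
    # The async client supports use as an async context manager.
    async with LogsQueryClient(credential) as client:
        response = await client.query_workspace(
            "<workspace_id>", "AzureActivity | take 5", timespan=timedelta(days=1)
        )
        for table in response.tables:
            print(table.name)
    await credential.close()

asyncio.run(main())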
By default, all clients are configured to use the Azure public cloud. To use a sovereign cloud, provide the correct endpoint argument when creating a LogsQueryClient or MetricsQueryClient. For MetricsClient, provide the correct audience argument instead. For example:
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, MetricsQueryClient, MetricsClient
# Authority can also be set via the AZURE_AUTHORITY_HOST environment variable.
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
logs_query_client = LogsQueryClient(credential, endpoint="https://api.loganalytics.us/v1")
metrics_query_client = MetricsQueryClient(credential, endpoint="https://management.usgovcloudapi.net")
metrics_client = MetricsClient(
    "https://usgovvirginia.metrics.monitor.azure.us", credential, audience="https://metrics.monitor.azure.us"
)
Note: Currently, MetricsQueryClient uses the Azure Resource Manager (ARM) endpoint for querying metrics. You need the corresponding management endpoint for your cloud when using this client. This detail is subject to change in the future.
For examples of Logs and Metrics queries, see the Examples section.
The Log Analytics service applies throttling when the request rate is too high. Limits, such as the maximum number of rows returned, are also applied on the Kusto queries. For more information, see Query API.
If you're executing a batch logs query, a throttled request returns a LogsQueryError object. That object's code value is ThrottledError.
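As a minimal sketch, throttled entries can be filtered out of a batch response like this (the client and requests objects are assumed to be built as in the batch query example later in this document):

from azure.monitor.query import LogsQueryError

results = client.query_batch(requests)
for result in results:
    # Failed batch entries are returned as LogsQueryError objects.
    if isinstance(result, LogsQueryError) and result.code == "ThrottledError":
        # Back off before retrying the corresponding query.
        print(result.message)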
Each set of metric values is a time series with the following characteristics:

- The time the value was collected
- The resource associated with the value
- A namespace that acts like a category for the metric
- A metric name
- The value itself
- Some metrics may have multiple dimensions; multi-dimensional metrics must be queried by dimension value.
This example shows how to query a Log Analytics workspace. To handle the response and view it in a tabular form, the pandas library is used. See the samples if you choose not to use pandas.
The following example demonstrates how to query logs directly from an Azure resource without the use of a Log Analytics workspace. Here, the query_resource method is used instead of query_workspace, and an Azure resource identifier is passed in instead of a workspace ID. For example, /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/{resource-provider}/{resource-type}/{resource-name}.
import os
import pandas as pd
from datetime import timedelta
from azure.monitor.query import LogsQueryClient, LogsQueryStatus
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = LogsQueryClient(credential)
query = """AzureActivity | take 5"""
try:
    response = client.query_resource(os.environ['LOGS_RESOURCE_ID'], query, timespan=timedelta(days=1))
    if response.status == LogsQueryStatus.SUCCESS:
        data = response.tables
    else:
        # LogsQueryPartialResult
        error = response.partial_error
        data = response.partial_data
        print(error)
    for table in data:
        df = pd.DataFrame(data=table.rows, columns=table.columns)
        print(df)
except HttpResponseError as err:
    print("something fatal happened")
    print(err)
The timespan parameter specifies the time duration for which to query the data. This value can take one of the following forms:

- a timedelta
- a timedelta and a start datetime
- a start datetime/end datetime
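As a minimal sketch, the three forms look like this (the dates are placeholders):

from datetime import datetime, timedelta, timezone

# 1. A duration, ending at the current time
timespan = timedelta(days=1)
# 2. A start datetime paired with a duration
timespan = (datetime(2021, 7, 2, tzinfo=timezone.utc), timedelta(days=2))
# 3. A start datetime paired with an end datetime
timespan = (datetime(2021, 7, 2, tzinfo=timezone.utc), datetime(2021, 7, 4, tzinfo=timezone.utc))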
For example:
import os
import pandas as pd
from datetime import datetime, timezone
from azure.monitor.query import LogsQueryClient, LogsQueryStatus
from azure.identity import DefaultAzureCredential
from azure.core.exceptions import HttpResponseError
credential = DefaultAzureCredential()
client = LogsQueryClient(credential)
query = """AppRequests | take 5"""
start_time = datetime(2021, 7, 2, tzinfo=timezone.utc)
end_time = datetime(2021, 7, 4, tzinfo=timezone.utc)
try:
    response = client.query_workspace(
        workspace_id=os.environ['LOG_WORKSPACE_ID'],
        query=query,
        timespan=(start_time, end_time)
    )
    if response.status == LogsQueryStatus.SUCCESS:
        data = response.tables
    else:
        # LogsQueryPartialResult
        error = response.partial_error
        data = response.partial_data
        print(error)
    for table in data:
        df = pd.DataFrame(data=table.rows, columns=table.columns)
        print(df)
except HttpResponseError as err:
    print("something fatal happened")
    print(err)
The query_workspace API returns either a LogsQueryResult or a LogsQueryPartialResult object. The query_batch API returns a list that can contain LogsQueryResult, LogsQueryPartialResult, and LogsQueryError objects. Here's a hierarchy of the response:
LogsQueryResult
|---statistics
|---visualization
|---tables (list of `LogsTable` objects)
    |---name
    |---rows
    |---columns
    |---columns_types

LogsQueryPartialResult
|---statistics
|---visualization
|---partial_error (a `LogsQueryError` object)
    |---code
    |---message
    |---details
    |---status
|---partial_data (list of `LogsTable` objects)
    |---name
    |---rows
    |---columns
    |---columns_types
As a convenience, LogsQueryResult iterates over its tables directly. For example, to handle a logs query response with tables and display it using pandas:
response = client.query_workspace(...)
for table in response:
    df = pd.DataFrame(table.rows, columns=table.columns)
A full sample can be found here.
In a similar fashion, to handle a batch logs query response:
for result in response:
    if result.status == LogsQueryStatus.SUCCESS:
        for table in result:
            df = pd.DataFrame(table.rows, columns=table.columns)
            print(df)
A full sample can be found here.
The following example demonstrates sending multiple queries at the same time using the batch query API. The queries can be represented either as a list of LogsBatchQuery objects or as a dictionary. This example uses the former approach.
import os
from datetime import timedelta, datetime, timezone
import pandas as pd
from azure.monitor.query import LogsQueryClient, LogsBatchQuery, LogsQueryStatus
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = LogsQueryClient(credential)
requests = [
    LogsBatchQuery(
        query="AzureActivity | summarize count()",
        timespan=timedelta(hours=1),
        workspace_id=os.environ['LOG_WORKSPACE_ID']
    ),
    LogsBatchQuery(
        query="""bad query""",
        timespan=timedelta(days=1),
        workspace_id=os.environ['LOG_WORKSPACE_ID']
    ),
    LogsBatchQuery(
        query="""let Weight = 92233720368547758;
        range x from 1 to 3 step 1
        | summarize percentilesw(x, Weight * 100, 50)""",
        workspace_id=os.environ['LOG_WORKSPACE_ID'],
        timespan=(datetime(2021, 6, 2, tzinfo=timezone.utc), datetime(2021, 6, 5, tzinfo=timezone.utc)),  # (start, end)
        include_statistics=True
    ),
]
results = client.query_batch(requests)
for res in results:
    if res.status == LogsQueryStatus.PARTIAL:
        # this will be a LogsQueryPartialResult
        print(res.partial_error)
        for table in res.partial_data:
            df = pd.DataFrame(table.rows, columns=table.columns)
            print(df)
    elif res.status == LogsQueryStatus.SUCCESS:
        # this will be a LogsQueryResult
        table = res.tables[0]
        df = pd.DataFrame(table.rows, columns=table.columns)
        print(df)
    else:
        # this will be a LogsQueryError
        print(res.message)
The following example shows how to set a server timeout in seconds. A gateway timeout is raised if the query takes longer than the specified timeout. The default is 180 seconds, and the timeout can be raised to a maximum of 10 minutes (600 seconds).
import os
from datetime import timedelta
from azure.monitor.query import LogsQueryClient
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = LogsQueryClient(credential)
response = client.query_workspace(
    os.environ['LOG_WORKSPACE_ID'],
    "range x from 1 to 10000000000 step 1 | count",
    timespan=timedelta(days=1),
    server_timeout=600  # sets the timeout to 10 minutes
)
The same logs query can be executed across multiple Log Analytics workspaces. In addition to the Kusto query, the following parameters are required:

- workspace_id - The first (primary) workspace ID
- additional_workspaces - A list of workspaces, excluding the workspace provided in the workspace_id parameter. The parameter's list items can consist of the following identifier formats:
  - Qualified workspace names
  - Workspace IDs
  - Azure resource IDs
For example, the following query executes in three workspaces:
client.query_workspace(
    <workspace_id>,
    query,
    timespan=timedelta(days=1),
    additional_workspaces=['<workspace 2>', '<workspace 3>']
)
A full sample can be found here.
To get logs query execution statistics, such as CPU and memory consumption:

1. Set the include_statistics parameter to True.
2. Access the statistics field inside the LogsQueryResult object.

The following example prints the query execution time:
query = "AzureActivity | top 10 by TimeGenerated"
result = client.query_workspace(
    <workspace_id>,
    query,
    timespan=timedelta(days=1),
    include_statistics=True
)
execution_time = result.statistics.get("query", {}).get("executionTime")
print(f"Query execution time: {execution_time}")
The statistics field is a dict that corresponds to the raw JSON response, and its structure can vary by query. The statistics are found within the query property. For example:
{
    "query": {
        "executionTime": 0.0156478,
        "resourceUsage": {...},
        "inputDatasetStatistics": {...},
        "datasetStatistics": [{...}]
    }
}
To get visualization data for logs queries using the render operator:

1. Set the include_visualization property to True.
2. Access the visualization field inside the LogsQueryResult object.

For example:
query = (
    "StormEvents"
    "| summarize event_count = count() by State"
    "| where event_count > 10"
    "| project State, event_count"
    "| render columnchart"
)
result = client.query_workspace(
    <workspace_id>,
    query,
    timespan=timedelta(days=1),
    include_visualization=True
)
print(f"Visualization result: {result.visualization}")
The visualization field is a dict that corresponds to the raw JSON response, and its structure can vary by query. For example:
{
    "visualization": "columnchart",
    "title": "the chart title",
    "accumulate": False,
    "isQuerySorted": False,
    "kind": None,
    "legend": None,
    "series": None,
    "yMin": "NaN",
    "yMax": "NaN",
    "xAxis": None,
    "xColumn": None,
    "xTitle": "x axis title",
    "yAxis": None,
    "yColumns": None,
    "ySplit": None,
    "yTitle": None,
    "anomalyColumns": None
}
Interpretation of the visualization data is left to the library consumer. To use this data with the Plotly graphing library, see the synchronous or asynchronous code samples.
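As an illustrative sketch (not taken from those samples), the columnchart result of the query above could be drawn with Plotly Express. The State and event_count column names come from the query's project clause, and plotly is assumed to be installed:

import pandas as pd
import plotly.express as px

# Build a DataFrame from the first table of the logs query result above.
table = result.tables[0]
df = pd.DataFrame(table.rows, columns=table.columns)

# Honor the chart kind suggested by the render operator.
if result.visualization.get("visualization") == "columnchart":
    fig = px.bar(df, x="State", y="event_count")
    fig.show()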
The following example gets metrics for an Event Grid subscription. The resource ID (also known as resource URI) is that of an Event Grid topic.
The resource ID must be that of the resource for which metrics are being queried. It's normally of the format /subscriptions/<id>/resourceGroups/<rg-name>/providers/<source>/topics/<resource-name>.
To find the resource ID/URI:

1. Navigate to your resource's page in the Azure portal.
2. From the Overview blade, select the JSON View link.
3. In the resulting JSON, copy the value of the id property.
NOTE: The metrics are returned in the order of the metric_names sent.
import os
from datetime import timedelta, datetime
from azure.monitor.query import MetricsQueryClient
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = MetricsQueryClient(credential)
start_time = datetime(2021, 5, 25)
duration = timedelta(days=1)
metrics_uri = os.environ['METRICS_RESOURCE_URI']
response = client.query_resource(
    metrics_uri,
    metric_names=["PublishSuccessCount"],
    timespan=(start_time, duration)
)

for metric in response.metrics:
    print(metric.name)
    for time_series_element in metric.timeseries:
        for metric_value in time_series_element.data:
            print(metric_value.time_stamp)
The metrics query API returns a MetricsQueryResult object. The MetricsQueryResult object contains properties such as a list of Metric-typed objects, granularity, namespace, and timespan. The Metric objects list can be accessed using the metrics attribute. Each Metric object in this list contains a list of TimeSeriesElement objects. Each TimeSeriesElement object contains data and metadata_values properties. In visual form, the object hierarchy of the response resembles the following structure:
MetricsQueryResult
|---granularity
|---timespan
|---cost
|---namespace
|---resource_region
|---metrics (list of `Metric` objects)
    |---id
    |---type
    |---name
    |---unit
    |---timeseries (list of `TimeSeriesElement` objects)
        |---metadata_values
        |---data (list of data points represented by `MetricValue` objects)
import os
from azure.monitor.query import MetricsQueryClient, MetricAggregationType
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
client = MetricsQueryClient(credential)
metrics_uri = os.environ['METRICS_RESOURCE_URI']
response = client.query_resource(
    metrics_uri,
    metric_names=["MatchedEventCount"],
    aggregations=[MetricAggregationType.COUNT]
)

for metric in response.metrics:
    print(metric.name)
    for time_series_element in metric.timeseries:
        for metric_value in time_series_element.data:
            if metric_value.count != 0:
                print(
                    "There are {} matched events at {}".format(
                        metric_value.count,
                        metric_value.time_stamp
                    )
                )
To query metrics for multiple Azure resources in a single request, use the query_resources method of MetricsClient. This method:

- Uses a different API than the MetricsQueryClient methods.
- Requires a regional endpoint when instantiating the client. For example, "https://westus3.metrics.monitor.azure.com".

Each Azure resource must reside in:

- The same region as the endpoint specified when instantiating the client.
- The same Azure subscription.

Furthermore:

- The user must be authorized to read monitoring data at the Azure subscription level. For example, the Monitoring Reader role on the subscription to be queried.
- The metric namespace containing the metrics to be queried must be provided. For a list of metric namespaces, see Supported metrics and log categories by resource type.
from datetime import timedelta
import os
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsClient, MetricAggregationType
endpoint = "https://westus3.metrics.monitor.azure.com"
credential = DefaultAzureCredential()
client = MetricsClient(endpoint, credential)
resource_ids = [
    "/subscriptions/<id>/resourceGroups/<rg-name>/providers/<source>/storageAccounts/<resource-name-1>",
    "/subscriptions/<id>/resourceGroups/<rg-name>/providers/<source>/storageAccounts/<resource-name-2>"
]

response = client.query_resources(
    resource_ids=resource_ids,
    metric_namespace="Microsoft.Storage/storageAccounts",
    metric_names=["Ingress"],
    timespan=timedelta(hours=2),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metrics_query_result in response:
    print(metrics_query_result.timespan)
See our troubleshooting guide for details on how to diagnose various failure scenarios.
To learn more about Azure Monitor, see the Azure Monitor service documentation.
Code samples showing common scenarios with the Azure Monitor Query client library are available in the samples directory of the package repository.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Release History

- An audience keyword argument can now be passed to the MetricsClient constructor to specify the audience for the authentication token. This is useful when querying metrics in sovereign clouds. (#35502)
- Added roll_up_by keyword argument to MetricsClient.query_resources to support rolling up metrics by dimension. (#33752)
- The following changes are breaking against the previous beta releases (1.3.0b2/1.3.0b1):
  - MetricsBatchQueryClient has been renamed to MetricsClient. (#33958)
  - Reordered the arguments in the MetricsClient constructor so that endpoint is now the first positional argument. (#33752)
  - Positional arguments in MetricsClient.query_resources are now required keyword-only arguments. (#33958)
  - The resource_uris argument in MetricsClient.query_resources has been renamed to resource_ids. (#34760)
  - These changes do not affect MetricsQueryClient.
- Bumped minimum dependency on azure-core to >=1.28.0.
- Added MetricsBatchQueryClient to support batch querying metrics from Azure resources. (#31049)
- Added the query_resource method to LogsQueryClient to allow users to query Azure resources directly without the context of a workspace. (#29365)
- Fixed an inconsistent keyword argument name in the LogsTable constructor, changing column_types to columns_types. Note that this is a class that is typically only instantiated internally, and not by users. (#29076)
- Fixed a bug where an incorrect key, time_stamp (should be timeStamp), was used in the creation of MetricValue objects (thanks @jamespic). (#28777)
- … LogsQueryError object. (#25137)
- Removed the msrest dependency.
- Bumped minimum dependency on azure-core to >=1.24.0.
- Added isodate>=0.6.0 (isodate was required by msrest).
- Added typing-extensions>=4.0.1.
- Fixed a bug where query_resource in the metrics client was throwing an error with an unexpected metric_namespace argument.
- Added LogsQueryPartialResult and LogsQueryError to handle errors.
- Added status attribute to LogsQueryResult.
- Added LogsQueryStatus enum to describe the status of a result.
- Added a new LogsTableRow type that represents a single row in a table.
- Items in the metrics list in MetricsQueryResult can now be accessed by metric names.
- LogsQueryResult now iterates over the tables directly as a convenience.
- query API in logs is renamed to query_workspace.
- query API in metrics is renamed to query_resource.
- query_workspace API now returns a union of LogsQueryPartialResult and LogsQueryResult.
- query_batch API now returns a union of LogsQueryPartialResult, LogsQueryError, and LogsQueryResult.
- metric_namespace is renamed to namespace and is a keyword-only argument in the list_metric_definitions API.
- MetricsResult is renamed to MetricsQueryResult.
- Added display_description attribute to the Metric type.
- Added a new MetricClass enum to provide the class of a metric.
- Added a new metric_class attribute to the MetricDefinition type.
- Added MetricNamespaceClassification enum to support the namespace_classification attribute on the MetricNamespace type.
- Added MetricUnit enum to describe the unit of the metric.
- Renamed batch_query to query_batch.
- Renamed LogsBatchQueryRequest to LogsBatchQuery.
- include_render is now renamed to include_visualization in the query API.
- LogsQueryResult now returns visualization instead of render.
- start_time, duration, and end_time are now replaced with a single param called timespan.
- resourceregion is renamed to resource_region in the MetricResult type.
- top is renamed to max_results in the metric's query API.
- metric_namespace_name is renamed to fully_qualified_namespace.
- is_dimension_required is renamed to dimension_required.
- interval and time_grain are renamed to granularity.
- orderby is renamed to order_by.
- LogsQueryResult now returns datetime objects for time values.
- LogsBatchQuery doesn't accept a request_id anymore.
- MetricsMetadataValues is removed. A dictionary is used instead.
- time_stamp is renamed to timestamp in the MetricValue type.
- AggregationType is renamed to MetricAggregationType.
- Removed the LogsBatchResultError type.
- LogsQueryResultTable is renamed to LogsTable.
- LogsTableColumn is now removed. Column labels are strings instead.
- start_time in the list_metric_namespaces API is now a datetime.
- The order of params in LogsBatchQuery is changed. Also, headers is no longer accepted.
- timespan is now a required keyword-only argument in logs APIs.
- The batch API now returns a list of LogsQueryResult objects.
- include_statistics and include_visualization args can now work together.
- Added AggregationType, which can be used to specify aggregations in the query API.
- Added LogsBatchQueryResult model that is returned for a logs batch query.
- Added error attribute to LogsQueryResult.
- aggregation param in the query API is renamed to aggregations.
- batch_query API now returns a list of responses.
- LogsBatchResults model is now removed.
- LogsQueryRequest is renamed to LogsBatchQueryRequest.
- LogsQueryResults is now renamed to LogsQueryResult.
- LogsBatchQueryResult now has 4 additional attributes - tables, error, statistics, and render - instead of the body attribute.
- workspaces, workspace_ids, qualified_names, and azure_resource_ids are now merged into a single additional_workspaces list in the query API.
- The LogQueryRequest object now takes in a workspace_id and additional_workspaces instead of workspace.
- aggregation param is now a list instead of a string in the query method.
- duration must now be provided as a timedelta instead of a string.
- Added ~azure.monitor.query.LogsQueryClient to query log analytics along with ~azure.monitor.query.aio.LogsQueryClient.
- Added ~azure.monitor.query.MetricsQueryClient for querying metrics, listing namespaces and metric definitions along with ~azure.monitor.query.aio.MetricsQueryClient.