
tellaro-query-language
A flexible, human-friendly query language for searching and filtering structured data
Tellaro Query Language (TQL) is a flexible, human-friendly query language for searching and filtering structured data. TQL is designed to provide a unified, readable syntax for expressing complex queries, supporting both simple and advanced search scenarios. It is especially useful for environments where data may come from different backends (such as OpenSearch or JSON files) and where users want to write queries that are portable and easy to understand.
TQL supports simple and compound field comparisons, mutators for transforming values, logical operators with grouping, and multiple data sources (in-memory data, JSON files, and OpenSearch).

TQL queries are generally structured as:

field [| mutator1 | mutator2 ...] operator value

- Fields are dotted paths such as computer.name or os.ver
- Mutators such as | lowercase transform the field value before comparison
- Operators include eq, contains, in, >, regexp, and more

For example:

computer.name | lowercase eq 'ha-jhend'
os.ver > 10
os.dataset in ['windows_server', 'enterprise desktop']
Mutators allow you to transform field values before comparison. For example, | lowercase
will convert the field value to lowercase before evaluating the condition.
user.email | lowercase eq 'admin@example.com'
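Mutator application can be pictured as a simple value-transformation pipeline. The sketch below is illustrative only, using the lowercase and trim mutators mentioned in this README; it is not the TQL implementation:

```python
# Illustrative sketch of mutator semantics: each mutator transforms the
# field value before the comparison runs. Not the actual TQL code.
def apply_mutators(value, mutators):
    transforms = {
        "lowercase": str.lower,
        "trim": str.strip,
    }
    for name in mutators:
        value = transforms[name](value)
    return value

# Roughly what  user.email | trim | lowercase eq 'admin@example.com'  evaluates:
field_value = "  Admin@Example.COM  "
assert apply_mutators(field_value, ["trim", "lowercase"]) == "admin@example.com"
```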
TQL supports a variety of comparison operators, including:

- eq, = and ne, != (equals, not equals)
- >, >=, <, <= (greater/less than)
- contains, in, regexp, startswith, endswith
- is, exists, range, between, cidr
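As one concrete illustration, the cidr operator tests whether an IP address falls inside a network range. A minimal sketch of that check using Python's stdlib ipaddress module (illustrative semantics only, not TQL's implementation; the field name in the comment is hypothetical):

```python
# Sketch of cidr-style matching: does an IP fall within a network?
# Uses the stdlib ipaddress module; this is not TQL's own code.
import ipaddress

def cidr_match(ip: str, network: str) -> bool:
    return ipaddress.ip_address(ip) in ipaddress.ip_network(network)

# e.g. a query like  source.ip cidr '10.0.0.0/8'
print(cidr_match("10.1.2.3", "10.0.0.0/8"))      # True
print(cidr_match("192.168.1.1", "10.0.0.0/8"))   # False
```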
Values can be:

- Quoted strings: 'value' or "value"
- Numbers: 123, 42, 1.01
- Unquoted strings: computer01, admin
- Lists: ["val1", "val2"]
TQL supports logical operators and grouping:
field1 eq 'foo' AND (field2 > 10 OR field3 in ['a', 'b'])
NOT field4 contains 'bar'
Operators supported: AND, OR, NOT, ANY, ALL (case-insensitive).
computer.name | lowercase eq 'ha-jhend' AND (os.ver > 10 OR os.dataset in ['windows_server', 'enterprise desktop'])
TQL provides a consistent, readable way to express queries across different data sources. It abstracts away backend-specific quirks (like OpenSearch's text vs. keyword fields) and lets users focus on what they want to find, not how to write backend-specific queries.
Key benefits:

- One readable syntax across data sources
- Backend-specific quirks handled automatically
- Portable queries that are easy to understand
Suppose you want to find computers named "HA-JHEND" (case-insensitive), running Windows Server or Enterprise Desktop, and with an OS version greater than 10:
computer.name | lowercase eq 'ha-jhend' AND (os.ver > 10 OR os.dataset in ['windows_server', 'enterprise desktop'])
This query will:

- Convert computer.name to lowercase and compare it to 'ha-jhend'
- Check that os.ver is greater than 10
- Check that os.dataset is in the provided list

TQL is implemented using pyparsing to define the grammar and parse queries. The parser supports mutators, operator precedence, and both standard and reversed operator forms (e.g., 'value' in field).
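To make the pyparsing approach concrete, here is a toy grammar for the field [| mutator ...] operator value shape. This is a simplified sketch for illustration, not TQL's actual grammar:

```python
# Toy pyparsing grammar for  field [| mutator ...] operator value.
# Simplified illustration only; the real grammar lives in src/tql/.
import pyparsing as pp

field = pp.Word(pp.alphas, pp.alphanums + "._")("field")
mutator = pp.Suppress("|") + pp.Word(pp.alphas)("mutators*")
operator = pp.oneOf("eq ne contains in startswith endswith")("op")
value = (pp.QuotedString("'") | pp.QuotedString('"'))("value")
condition = field + pp.ZeroOrMore(mutator) + operator + value

parsed = condition.parseString("computer.name | lowercase eq 'ha-jhend'")
print(parsed["field"])           # computer.name
print(list(parsed["mutators"]))  # ['lowercase']
print(parsed["op"], parsed["value"])  # eq ha-jhend
```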
See src/tql/ for the implementation, including the parser grammar and evaluation logic. For comprehensive documentation, see the docs/ folder.
# Install from PyPI
pip install tellaro-query-language
# Or install with OpenSearch support
pip install tellaro-query-language[opensearch]
from tql import TQL
# Initialize TQL
tql = TQL()
# Query data
data = [{'name': 'Alice', 'age': 30}, {'name': 'Bob', 'age': 25}]
results = tql.query(data, 'age > 27')
print(f'Found {len(results)} people over 27: {results}')
# Output: Found 1 people over 27: [{'name': 'Alice', 'age': 30}]
For OpenSearch integration examples and production usage patterns, see the Package Usage Guide.
For contributors and developers who want to work on TQL itself:
# Clone the repository
git clone https://github.com/tellaro/tellaro-query-language.git
cd tellaro-query-language
# Install with poetry (includes all dev dependencies)
poetry install
# Load environment variables for integration tests
cp .env.example .env
# Edit .env with your OpenSearch credentials
# Run tests
poetry run tests
Note: The development setup uses python-dotenv
to load OpenSearch credentials from .env
files for integration testing. This is NOT required when using TQL as a package - see the Package Usage Guide for production configuration patterns.
The repository includes an interactive web playground for testing TQL queries:
# Navigate to the playground directory
cd playground
# Start with Docker (recommended)
docker-compose up
# Or start with OpenSearch included
docker-compose --profile opensearch up
Access the playground at:
The playground uses your local TQL source code, so any changes you make are immediately reflected. See playground/README.md for more details.
from tql import TQL
# Query JSON files directly
tql = TQL()
results = tql.query("data.json", "user.role eq 'admin' AND status eq 'active'")
# Query with field mappings for OpenSearch
mappings = {"hostname": "agent.name.keyword"}
tql_mapped = TQL(mappings)
opensearch_dsl = tql_mapped.to_opensearch("hostname eq 'server01'")
# Extract fields from a complex query
query = "process.name eq 'explorer.exe' AND (user.id eq 'admin' OR user.groups contains 'administrators')"
fields = tql.extract_fields(query)
print(fields) # ['process.name', 'user.groups', 'user.id']
TQL provides context-aware query analysis to help you understand performance implications before execution:
from tql import TQL
tql = TQL()
# Analyze for in-memory execution (default)
query = "field | lowercase | trim eq 'test'"
analysis = tql.analyze_query(query) # or explicitly: analyze_query(query, context="in_memory")
print(f"Health: {analysis['health']['status']}") # 'good' - fast mutators don't impact in-memory
print(f"Score: {analysis['health']['score']}") # 100
print(f"Has mutators: {analysis['stats']['has_mutators']}") # True
# Analyze the same query for OpenSearch execution
analysis = tql.analyze_query(query, context="opensearch")
print(f"Health: {analysis['health']['status']}") # 'fair' - post-processing required
print(f"Score: {analysis['health']['score']}") # 85
# Check mutator-specific health
if 'mutator_health' in analysis:
    print(f"Mutator health: {analysis['mutator_health']['health_status']}")
    for reason in analysis['mutator_health']['health_reasons']:
        print(f"  - {reason['reason']}")
# Slow mutators impact both contexts
slow_query = "hostname | nslookup contains 'example.com'"
analysis = tql.analyze_query(slow_query)
print(f"In-memory health: {analysis['health']['status']}") # 'fair' or 'poor' - network I/O
# Query complexity analysis
complex_query = "(a > 1 OR b < 2) AND (c = 3 OR (d = 4 AND e = 5))"
analysis = tql.analyze_query(complex_query)
print(f"Depth: {analysis['complexity']['depth']}")
print(f"Fields: {analysis['stats']['fields']}")
print(f"Operators: {analysis['stats']['operators']}")
TQL intelligently handles mutators based on field mappings. When OpenSearch can't perform certain operations (like case-insensitive searches on keyword fields), TQL applies post-processing:
# Field mappings with only keyword fields
mappings = {"username": {"type": "keyword"}, "department": {"type": "keyword"}}
tql = TQL(mappings)
# This query requires post-processing since keyword fields can't do case-insensitive contains
query = "username | lowercase contains 'admin' AND department eq 'Engineering'"
# Analyze the query (analyze_opensearch_query is deprecated, use analyze_query instead)
analysis = tql.analyze_query(query, context="opensearch")
print(f"Health: {analysis['health']['status']}") # 'fair' (post-processing required)
# Execute with automatic post-processing
result = tql.execute_opensearch(
    opensearch_client=client,
    index="users",
    query=query,
)
# OpenSearch returns all Engineering users, TQL filters to only those with 'admin' in username
# Run the demo to see this in action
# poetry run python post_processing_demo.py
# Run comprehensive demos
poetry run python demo.py # Basic functionality
poetry run python intelligent_mapping_demo.py # Field mapping features
poetry run python test_requested_functionality.py # Core functionality tests
poetry run python field_extraction_demo.py # Field extraction
poetry run python post_processing_demo.py # Post-processing filtering
# Run tests
poetry run pytest tests/ -v
# Run integration tests with OpenSearch (requires OpenSearch)
# 1. Copy .env.example to .env and configure connection settings
# 2. Set OPENSEARCH_INTEGRATION_TEST=true in .env
poetry run pytest tests/test_opensearch_integration.py -v
TQL supports 25+ mutators including string manipulation, encoding/decoding, DNS operations, and network analysis. See the Mutators documentation for the complete list.
To add new mutators or operators, see the implementation in src/tql/mutators.py
and src/tql/parser.py
.
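A common pattern for this kind of extensibility is a name-to-callable registry, where adding a mutator is a single entry. The sketch below is hypothetical (it mirrors the idea, not the code in src/tql/mutators.py, and the b64decode name is assumed):

```python
# Hypothetical registry pattern for mutators: map names to callables,
# so adding a mutator is one dict entry. Not TQL's actual mechanism.
import base64

MUTATORS = {
    "lowercase": str.lower,
    "trim": str.strip,
}

def register_mutator(name, func):
    MUTATORS[name] = func

# Registering a new (hypothetical) decoding mutator:
register_mutator("b64decode", lambda s: base64.b64decode(s).decode())

print(MUTATORS["b64decode"]("aGVsbG8="))  # hello
```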
TQL supports powerful data analysis with stats expressions:
# Simple aggregation
| stats sum(revenue)
# Grouped analysis
| stats count(requests), average(response_time) by server_name
# Top N analysis
| stats sum(sales, top 10) by product_category
# Complex analytics
status eq 'success'
| stats count(requests), sum(bytes), average(response_time), max(cpu_usage) by endpoint
Stats functions include: sum, min, max, count, unique_count, average, median, percentile_rank, zscore, std.
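For intuition, a grouped expression like | stats count(requests), average(response_time) by server_name computes per-group aggregates. A plain-Python sketch of those semantics over illustrative data (not TQL's implementation):

```python
# What a grouped stats expression computes, in plain Python:
# group records by a key field, then aggregate within each group.
from collections import defaultdict
from statistics import mean

records = [
    {"server_name": "web-1", "response_time": 120},
    {"server_name": "web-1", "response_time": 80},
    {"server_name": "web-2", "response_time": 200},
]

groups = defaultdict(list)
for rec in records:
    groups[rec["server_name"]].append(rec["response_time"])

stats = {
    server: {"count": len(times), "average": mean(times)}
    for server, times in groups.items()
}
print(stats)
```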
Comprehensive documentation is available in the docs directory.
# Clone the repository
git clone https://github.com/tellaro/tellaro-query-language.git
cd tellaro-query-language
# Install with poetry
poetry install
# Or install with pip
pip install -e .
This project supports Python 3.11, 3.12, 3.13, and 3.14. We use nox
for automated testing across all versions.
# Install test dependencies
poetry install --with dev
# Run tests on all Python versions
poetry run nox -s tests
# Run tests on a specific version
poetry run nox -s tests-3.12
# Quick test run (fail fast, no coverage)
poetry run nox -s test_quick
# Run linting and formatting
poetry run nox -s lint
poetry run nox -s format
# Run all checks
poetry run nox -s all
For more detailed testing instructions, see TESTING.md.