graflo


A framework for transforming tabular (CSV, SQL) and hierarchical data (JSON, XML) into property graphs and ingesting them into graph databases (ArangoDB, Neo4j, TigerGraph).

⚠️ Package Renamed: This package was formerly known as graphcast.


Core Concepts

Property Graphs

graflo works with property graphs, which consist of:

  • Vertices: Nodes with properties and optional unique identifiers
  • Edges: Relationships between vertices with their own properties
  • Properties: Both vertices and edges may have properties
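
The structure above can be sketched in plain Python (illustrative only, not graflo's internal representation):

```python
# Illustrative only: a minimal property-graph shape in plain Python.
# Vertex ids and property names here are made up for the example.
vertices = {
    "person/alice": {"name": "Alice", "age": 34},  # vertex with properties
    "city/berlin": {"name": "Berlin"},
}
edges = [
    # an edge connects two vertices and may carry its own properties
    {"from": "person/alice", "to": "city/berlin", "type": "LIVES_IN", "since": 2019},
]

def neighbors(vertex_id):
    """Return ids reachable from vertex_id along outgoing edges."""
    return [e["to"] for e in edges if e["from"] == vertex_id]

print(neighbors("person/alice"))  # → ['city/berlin']
```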

Schema

The Schema defines how your data should be transformed into a graph and contains:

  • Vertex Definitions: Specify vertex types, their properties, and unique identifiers
    • Fields can be specified as strings (backward compatible) or typed Field objects with types (INT, FLOAT, STRING, DATETIME, BOOL)
    • Type information enables better validation and database-specific optimizations
  • Edge Definitions: Define relationships between vertices and their properties
    • Weight fields support typed definitions for better type safety
  • Resource Mapping: Describe how data sources map to vertices and edges
  • Transforms: Modify data during the casting process
  • Automatic Schema Inference: Generate schemas automatically from PostgreSQL 3NF databases
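
To make the vertex and edge definitions above concrete, here is a hypothetical schema sketched as a plain dict; the exact keys accepted by graflo's `Schema.from_dict` may differ, so treat the shape (not the key names) as the point:

```python
# Hypothetical sketch of a schema definition; key names are assumptions,
# not graflo's verified format.
schema_dict = {
    "vertices": [
        {
            "name": "person",
            "fields": [
                {"name": "id", "type": "INT"},        # typed Field form
                {"name": "email", "type": "STRING"},
                "nickname",                           # plain-string form (untyped)
            ],
            "index": ["id"],                          # unique identifier
        }
    ],
    "edges": [
        {
            "source": "person",
            "target": "city",
            "weights": [{"name": "since", "type": "DATETIME"}],  # typed weight field
        }
    ],
}

# Mixing typed and untyped fields is the backward-compatible behavior
# described above: untyped fields simply default to no type.
typed = [f for v in schema_dict["vertices"] for f in v["fields"] if isinstance(f, dict)]
print(len(typed))  # 2 typed fields, 1 untyped
```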

Resources

Resources are your data sources, which can be:

  • Table-like: CSV files, database tables
  • JSON-like: JSON files, nested data structures
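
The two resource shapes can be sketched in plain Python (illustrative data, not graflo's API):

```python
# A table-like resource is a sequence of flat rows; a JSON-like resource
# may nest arbitrarily. All names below are made up for the example.
table_like = [
    {"id": 1, "name": "Alice"},  # e.g. a CSV row or a database record
    {"id": 2, "name": "Bob"},
]
json_like = {
    "work": {
        "title": "On Graphs",
        "authors": [{"name": "Alice"}, {"name": "Bob"}],
    },
}

# Flat rows map directly to vertices; nested structures are walked so that
# inner objects (here, each author) can become vertices of their own.
author_names = [a["name"] for a in json_like["work"]["authors"]]
print(author_names)  # → ['Alice', 'Bob']
```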

Features

  • Graph Transformation Meta-language: A powerful declarative language to describe how your data becomes a property graph:
    • Define vertex and edge structures with typed fields
    • Set compound indexes for vertices and edges
    • Use blank vertices for complex relationships
    • Specify edge constraints and properties with typed weight fields
    • Apply advanced filtering and transformations
  • Typed Schema Definitions: Enhanced type support throughout the schema system
    • Vertex fields support types (INT, FLOAT, STRING, DATETIME, BOOL) for better validation
    • Edge weight fields can specify types for improved type safety
    • Backward compatible: fields without types default to None (suitable for databases like ArangoDB)
  • PostgreSQL Schema Inference: Automatically generate schemas from PostgreSQL 3NF databases
    • Introspect PostgreSQL schemas to identify vertex-like and edge-like tables
    • Automatically map PostgreSQL data types to graflo Field types
    • Infer vertex configurations from table structures
    • Infer edge configurations from foreign key relationships
    • Create Resource mappings from PostgreSQL tables
  • Parallel processing: Use as many cores as you have
  • Database support: Ingest into ArangoDB, Neo4j, and TigerGraph using the same API (database agnostic). Source data from PostgreSQL and other SQL databases.
  • Server-side filtering: Efficient querying with server-side filtering support (TigerGraph REST++ API)
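
The PostgreSQL-to-Field type mapping mentioned above might look like the following simplified sketch; the real mapping lives in `graflo.db.postgres` and may differ:

```python
# Hypothetical, simplified mapping of PostgreSQL column types to graflo
# field types; entries here are assumptions for illustration.
PG_TO_FIELD = {
    "integer": "INT",
    "bigint": "INT",
    "double precision": "FLOAT",
    "numeric": "FLOAT",
    "text": "STRING",
    "varchar": "STRING",
    "timestamp": "DATETIME",
    "boolean": "BOOL",
}

def infer_field_type(pg_type):
    """Fall back to None (untyped) for unmapped types, which suits
    databases like ArangoDB that accept untyped fields."""
    return PG_TO_FIELD.get(pg_type.lower())

print(infer_field_type("TEXT"))   # STRING
print(infer_field_type("jsonb"))  # None
```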

Documentation

Full documentation is available at: growgraph.github.io/graflo

Installation

pip install graflo

Usage Examples

Simple ingest

from suthing import FileHandle

from graflo import Schema, Caster, Patterns
from graflo.db.connection.onto import ArangoConfig

schema = Schema.from_dict(FileHandle.load("schema.yaml"))

# Option 1: Load config from docker/arango/.env (recommended)
conn_conf = ArangoConfig.from_docker_env()

# Option 2: Load from environment variables
# Set: ARANGO_URI, ARANGO_USERNAME, ARANGO_PASSWORD, ARANGO_DATABASE
conn_conf = ArangoConfig.from_env()

# Option 3: Load with custom prefix (for multiple configs)
# Set: USER_ARANGO_URI, USER_ARANGO_USERNAME, USER_ARANGO_PASSWORD, USER_ARANGO_DATABASE
user_conn_conf = ArangoConfig.from_env(prefix="USER")

# Option 4: Create config directly
# conn_conf = ArangoConfig(
#     uri="http://localhost:8535",
#     username="root",
#     password="123",
#     database="mygraph",  # For ArangoDB, 'database' maps to schema/graph
# )
# Note: If 'database' (or 'schema_name' for TigerGraph) is not set,
# Caster will automatically use Schema.general.name as fallback

from graflo.util.onto import FilePattern
import pathlib

# Create Patterns with file patterns
patterns = Patterns()
patterns.add_file_pattern(
    "work",
    FilePattern(regex=r"\Sjson$", sub_path=pathlib.Path("./data"), resource_name="work")
)

# Or use resource_mapping for simpler initialization
# patterns = Patterns(
#     _resource_mapping={
#         "work": "./data/work.json",
#     }
# )

schema.fetch_resource()

caster = Caster(schema)

caster.ingest(
    output_config=conn_conf,  # Target database config
    patterns=patterns,  # Source data patterns
)

PostgreSQL Schema Inference

from graflo.db.postgres import PostgresConnection
from graflo.db.postgres.heuristics import infer_schema_from_postgres
from graflo.db.connection.onto import PostgresConfig
from graflo import Caster
from graflo.onto import DBFlavor

# Connect to PostgreSQL
postgres_config = PostgresConfig.from_docker_env()  # or PostgresConfig.from_env()
postgres_conn = PostgresConnection(postgres_config)

# Infer schema from PostgreSQL 3NF database
schema = infer_schema_from_postgres(
    postgres_conn,
    schema_name="public",  # PostgreSQL schema name
    db_flavor=DBFlavor.ARANGO  # Target graph database flavor
)

# Close PostgreSQL connection
postgres_conn.close()

# Use the inferred schema with Caster
caster = Caster(schema)
# ... continue with ingestion

Development

To install requirements:

git clone git@github.com:growgraph/graflo.git && cd graflo
uv sync --dev

Tests

Test databases

Spin up ArangoDB from the docker/arango folder:

docker-compose --env-file .env up arango

Neo4j from the docker/neo4j folder:

docker-compose --env-file .env up neo4j

and TigerGraph from the docker/tigergraph folder:

docker-compose --env-file .env up tigergraph

To run unit tests:

pytest test

Requirements

  • Python 3.10+
  • python-arango
  • sqlalchemy>=2.0.0 (for PostgreSQL and SQL data sources)

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
