WordLift Reader

pip install llama-index-readers-wordlift

The WordLift GraphQL Reader is a connector that fetches and transforms data from a WordLift Knowledge Graph using your WordLift Key. The connector provides a convenient way to load data from WordLift with a GraphQL query and transform it into a list of documents for further processing.

Usage

To use the WordLift GraphQL Reader, follow the steps below:

  1. Set up the necessary configuration: the API endpoint, headers, GraphQL query, fields, and configuration options (make sure you have your WordLift Key at hand).
  2. Create an instance of the WordLiftLoader class, passing in the configuration options.
  3. Use the load_data method to fetch and transform the data.
  4. Process the loaded documents as needed.

Here's an example of how to use the WordLift GraphQL Reader:

import json
import logging

from llama_index.core import Document, VectorStoreIndex
from llama_index.readers.wordlift import WordLiftLoader

# Set up the necessary configuration options
endpoint = "https://api.wordlift.io/graphql"
headers = {
    "Authorization": "<YOUR_WORDLIFT_KEY>",
    "Content-Type": "application/json",
}

query = """
# Your GraphQL query here
"""
fields = "<YOUR_FIELDS>"
config_options = {
    "text_fields": ["<YOUR_TEXT_FIELDS>"],
    "metadata_fields": ["<YOUR_METADATA_FIELDS>"],
}
# Create an instance of the WordLiftLoader
reader = WordLiftLoader(endpoint, headers, query, fields, config_options)

# Load the data
documents = reader.load_data()

# Re-wrap the returned documents, serializing each doc_id to a JSON string
converted_doc = []
for doc in documents:
    converted_doc_id = json.dumps(doc.doc_id)
    converted_doc.append(
        Document(
            text=doc.text,
            doc_id=converted_doc_id,
            embedding=doc.embedding,
            doc_hash=doc.doc_hash,
            extra_info=doc.extra_info,
        )
    )

# Create the index and query engine
index = VectorStoreIndex.from_documents(converted_doc)
query_engine = index.as_query_engine()

# Perform a query
result = query_engine.query("<YOUR_QUERY>")

# Process the result as needed
logging.info("Result: %s", result)
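
The query, fields, and configuration values depend entirely on the schema of your Knowledge Graph. As a purely illustrative sketch (the entity type, field names, and page size below are assumptions, not part of the reader's API), a configuration that loads articles might look like this:

# Hypothetical values for illustration only; adapt the query shape
# and field names to the schema of your own Knowledge Graph.
query = """
query {
  articles(page: 0, rows: 25) {
    id: iri
    title: string(name: "schema:headline")
    content: string(name: "schema:articleBody")
    url: string(name: "schema:url")
  }
}
"""
fields = "articles"
config_options = {
    # concatenated into each document's text
    "text_fields": ["title", "content"],
    # attached to each document as metadata
    "metadata_fields": ["url"],
}

With values like these, the reader would issue the query against the endpoint, read the records under the articles field, build each document's text from title and content, and keep url as metadata.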
