Elasticsearch-to-BigQuery-Connector 0.1 (PyPI)

A Python library for connecting to Elasticsearch and loading data into BigQuery.

This Python library provides utilities to extract data from Elasticsearch and load it directly into Google BigQuery. It simplifies the process of data migration between Elasticsearch and BigQuery by handling connection setup, data extraction, and data loading with optional timestamping.

Features

  • Connect to an Elasticsearch instance and fetch data.
  • Load data directly into a specified BigQuery table.
  • Optional timestamping for record insertion.

Installation

Install the package via pip:

pip install Elasticsearch_to_BigQuery_Connector

Dependencies

  • elasticsearch: To connect and interact with Elasticsearch.
  • google-cloud-bigquery: To handle operations related to BigQuery.

Make sure both are installed:

pip install elasticsearch google-cloud-bigquery
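
For context, the same extract-and-load pattern can be sketched directly with these two clients. The snippet below is an illustration, not the connector's internal code; the host, index, and table names are placeholders, and the Elasticsearch calls assume the 8.x client API:

from elasticsearch import Elasticsearch, helpers
from google.cloud import bigquery

# Stream documents out of an Elasticsearch index.
es = Elasticsearch("http://localhost:9200", basic_auth=("user", "pass"))
docs = [hit["_source"] for hit in helpers.scan(es, index="your_index", size=10000)]

# Load the documents into an existing BigQuery table with a matching schema.
bq = bigquery.Client(project="your_project_id")
errors = bq.insert_rows_json("your_project_id.your_dataset_id.your_table_name", docs)
if errors:
    raise RuntimeError(f"BigQuery streaming insert reported errors: {errors}")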

Example Usage:


from Elasticsearch_to_BigQuery_Connector import Elasticsearch_to_BigQuery_Connector

Elasticsearch_to_BigQuery_Connector(
    # Elasticsearch connection and query settings
    es_index_name='your_index',
    es_host='localhost',
    es_port=9200,
    es_scheme='http',
    es_http_auth=('user', 'pass'),
    es_size=10000,
    # BigQuery destination settings
    bq_project_id='your_project_id',
    bq_dataset_id='your_dataset_id',
    bq_table_name='your_table_name',
    bq_add_record_addition_time=True
)

Parameters:

  • es_index_name (str): The name of the Elasticsearch index to query.
  • es_host (str): The hostname of the Elasticsearch server.
  • es_port (int): The port number on which the Elasticsearch server is listening.
  • es_scheme (str): The protocol scheme ('http' or 'https').
  • es_http_auth (tuple): A tuple containing the username and password for basic authentication.
  • es_size (int, optional): The number of records to fetch in one query (default is 10000).
  • bq_project_id (str): The Google Cloud project ID.
  • bq_dataset_id (str): The dataset ID within the Google Cloud project.
  • bq_table_name (str): The table name where the data will be loaded.
  • bq_add_record_addition_time (bool): If True, adds the current datetime to each record as a landloaddate field (illustrated after this list).
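
As a rough illustration of the timestamping option (the exact field format is an assumption, not taken from the package's documentation), enabling bq_add_record_addition_time amounts to stamping each record with a load time before it is written:

from datetime import datetime

# Hypothetical illustration only: attach a load timestamp to a record,
# mirroring the landloaddate field described above.
record = {"user_id": 42, "event": "login"}
record["landloaddate"] = datetime.utcnow().isoformat()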

Additional Notes:

Ensure you have configured credentials for both Elasticsearch and Google Cloud (BigQuery):

  • For Elasticsearch, provide the host, port, scheme, and authentication details.

  • For BigQuery, ensure your environment is set up with the appropriate credentials (e.g., via the Google Cloud SDK or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your service account key file), as sketched below.
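
A minimal sketch, assuming you authenticate with a service account key file (the path below is a placeholder; running gcloud auth application-default login is an equivalent alternative):

import os

# Point the Google Cloud client libraries at a service account key file.
# This must be set before the BigQuery client is created.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"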
