extension-huggingface

Supports Hugging Face Inference API tasks for VulcanSQL. Provided by Canner.

Installation

  1. Install the package:

    npm i @vulcan-sql/extension-huggingface
    
  2. Update your vulcan.yaml file to enable the extension:

    extensions:
      hf: '@vulcan-sql/extension-huggingface'
    
    hf:
      # Required: Hugging Face access token, see: https://huggingface.co/docs/hub/security-tokens
      accessToken: 'your-huggingface-access-token'
    

Using Hugging Face

VulcanSQL supports Hugging Face tasks through VulcanSQL filters.

⚠️ Caution: The Hugging Face Inference API is rate limited, so you cannot send large datasets to Hugging Face for processing. Also, different Hugging Face models may yield different results or even fail.
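
One practical way to respect the limit is to bound the number of rows you send, for example with a LIMIT in the req block that feeds a filter (the filters themselves are described below). A minimal sketch, assuming the artists table used in the samples below; the LIMIT value is an arbitrary example:

{% req artists %}
  -- Bound the payload sent to the Hugging Face Inference API
  SELECT * FROM artists LIMIT 50
{% endreq %}

SELECT {{ artists.value() | huggingface_table_question_answering(query="List display name where gender are female?") }} as result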

Table Question Answering

Table Question Answering is one of the Natural Language Processing tasks supported by Hugging Face.

Use the huggingface_table_question_answering filter.

The filter returns its result as a JSON string. You can parse the JSON string and work with the fields it contains.

Sample 1 - send data from a variable defined with the set tag:

{% set data = [
  {
    "repository": "vulcan-sql",
    "topic": ["analytics", "data-lake", "data-warehouse", "api-builder"],
    "description":"Create and share Data APIs fast! Data API framework for DuckDB, ClickHouse, Snowflake, BigQuery, PostgreSQL"
  },
  {
    "repository": "accio",
    "topic": ["data-analytics", "data-lake", "data-warehouse", "bussiness-intelligence"],
    "description": "Query Your Data Warehouse Like Exploring One Big View."
  },
  {
    "repository": "hello-world",
    "topic": [],
    "description": "Sample repository for testing"
  }
] %}

-- The source data for "huggingface_table_question_answering" needs to be an array of objects.
SELECT {{ data | huggingface_table_question_answering(query="How many repositories related to data-lake topic?") }} as result

Sample 1 - Response:

[
  {
    "result": "{\"answer\":\"COUNT > vulcan-sql, accio\",\"coordinates\":[[0,0],[1,0]],\"cells\":[\"vulcan-sql\",\"accio\"],\"aggregator\":\"COUNT\"}"
  }
]
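
Because huggingface_table_question_answering returns a JSON string, you can also unpack individual fields directly in SQL. A minimal sketch, assuming a DuckDB data source with its built-in json functions; the qa CTE name is illustrative:

-- Parse the JSON string returned by the filter and extract single fields.
WITH qa AS (
  SELECT {{ data | huggingface_table_question_answering(query="How many repositories related to data-lake topic?") }} AS result
)
SELECT
  json_extract_string(result, '$.answer') AS answer,
  json_extract_string(result, '$.aggregator') AS aggregator
FROM qa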

Sample 2 - send data from the result of a req tag:

{% req artists %}
  SELECT * FROM artists
{% endreq %}

{% set question = "List display name where gender are female?" %}

SELECT {{ artists.value() | huggingface_table_question_answering(query=question, model="microsoft/tapex-base-finetuned-wtq", wait_for_model=true, use_cache=true) }}

Sample 2 - Response:

[
  {
    "result": "{\"answer\":\"Irene Aronson, Ruth Asawa, Isidora Aschheim, Geneviève Asse, Dana Atchley, Aino Aalto, Berenice Abbott\",\"coordinates\":[[8,1],[16,1],[17,1],[23,1],[25,1],[29,1],[35,1]],\"cells\":[\"Irene Aronson\",\"Ruth Asawa\",\"Isidora Aschheim\",\"Geneviève Asse\",\"Dana Atchley\",\"Aino Aalto\",\"Berenice Abbott\"],\"aggregator\":\"NONE\"}"
  }
]

Table Question Answering Arguments

Please check the Hugging Face Table Question Answering documentation for further information.

| Name | Required | Default | Description |
| ---- | -------- | ------- | ----------- |
| query | Y | | The query in plain text that you want to ask the table. |
| endpoint | N | | The inference endpoint URL. When endpoint is set, it replaces the default value of model. |
| model | N | google/tapas-base-finetuned-wtq | The model id of a pretrained model hosted in a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=table-question-answering |
| use_cache | N | true | Use the cache layer on the Inference API to speed up requests that have already been seen. |
| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving a 503. This limits the number of requests required to get the inference done. |
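
The samples above always pick a model, so here is a minimal sketch of the endpoint argument with Table Question Answering, reusing the data variable from Sample 1; the URL is a placeholder for your own Inference Endpoint deployment:

SELECT {{ data | huggingface_table_question_answering(query="How many repositories related to data-lake topic?", endpoint='xxx.yyy.zzz.huggingface.cloud', wait_for_model=true) }} as result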

Text Generation

Text Generation is one of the Natural Language Processing tasks supported by Hugging Face.

Use the huggingface_text_generation filter. The result is returned as a string.

📢 Notice: The Text Generation default model is gpt2. If you would like to use the Meta LLama2 models, you have two options:

  1. Subscribe to a Pro Account.
  • Set the Meta LLama2 model using the model keyword argument in huggingface_text_generation, e.g. meta-llama/Llama-2-13b-chat-hf.
  2. Use an Inference Endpoint.
  • Select one of the Meta LLama2 models and deploy it to an Inference Endpoint.
  • Set the endpoint URL using the endpoint keyword argument in huggingface_text_generation.

Sample 1 - Subscribe to the Pro Account:

{% set data = [
  {
    "rank": 1,
    "institution": "Massachusetts Institute of Technology (MIT)",
    "location code":"US",
    "location":"United States"
  },
  {
    "rank": 2,
    "institution": "University of Cambridge",
    "location code":"UK",
    "location":"United Kingdom"
  },
  {
    "rank": 3,
    "institution": "Stanford University"
    "location code":"US",
    "location":"United States"
  }
  -- other universities.....
] %}

SELECT {{ data | huggingface_text_generation(query="Which university is the top-ranked university?", model="meta-llama/Llama-2-13b-chat-hf") }} as result

Sample 1 - Response:

[
  {
    "result": "Answer: Based on the provided list, the top-ranked university is Massachusetts Institute of Technology (MIT) with a rank of 1."
  }
]

Sample 2 - Using Inference Endpoint:

{% req universities %}
 SELECT rank,institution,"location code", "location" FROM read_csv_auto('2023-QS-World-University-Rankings.csv') 
{% endreq %}

SELECT {{ universities.value() | huggingface_text_generation(query="Which university located in the UK is ranked at the top of the list?", endpoint='xxx.yyy.zzz.huggingface.cloud') }} as result

Sample 2 - Response:

[
  {
    "result": "Answer: Based on the list provided, the top-ranked university in the UK is the University of Cambridge, which is ranked at number 2."
  }
]

Text Generation Arguments

Some default values were changed, so they may differ from the Hugging Face Text Generation defaults.

| Name | Required | Default | Description |
| ---- | -------- | ------- | ----------- |
| query | Y | | The query in plain text that you want to ask. |
| endpoint | N | | The inference endpoint URL. When endpoint is set, it replaces the default value of model. |
| model | N | gpt2 | The model id of a pretrained model hosted in a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=text-generation |
| top_k | N | | Integer that defines the top tokens considered within the sample operation to create new text. |
| top_p | N | | Float that defines the tokens within the sample operation of text generation. Tokens are added to the sample, from most probable to least probable, until the sum of their probabilities is greater than top_p. |
| temperature | N | 0.1 | Range: (0.0-100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always take the highest score, and 100.0 gets closer to uniform probability. |
| repetition_penalty | N | | Range: (0.0-100.0). The more a token is used within the generation, the more it is penalized, so it is not picked in successive generation passes. |
| max_new_tokens | N | 250 | The number of new tokens to generate. This does not include the input length; it is an estimate of the size of the generated text you want. Each new token slows down the request, so look for a balance between response time and length of generated text. |
| max_time | N | | Range: (0-120.0). The maximum amount of time in seconds that the query should take. The network can cause some overhead, so this is a soft limit. Use in combination with max_new_tokens for best results. |
| return_full_text | N | false | If set to false, the result does not contain the original query, making it easier for prompting. |
| num_return_sequences | N | 1 | The number of propositions you want returned. |
| do_sample | N | | Whether or not to use sampling; greedy decoding is used otherwise. |
| use_cache | N | true | Use the cache layer on the Inference API to speed up requests that have already been seen. |
| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving a 503. This limits the number of requests required to get the inference done. |
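
A minimal sketch combining several of the sampling arguments above, reusing the data variable from Sample 1; the parameter values are illustrative, not recommendations:

SELECT {{ data | huggingface_text_generation(query="Summarize the ranking in one sentence.", temperature=0.7, top_p=0.9, max_new_tokens=100, wait_for_model=true) }} as result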
