markdown-frames 1.0.6 (PyPI): Markdown tables parsing to pyspark / pandas DataFrames

Markdown Frames

Helper package for testing Apache Spark and Pandas DataFrames. It makes your data-related unit tests more readable.

History

While working at Exacaster, Vaidas Armonas came up with the idea of making test data more readable, and with the help of his team he implemented the initial version of this package.

Before that, we had to define our testing data as follows:

from datetime import datetime

# `spark` is an active SparkSession
schema = ["user_id", "event_type", "item_id", "event_time", "country", "dt"]
input_df = spark.createDataFrame([
    (123456, 'page_view', None, datetime(2017, 12, 31, 23, 50, 50), "uk", "2017-12-31"),
    (123456, 'item_view', 68471513, datetime(2017, 12, 31, 23, 50, 55), "uk", "2017-12-31")],
    schema)

With this library, you can define the same data like this:

input_data = """
    |  user_id   | event_type  | item_id  |    event_time       | country  |     dt      |
    |   bigint   |   string    |  bigint  |    timestamp        |  string  |   string    |
    | ---------- | ----------- | -------- | ------------------- | -------- | ----------- |
    |   123456   |  page_view  |   None   | 2017-12-31 23:50:50 |   uk     | 2017-12-31  |
    |   123456   |  item_view  | 68471513 | 2017-12-31 23:50:55 |   uk     | 2017-12-31  |
"""
input_df = spark_df(input_data, spark)
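
If you keep the first snippet's DataFrame under a different name (say explicit_df, a name made up for this sketch), you can check that both definitions describe the same rows:

# Sketch only: explicit_df is the DataFrame built with spark.createDataFrame above,
# and input_df is the one parsed from the markdown table.
assert explicit_df.collect() == input_df.collect()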

Installation

To install this package, run this command in your Python environment:

pip install markdown_frames[pyspark]

Usage

When you have this package installed, you can use it in your unit tests as follows (assuming you are using pytest-spark and have a Spark session available):

from pyspark.sql import SparkSession
from markdown_frames.spark_dataframe import spark_df

def test_your_use_case(spark: SparkSession) -> None:
    expected_data = """
        | column1 | column2 | column3 | column4 |
        |   int   |  string |  float  |  bigint |
        | ------- | ------- | ------- | ------- |
        |   1     |   user1 |   3.14  |  111111 |
        |   2     |   None  |   1.618 |  222222 |
        |   3     |   ''    |   2.718 |  333333 |
        """
    expected_df = spark_df(expected_data, spark)

    actual_df = your_use_case(spark)

    assert expected_df.collect() == actual_df.collect()

Supported data types

This package supports all major data types; use these type names in your table definitions:

  • int
  • bigint
  • float
  • double
  • string
  • boolean
  • date
  • timestamp
  • decimal(precision,scale) (scale and precision must be integers)
  • array<int> (int can be replaced by any of mentioned types)
  • map<string,int> (string and int can be replaced by any of mentioned types)

For null values, use the None keyword.
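
As an illustrative sketch (the column names are made up for this example, and the decimal literal format is an assumption based on the numeric examples above), a table mixing a few of these types could look like this:

from markdown_frames.spark_dataframe import spark_df

typed_data = """
    | order_id |     price     |  order_date |  comment |
    |  bigint  | decimal(10,2) |     date    |  string  |
    | -------- | ------------- | ----------- | -------- |
    |   1001   |      9.99     |  2018-01-01 |   first  |
    |   1002   |     19.50     |  2018-01-02 |   None   |
"""
typed_df = spark_df(typed_data, spark)  # `spark` is your SparkSession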

License

This project is MIT licensed.
