Helper package for testing Apache Spark and pandas DataFrames. It makes your data-related unit tests more readable.
While working at Exacaster, Vaidas Armonas came up with the idea of making test data more readable, and with the help of his team he implemented the initial version of this package.
Before that, we had to define our testing data as follows:
from datetime import datetime

schema = ["user_id", "event_type", "item_id", "event_time", "country", "dt"]
input_df = spark.createDataFrame([
    (123456, 'page_view', None, datetime(2017, 12, 31, 23, 50, 50), "uk", "2017-12-31"),
    (123456, 'item_view', 68471513, datetime(2017, 12, 31, 23, 50, 55), "uk", "2017-12-31")],
    schema)
And with this library you can define the same data like this:
input_data = """
| user_id | event_type | item_id  | event_time          | country | dt         |
| bigint  | string     | bigint   | timestamp           | string  | string     |
| ------- | ---------- | -------- | ------------------- | ------- | ---------- |
| 123456  | page_view  | None     | 2017-12-31 23:50:50 | uk      | 2017-12-31 |
| 123456  | item_view  | 68471513 | 2017-12-31 23:50:55 | uk      | 2017-12-31 |
"""
input_df = spark_df(input_data, spark)
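Under the hood, a table string like this is split into a header row, a types row, and data rows before a DataFrame is built. A minimal pure-Python sketch of that parsing (the `parse_table` helper here is hypothetical, not the library's API):

```python
# Sketch of parsing a markdown-style table into (columns, types, rows).
# This only illustrates the idea behind spark_df; parse_table is a made-up name.
def parse_table(table: str):
    lines = [line.strip() for line in table.strip().splitlines()]
    cells = [[c.strip() for c in line.strip("|").split("|")] for line in lines]
    columns, types = cells[0], cells[1]
    data = []
    for raw in cells[3:]:  # skip the header, types, and separator rows
        # the None keyword in a cell becomes a real null value
        data.append([None if cell == "None" else cell for cell in raw])
    return columns, types, data

table = """
| user_id | event_type | item_id  |
| bigint  | string     | bigint   |
| ------- | ---------- | -------- |
| 123456  | page_view  | None     |
"""
cols, types, data = parse_table(table)
```

The library additionally casts each cell to the declared column type; this sketch leaves values as strings to stay self-contained.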
To install this package, run this command in your Python environment:
pip install markdown_frames[pyspark]
When you have this package installed, you can use it in your unit tests as follows (assuming you are using pytest-spark
and have a Spark session available):
from pyspark.sql import SparkSession
from markdown_frames.spark_dataframe import spark_df

def test_your_use_case(spark: SparkSession) -> None:
    expected_data = """
        | column1 | column2 | column3 | column4 |
        |   int   | string  |  float  | bigint  |
        | ------- | ------- | ------- | ------- |
        |   1     | user1   |  3.14   | 111111  |
        |   2     | None    |  1.618  | 222222  |
        |   3     | ''      |  2.718  | 333333  |
    """
    expected_df = spark_df(expected_data, spark)
    actual_df = your_use_case(spark)
    assert expected_df.collect() == actual_df.collect()
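Note that collect() returns rows in DataFrame order, so the assertion above is order-sensitive. If your use case does not guarantee row ordering, sort both sides before comparing. A sketch with plain tuples standing in for collected Rows (the rows_match helper is illustrative, not part of this package):

```python
# Comparing collected rows with == is order-sensitive; sorting both sides
# first makes the check order-insensitive. Plain tuples stand in for
# pyspark Row objects here so the example is self-contained.
expected_rows = [(1, "user1"), (2, None), (3, "")]
actual_rows = [(3, ""), (1, "user1"), (2, None)]

def rows_match(expected, actual):
    """Compare two row lists ignoring row order."""
    key = lambda r: tuple(str(c) for c in r)  # str() so None cells sort safely
    return sorted(expected, key=key) == sorted(actual, key=key)
```

With the data above, `expected_rows == actual_rows` is False even though both hold the same rows, while `rows_match` accepts them.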
This package supports all major datatypes; use these type names in your table definitions:
int
bigint
float
double
string
boolean
date
timestamp
decimal(precision,scale) (precision and scale must be integers)
array<int> (int can be replaced by any of the mentioned types)
map<string,int> (string and int can be replaced by any of the mentioned types)
For null values, use the None keyword.
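For instance, a table mixing the complex types above might look like this (the column names are made up for the example; None marks a null cell):

```python
# Illustrative table definition using decimal, array, and map column types.
# Column names (price, tags, counts) are hypothetical, not from the library docs.
complex_data = """
| price         | tags       | counts          |
| decimal(10,2) | array<int> | map<string,int> |
| ------------- | ---------- | --------------- |
| 19.99         | None       | None            |
"""
# complex_df = spark_df(complex_data, spark)  # parsed the same way as simple types
```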
This project is MIT licensed.