# skypoint-python-cdm-connector
Python Spark CDM Connector by SkyPoint.
Apache Spark connector for the Microsoft Azure Common Data Model (CDM). Reading and writing are supported, though the connector is still a work in progress; please file issues for any bugs you find.
For more information about the Common Data Model, see the [Common Data Model documentation](https://docs.microsoft.com/en-us/common-data-model/).
We support Azure Data Lake Storage (ADLS) and AWS S3 as storage backends, historical data preservation using snapshots of the schema and data files, and usage within PySpark, Azure Functions, etc.
*Upcoming: support for incremental data refresh handling, [CDM 1.1](https://docs.microsoft.com/en-us/common-data-model/cdm-manifest), and Google Cloud (Cloud Storage).*
## Example
- See the sample usage file `skypoint_python_cdm.py`.
- Dynamically add or remove entities, annotations, and attributes (a sketch of this follows the example below).
- Pass a Reader or Writer object for whichever storage account you want to read data from or write data to.
- The code below shows a basic write example; a read sketch follows it.
```python
from datetime import datetime
import pandas as pd

# Model and ADLSWriter come from this connector; adjust the import path
# to match your installation (see skypoint_python_cdm.py).
from Model import Model
from ADLSWriter import ADLSWriter

m = Model()

# Sample DataFrame to persist as a CDM entity.
df = pd.DataFrame({
    "country": ["Brazil", "Russia", "India", "China", "South Africa", "ParaSF"],
    "currentTime": [datetime.now()] * 6,
    "area": [8.516, 17.10, 3.286, 9.597, 1.221, 2.222],
    "capital": ["Brasilia", "Moscow", "New Delhi", "Beijing", "Pretoria", "ParaSF"],
    "population": [200.4, 143.5, 1252, 1357, 52.98, 12.34],
})

# Generate an entity from the DataFrame and register it on the model.
entity = Model.generate_entity(df, "customEntity")
m.add_entity(entity)
Model.add_annotation("modelJsonAnnotation", "modelJsonAnnotationValue", m)

# Write the entity's schema and data to ADLS.
writer = ADLSWriter("ACCOUNT_NAME", "ACCOUNT_KEY",
                    "CONTAINER_NAME", "STORAGE_NAME", "DATAFLOW_NAME")
m.write_to_storage("customEntity", df, writer)
```
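Reading an entity back should mirror the write call above. The following is a minimal sketch only: `ADLSReader` and `read_from_storage` are assumed counterparts to `ADLSWriter` and `write_to_storage` and may not match the library's exact names; consult `skypoint_python_cdm.py` for the real API.

```python
# Continuation of the example above. ADLSReader and read_from_storage are
# assumed counterparts to the writer API; check skypoint_python_cdm.py
# for the exact class and method names.
reader = ADLSReader("ACCOUNT_NAME", "ACCOUNT_KEY",
                    "CONTAINER_NAME", "STORAGE_NAME", "DATAFLOW_NAME")
df_read = m.read_from_storage("customEntity", reader)  # returns a DataFrame
print(df_read.head())
```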
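The add/remove bullet above refers to mutating the model at runtime. The sketch below reuses `generate_entity`, `add_entity`, and `add_annotation` from the example; the `remove_entity` and `remove_annotation` calls are hypothetical counterparts and may differ from the connector's actual API.

```python
# Hypothetical sketch of dynamic schema manipulation, continuing the example.
extra = Model.generate_entity(df, "temporaryEntity")
m.add_entity(extra)                             # register a second entity
Model.add_annotation("scratchNote", "temp", m)  # add a model annotation

# remove_entity / remove_annotation are assumed removal counterparts.
m.remove_entity("temporaryEntity")
m.remove_annotation("scratchNote", m)
```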
## Contributing
This project welcomes contributions and suggestions.
## References
- Model.json version 1 schema
- A clean implementation of Python objects from/to a model.json file