Avrotize is a "Rosetta Stone" for data structure definitions, allowing you to convert between numerous data and database schema formats and to generate code for different programming languages.
It is, for instance, a well-documented and predictable converter and code generator for data structures originally defined in JSON Schema (of arbitrary complexity).
The tool leans on the Apache Avro-derived Avrotize Schema as its schema model.
You can install Avrotize from PyPI; it requires Python 3.10 or later:
pip install avrotize
Avrotize provides several commands for converting schema formats via Avrotize Schema.
Converting to Avrotize Schema:
- avrotize p2a - Convert Protobuf (2 or 3) schema to Avrotize Schema.
- avrotize j2a - Convert JSON schema to Avrotize Schema.
- avrotize x2a - Convert XML schema to Avrotize Schema.
- avrotize asn2a - Convert ASN.1 to Avrotize Schema.
- avrotize k2a - Convert Kusto table definitions to Avrotize Schema.
- avrotize pq2a - Convert Parquet schema to Avrotize Schema.
- avrotize csv2a - Convert CSV file to Avrotize Schema.
- avrotize kstruct2a - Convert Kafka Connect Schema to Avrotize Schema.

Converting from Avrotize Schema:
- avrotize a2p - Convert Avrotize Schema to Protobuf 3 schema.
- avrotize a2j - Convert Avrotize Schema to JSON schema.
- avrotize a2x - Convert Avrotize Schema to XML schema.
- avrotize a2k - Convert Avrotize Schema to Kusto table definition.
- avrotize a2sql - Convert Avrotize Schema to SQL table definition.
- avrotize a2pq - Convert Avrotize Schema to Parquet or Iceberg schema.
- avrotize a2ib - Convert Avrotize Schema to Iceberg schema.
- avrotize a2mongo - Convert Avrotize Schema to MongoDB schema.
- avrotize a2cassandra - Convert Avrotize Schema to Cassandra schema.
- avrotize a2es - Convert Avrotize Schema to Elasticsearch schema.
- avrotize a2dynamodb - Convert Avrotize Schema to DynamoDB schema.
- avrotize a2cosmos - Convert Avrotize Schema to CosmosDB schema.
- avrotize a2couchdb - Convert Avrotize Schema to CouchDB schema.
- avrotize a2firebase - Convert Avrotize Schema to Firebase schema.
- avrotize a2hbase - Convert Avrotize Schema to HBase schema.
- avrotize a2neo4j - Convert Avrotize Schema to Neo4j schema.
- avrotize a2dp - Convert Avrotize Schema to Datapackage schema.
- avrotize a2md - Convert Avrotize Schema to Markdown documentation.

Generate code from Avrotize Schema:
- avrotize a2cs - Generate C# code from Avrotize Schema.
- avrotize a2java - Generate Java code from Avrotize Schema.
- avrotize a2py - Generate Python code from Avrotize Schema.
- avrotize a2ts - Generate TypeScript code from Avrotize Schema.
- avrotize a2js - Generate JavaScript code from Avrotize Schema.
- avrotize a2cpp - Generate C++ code from Avrotize Schema.
- avrotize a2go - Generate Go code from Avrotize Schema.
- avrotize a2rust - Generate Rust code from Avrotize Schema.

Other commands:
- avrotize pcf - Create the Parsing Canonical Form (PCF) of an Avrotize Schema.

You can use Avrotize to convert between Avro/Avrotize Schema and other schema formats like JSON Schema, XML Schema (XSD), Protocol Buffers (Protobuf), ASN.1, and database schema formats like Kusto Data Table Definition (KQL) and SQL Table Definition. That means you can also convert from JSON Schema to Protobuf, going via Avrotize Schema.
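For example, a JSON Schema can be converted to a Protobuf 3 schema in two steps; the file and directory names here are placeholders:

avrotize j2a orders.jsonschema.json --out orders.avsc --namespace com.example.orders
avrotize a2p orders.avsc --out ./proto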
You can also generate C#, Java, Python, TypeScript, JavaScript, C++, Go, and Rust code from Avrotize Schema documents. The difference from the native Avro tools is that Avrotize can emit data classes without Avro library dependencies and, optionally, with annotations for JSON serialization libraries like Jackson or System.Text.Json.
The tool does not convert data (instances of schemas), only the data structure definitions.
Note that the primary objective of the tool is converting schemas that describe data structures used in applications, databases, and messaging systems. While the project's internal tests cover a lot of ground, converting every complex document schema, such as those used for DevOps pipelines or system configuration files, is not a primary goal of the tool.
Data structure definitions are an essential part of data exchange, serialization, and storage. They define the shape and type of data, and they are foundational for tooling and libraries for working with the data. Nearly all data schema languages are coupled to a specific data exchange or storage format, locking the definitions to that format.
Avrotize is designed as a tool to "unlock" data definitions from JSON Schema or XML Schema and make them usable in other contexts. The intent is also to lay a foundation for transcoding data from one format to another, by translating the schema definitions as accurately as possible into the target format's schema model. The transcoding of the data itself requires separate tools that are beyond the scope of this project.
The use of the term "data structure definition" and not "data object definition" is quite intentional. The focus of the tool is on data structures that can be used for messaging and eventing payloads, for data serialization, and for database tables, with the goal that those structures can be mapped cleanly from and to common programming language types.
Therefore, Avrotize intentionally ignores common techniques to model object-oriented inheritance. For instance, when converting from JSON Schema, all content from allOf expressions is merged into a single record type rather than trying to model the inheritance tree in Avro.
Avrotize Schema is a schema model that is a full superset of the popular Apache Avro Schema model. Avrotize Schema is the "pivot point" for this tool. All schemas are converted from and to Avrotize Schema.
Since Avrotize Schema is a superset of Avro Schema and uses its extensibility features, every Avrotize Schema is also a valid Avro Schema and vice versa.
Why did we pick Avro Schema as the foundational schema model?
Avro Schema ...
It needs to be noted here that while Avro Schema is great for defining data structures, and data classes generated from Avro Schema using this tool or other tools can be used with the most popular JSON serialization libraries, the Apache Avro project's own JSON encoding has fairly grave interoperability issues with common usage of JSON. Avrotize therefore defines an alternate JSON encoding in avrojson.md.
Avro Schema does not support all the bells and whistles of XML Schema or JSON Schema, but that is a feature, not a bug, as it ensures the portability of the schemas across different systems and infrastructures. Specifically, Avro Schema does not support many of the data validation features found in JSON Schema or XML Schema. There are no pattern, format, minimum, maximum, or required keywords in Avro Schema, and Avro does not support conditional validation.
In a system where data originates as XML or JSON described by a validating XML Schema or JSON Schema, the assumption made here is that the data will be validated against its native schema language first, and the Avro Schema will then be used for transformation, transfer, or storage.
When converting Avrotize Schema to Kusto Data Table Definition (KQL), SQL Table Definition, or Parquet Schema, the tool can add special columns for CloudEvents attributes. CNCF CloudEvents is a specification for describing event data in a common way.
The rationale for adding such columns to database tables is that messages and events commonly separate event metadata from the payload data, while that information is merged when events are projected into a database. The metadata often carries important context information about the event that is not contained in the payload itself. Therefore, the tool can add those columns to the database tables for easy alignment of the message context with the payload when building event stores.
avrotize p2a <path_to_proto_file> [--out <path_to_avro_schema_file>]
Parameters:
- <path_to_proto_file>: The path to the Protobuf schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.

Conversion notes:
- The Protobuf Timestamp type is mapped to the Avro logical type 'timestamp-millis'. The rest of the well-known Protobuf types are kept as Avro record types with the same field names and types.
- Proto allows typed keys for map, Avro does not. When converting from Proto to Avro, the type information for the map keys is ignored.
- The extensions and reserved keywords in the Proto schema are ignored.
- The optional keyword results in an Avro field being nullable (a union with the null type), while the required keyword results in a non-nullable field. The repeated keyword results in an Avro field being an array of the field type.
- The oneof keyword in Proto is mapped to an Avro union type.
- Any options in the Proto schema are ignored.
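For example, to convert a Protobuf schema to an Avrotize Schema (the paths are placeholders):

avrotize p2a ./proto/orders.proto --out ./schemas/orders.avsc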
avrotize a2p <path_to_avro_schema_file> [--out <path_to_proto_directory>] [--naming <naming_mode>] [--allow-optional]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Protobuf schema directory to write the conversion result to. If omitted, the output is directed to stdout.
- --naming: (optional) Type naming convention. Choices are snake, camel, pascal.
- --allow-optional: (optional) Enable support for 'optional' fields.

Conversion notes:
- The tool emits a .proto file with the package definition and an import statement for each namespace found in the Avrotize Schema.
- Avro type unions ([]) are converted to oneof expressions in Proto. Avro allows for maps and arrays in the type union, whereas Proto only supports scalar types and message type references. The tool will therefore emit message types containing a single array or map field for any such case and add it to the containing type, and will also recursively resolve further unions in the array and map values.
- When unions are expanded into oneof expressions, the alternative fields need to be assigned field numbers, which will shift the field numbers for any subsequent fields.
avrotize j2a <path_to_json_schema_file> [--out <path_to_avro_schema_file>] [--namespace <avro_schema_namespace>] [--split-top-level-records]

Parameters:
- <path_to_json_schema_file>: The path to the JSON schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --namespace: (optional) The namespace to use in the Avrotize Schema if the JSON schema does not define a namespace.
- --split-top-level-records: (optional) Split top-level records into separate files.
avrotize a2j <path_to_avro_schema_file> [--out <path_to_json_schema_file>] [--naming <naming_mode>]
Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the JSON schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --naming: (optional) Type naming convention. Choices are snake, camel, pascal, default.
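For example (file names are placeholders):

avrotize a2j orders.avsc --out orders.jsonschema.json --naming pascal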
avrotize x2a <path_to_xsd_file> [--out <path_to_avro_schema_file>] [--namespace <avro_schema_namespace>]
Parameters:
- <path_to_xsd_file>: The path to the XML schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --namespace: (optional) The namespace to use in the Avrotize Schema if the XML schema does not define a namespace.

Conversion notes:
- xsd:any is handled specially, as Avro does not support arbitrary typing and must always use a named type. The tool will map xsd:any to a field any typed as a union that allows scalar values or two levels of array and/or map nesting.
- simpleType declarations that define enums are mapped to enum types in Avro. All other facets are ignored and simple types are mapped to the corresponding Avro type.
- complexType declarations that have simple content, where a base type is augmented with attributes, are mapped to a record type in Avro. Any other facets defined on the complex type are ignored.
- Each field carries an xmlkind extension attribute that indicates whether the field was an element or an attribute in the XML schema.
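For example, to convert an XSD file and supply a fallback namespace (paths and namespace are placeholders):

avrotize x2a ./schemas/invoice.xsd --out ./schemas/invoice.avsc --namespace com.example.invoices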
avrotize a2x <path_to_avro_schema_file> [--out <path_to_xsd_schema_file>] [--namespace <target_namespace>]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the XML schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --namespace: (optional) Target namespace for the XSD schema.

Conversion notes:
then joined with a choice.
avrotize asn2a <path_to_asn1_schema_file>[,<path_to_asn1_schema_file>,...] [--out <path_to_avro_schema_file>]
Parameters:
- <path_to_asn1_schema_file>: The path to the ASN.1 schema file to be converted. The tool supports multiple files in a comma-separated list. If omitted, the file is read from stdin.
- --out: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.

Conversion notes:
- SEQUENCE and SET are mapped to Avro record types.
- CHOICE is mapped to an Avro record type with all fields being optional. While the CHOICE type technically corresponds to an Avro union, the ASN.1 type has different named fields for each option, which is not a feature of Avro unions.
- OBJECT IDENTIFIER is mapped to an Avro string type.
- ENUMERATED is mapped to an Avro enum type.
- SEQUENCE OF and SET OF are mapped to Avro array types.
- BIT STRING is mapped to Avro bytes type.
- OCTET STRING is mapped to Avro bytes type.
- INTEGER is mapped to Avro long type.
- REAL is mapped to Avro double type.
- BOOLEAN is mapped to Avro boolean type.
- NULL is mapped to Avro null type.
- UTF8String, PrintableString, IA5String, BMPString, NumericString, TeletexString, VideotexString, GraphicString, VisibleString, GeneralString, UniversalString, CharacterString, T61String are all mapped to Avro string type.
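For example, several ASN.1 modules can be converted in a single run by passing them as a comma-separated list (file names are placeholders):

avrotize asn2a ./asn1/common.asn1,./asn1/records.asn1 --out ./schemas/records.avsc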
avrotize k2a --kusto-uri <kusto_cluster_uri> --kusto-database <kusto_database> [--out <path_to_avro_schema_file>] [--emit-cloudevents-xregistry]

Parameters:
- --kusto-uri: The URI of the Kusto cluster to connect to.
- --kusto-database: The name of the Kusto database to read the table definitions from.
- --out: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --emit-cloudevents-xregistry: (optional) See discussion below.

Conversion notes:
- bool is mapped to Avro boolean type.
- datetime is mapped to Avro long type with logical type timestamp-millis.
- decimal is mapped to a logical Avro type with the logicalType set to decimal and the precision and scale set to the values of the decimal type in Kusto.
- guid is mapped to Avro string type.
- int is mapped to Avro int type.
- long is mapped to Avro long type.
- real is mapped to Avro double type.
- string is mapped to Avro string type.
- timespan is mapped to a logical Avro type with the logicalType set to duration.
- For dynamic columns, the tool will sample the data in the table to determine the structure of the dynamic column. The tool will map the dynamic column to an Avro record type with fields that correspond to the fields found in the dynamic column. If the dynamic column contains nested dynamic columns, the tool will recursively map those to Avro record types. If records with conflicting structures are found in the dynamic column, the tool will emit a union of record types for the dynamic column.
- If the --emit-cloudevents-xregistry option is set, the tool will emit an xRegistry registry manifest file with a CloudEvent message definition for each table in the Kusto database and a separate Avro Schema for each table in the embedded schema registry. If one or more tables are found to contain CloudEvent data (as indicated by the presence of the CloudEvents attribute columns), the tool will inspect the content of the type (or __type) columns to determine which CloudEvent types have been stored in the table and will emit a CloudEvent definition and schema for each unique type.
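For example, to read the table definitions of a database (the cluster URI and database name are placeholders):

avrotize k2a --kusto-uri https://mycluster.region.kusto.windows.net --kusto-database mydatabase --out ./schemas/mydatabase.avsc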
avrotize a2k <path_to_avro_schema_file> [--out <path_to_kusto_kql_file>] [--record-type <record_type>] [--emit-cloudevents-columns] [--emit-cloudevents-dispatch]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Kusto KQL file to write the conversion result to. If omitted, the output is directed to stdout.
- --record-type: (optional) The name of the Avro record type to convert to a Kusto table.
- --emit-cloudevents-columns: (optional) If set, the tool will add CloudEvents attribute columns to the table: ___id, ___source, ___subject, ___type, and ___time.
- --emit-cloudevents-dispatch: (optional) If set, the tool will add a table named _cloudevents_dispatch to the script or database, which serves as an ingestion and dispatch table for CloudEvents. The table has columns for the core CloudEvents attributes and a data column that holds the CloudEvents data. For each table in the Avrotize Schema, the tool will create an update policy that maps events whose type attribute matches the Avro type name to the respective table.

Conversion notes:
- Only a record type can be mapped to a Kusto table. If the Avrotize Schema contains other types (like enum or array), the tool will ignore them.
- A single record type in the Avrotize Schema is converted to a Kusto table. If the Avrotize Schema contains other record types, they will be ignored. The --record-type option can be used to specify which record type to convert.
- Nested, complex types are represented as dynamic in the Kusto table.
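For example, to emit a KQL script with CloudEvents attribute columns and a dispatch table (file names are placeholders):

avrotize a2k telemetry.avsc --out telemetry.kql --emit-cloudevents-columns --emit-cloudevents-dispatch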
avrotize a2sql [input] --out <path_to_sql_script> --dialect <dialect>

Parameters:
- input: The path to the Avrotize schema file to be converted (or read from stdin if omitted).
- --out: The path to the SQL script file to write the conversion result to.
- --dialect: The SQL dialect (database type) to target. Supported dialects include: mysql, mariadb, postgres, sqlserver, oracle, sqlite, bigquery, snowflake, redshift, db2.
- --emit-cloudevents-columns: (optional) Add CloudEvents columns to the SQL table.

For detailed conversion rules and type mappings for each SQL dialect, refer to the SQL Conversion Notes document.
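For example, to generate a PostgreSQL table definition (file names are placeholders):

avrotize a2sql orders.avsc --out orders.sql --dialect postgres --emit-cloudevents-columns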
avrotize a2mongo <path_to_avro_schema_file> [--out <path_to_mongodb_schema>] [--emit-cloudevents-columns]
Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the MongoDB schema file to write the conversion result to.
- --emit-cloudevents-columns: (optional) If set, the tool will add CloudEvents attribute columns to the MongoDB schema.

Conversion notes:
- The record type is mapped to a MongoDB JSON schema of type object.
- The resulting schema can be used with the mongoimport tool to create a collection with the specified schema.

Here are the "Convert ..." sections for the newly added commands:
avrotize a2cassandra [input] --out <output_directory> [--emit-cloudevents-columns]
- input: Path to the Avrotize schema file (or read from stdin if omitted).
- --out: Output path for the Cassandra schema (required).
- --emit-cloudevents-columns: Add CloudEvents columns to the Cassandra schema (optional, default: false).

Refer to the detailed conversion notes for Cassandra in the NoSQL Conversion Notes.
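For example (the schema file and output directory are placeholders):

avrotize a2cassandra orders.avsc --out ./cassandra --emit-cloudevents-columns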
avrotize a2dynamodb [input] --out <output_directory> [--emit-cloudevents-columns]
- input: Path to the Avrotize schema file (or read from stdin if omitted).
- --out: Output path for the DynamoDB schema (required).
- --emit-cloudevents-columns: Add CloudEvents columns to the DynamoDB schema (optional, default: false).

Refer to the detailed conversion notes for DynamoDB in the NoSQL Conversion Notes.
avrotize a2es [input] --out <output_directory> [--emit-cloudevents-columns]
- input: Path to the Avrotize schema file (or read from stdin if omitted).
- --out: Output path for the Elasticsearch schema (required).
- --emit-cloudevents-columns: Add CloudEvents columns to the Elasticsearch schema (optional, default: false).

Refer to the detailed conversion notes for Elasticsearch in the NoSQL Conversion Notes.
avrotize a2couchdb [input] --out <output_directory> [--emit-cloudevents-columns]
- input: Path to the Avrotize schema file (or read from stdin if omitted).
- --out: Output path for the CouchDB schema (required).
- --emit-cloudevents-columns: Add CloudEvents columns to the CouchDB schema (optional, default: false).

Refer to the detailed conversion notes for CouchDB in the NoSQL Conversion Notes.
avrotize a2neo4j [input] --out <output_directory> [--emit-cloudevents-columns]
- input: Path to the Avrotize schema file (or read from stdin if omitted).
- --out: Output path for the Neo4j schema (required).
- --emit-cloudevents-columns: Add CloudEvents columns to the Neo4j schema (optional, default: false).

Refer to the detailed conversion notes for Neo4j in the NoSQL Conversion Notes.
avrotize a2firebase [input] --out <output_directory> [--emit-cloudevents-columns]
- input: Path to the Avrotize schema file (or read from stdin if omitted).
- --out: Output path for the Firebase schema (required).
- --emit-cloudevents-columns: Add CloudEvents columns to the Firebase schema (optional, default: false).

Refer to the detailed conversion notes for Firebase in the NoSQL Conversion Notes.
avrotize a2cosmos [input] --out <output_directory> [--emit-cloudevents-columns]
- input: Path to the Avrotize schema file (or read from stdin if omitted).
- --out: Output path for the CosmosDB schema (required).
- --emit-cloudevents-columns: Add CloudEvents columns to the CosmosDB schema (optional, default: false).

Refer to the detailed conversion notes for CosmosDB in the NoSQL Conversion Notes.
avrotize a2hbase [input] --out <output_directory> [--emit-cloudevents-columns]
- input: Path to the Avrotize schema file (or read from stdin if omitted).
- --out: Output path for the HBase schema (required).
- --emit-cloudevents-columns: Add CloudEvents columns to the HBase schema (optional, default: false).

Refer to the detailed conversion notes for HBase in the NoSQL Conversion Notes.
avrotize a2pq <path_to_avro_schema_file> [--out <path_to_parquet_schema_file>] [--record-type <record-type-from-avro>] [--emit-cloudevents-columns]
Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Parquet schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --record-type: (optional) The name of the Avro record type to convert to a Parquet schema.
- --emit-cloudevents-columns: (optional) If set, the tool will add CloudEvents attribute columns to the Parquet schema: __id, __source, __subject, __type, and __time.

Conversion notes:
- The Parquet schema is generated from a single record type. If the Avrotize Schema contains a top-level union, the --record-type option must be used to specify which record type to emit.
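For example, to pick one record type out of a schema with a top-level union (file and record names are placeholders):

avrotize a2pq telemetry.avsc --out telemetry.parquet --record-type TelemetryEvent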
avrotize a2ib <path_to_avro_schema_file> [--out <path_to_iceberg_schema_file>] [--record-type <record-type-from-avro>] [--emit-cloudevents-columns]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Iceberg schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --record-type: (optional) The name of the Avro record type to convert to an Iceberg schema.
- --emit-cloudevents-columns: (optional) If set, the tool will add CloudEvents attribute columns to the Iceberg schema: __id, __source, __subject, __type, and __time.

Conversion notes:
- The Iceberg schema is generated from a single record type. If the Avrotize Schema contains a top-level union, the --record-type option must be used to specify which record type to emit.
avrotize pq2a <path_to_parquet_file> [--out <path_to_avro_schema_file>] [--namespace <avro_schema_namespace>]

Parameters:
- <path_to_parquet_file>: The path to the Parquet file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --namespace: (optional) The namespace to use in the Avrotize Schema if the Parquet file does not define a namespace.
avrotize csv2a <path_to_csv_file> [--out <path_to_avro_schema_file>] [--namespace <avro_schema_namespace>]
Parameters:
- <path_to_csv_file>: The path to the CSV file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Avrotize Schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --namespace: (optional) The namespace to use in the Avrotize Schema if the CSV file does not define a namespace.
avrotize kstruct2a [input] --out <path_to_avro_schema_file>
Parameters:
- input: The path to the Kafka Struct file to be converted (or read from stdin if omitted).
- --out: The path to the Avrotize Schema file to write the conversion result to.
- --kstruct: Deprecated: The path to the Kafka Struct file (for backward compatibility).
avrotize a2cs <path_to_avro_schema_file> [--out <path_to_csharp_dir>] [--namespace <csharp_namespace>] [--avro-annotation] [--system_text_json_annotation] [--newtonsoft-json-annotation] [--pascal-properties]
Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the directory to write the C# classes to. Required.
- --namespace: (optional) The namespace to use in the C# classes.
- --avro-annotation: (optional) Use Avro annotations.
- --system_text_json_annotation: (optional) Use System.Text.Json annotations.
- --newtonsoft-json-annotation: (optional) Use Newtonsoft.Json annotations.
- --pascal-properties: (optional) Use PascalCase properties.

Conversion notes:
- The --avro-annotation option adds Avro annotations, the --system_text_json_annotation option adds System.Text.Json annotations, and the --newtonsoft-json-annotation option adds Newtonsoft.Json annotations.
- The --pascal-properties option changes the naming convention of the properties to PascalCase.
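For example, to generate C# classes with System.Text.Json annotations and PascalCase properties (paths and namespace are placeholders):

avrotize a2cs orders.avsc --out ./src/Orders.Model --namespace Example.Orders --system_text_json_annotation --pascal-properties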
avrotize a2java <path_to_avro_schema_file> [--out <path_to_java_dir>] [--package <java_package>] [--avro-annotation] [--jackson-annotation] [--pascal-properties]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the directory to write the Java classes to. Required.
- --package: (optional) The package to use in the Java classes.
- --avro-annotation: (optional) Use Avro annotations.
- --jackson-annotation: (optional) Use Jackson annotations.
- --pascal-properties: (optional) Use PascalCase properties.

Conversion notes:
- The --avro-annotation option adds Avro annotations, and the --jackson-annotation option adds Jackson annotations.
- The --pascal-properties option changes the naming convention of the properties to PascalCase.
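For example (paths and package name are placeholders):

avrotize a2java orders.avsc --out ./src/main/java --package com.example.orders --jackson-annotation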
avrotize a2py <path_to_avro_schema_file> [--out <path_to_python_dir>] [--package <python_package>] [--dataclasses-json-annotation] [--avro-annotation]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the directory to write the Python classes to. Required.
- --package: (optional) The package to use in the Python classes.
- --dataclasses-json-annotation: (optional) Use dataclasses-json annotations.
- --avro-annotation: (optional) Use Avro annotations.

Conversion notes:
- The --dataclasses-json-annotation option adds dataclasses-json annotations, and the --avro-annotation option adds Avro annotations.
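For example (paths and package name are placeholders):

avrotize a2py orders.avsc --out ./orders_model --package orders_model --dataclasses-json-annotation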
avrotize a2ts <path_to_avro_schema_file> [--out <path_to_typescript_dir>] [--package <typescript_package>] [--avro-annotation] [--typedjson-annotation]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the directory to write the TypeScript classes to. Required.
- --package: (optional) The package to use in the TypeScript classes.
- --avro-annotation: (optional) Use Avro annotations.
- --typedjson-annotation: (optional) Use TypedJSON annotations.

Conversion notes:
- The --avro-annotation option adds Avro annotations, and the --typedjson-annotation option adds TypedJSON annotations.
avrotize a2js <path_to_avro_schema_file> [--out <path_to_javascript_dir>] [--package <javascript_package>] [--avro-annotation]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the directory to write the JavaScript classes to. Required.
- --package: (optional) The package to use in the JavaScript classes.
- --avro-annotation: (optional) Use Avro annotations.

Conversion notes:
- The --avro-annotation option adds Avro annotations.
avrotize a2cpp <path_to_avro_schema_file> [--out <path_to_cpp_dir>] [--namespace <cpp_namespace>] [--avro-annotation] [--json-annotation]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the directory to write the C++ classes to. Required.
- --namespace: (optional) The namespace to use in the C++ classes.
- --avro-annotation: (optional) Use Avro annotations.
- --json-annotation: (optional) Use JSON annotations.

Conversion notes:
- Each record type in the Avrotize Schema is converted to a C++ class.
- The --avro-annotation option adds Avro annotations, and the --json-annotation option adds JSON annotations.
avrotize a2go <path_to_avro_schema_file> [--out <path_to_go_dir>] [--package <go_package>] [--avro-annotation] [--json-annotation] [--package-site <go_package_site>] [--package-username <go_package_username>]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the directory to write the Go classes to. Required.
- --package: (optional) The package to use in the Go classes.
- --package-site: (optional) The package site to use in the Go classes.
- --package-username: (optional) The package username to use in the Go classes.
- --avro-annotation: (optional) Use Avro annotations.
- --json-annotation: (optional) Use JSON annotations.

Conversion notes:
- The --avro-annotation option adds Avro annotations, and the --json-annotation option adds JSON annotations.
avrotize a2rust <path_to_avro_schema_file> [--out <path_to_rust_dir>] [--package <rust_package>] [--avro-annotation] [--serde-annotation]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the directory to write the Rust classes to. Required.
- --package: (optional) The package to use in the Rust classes.
- --avro-annotation: (optional) Use Avro annotations.
- --serde-annotation: (optional) Use Serde annotations.

Conversion notes:
- The --avro-annotation option adds Avro annotations, and the --serde-annotation option adds Serde annotations.
avrotize a2dp <path_to_avro_schema_file> [--out <path_to_datapackage_file>] [--record-type <record-type-from-avro>]

Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Datapackage schema file to write the conversion result to. If omitted, the output is directed to stdout.
- --record-type: (optional) The name of the Avro record type to convert to a Datapackage schema.
avrotize a2md <path_to_avro_schema_file> [--out <path_to_markdown_file>]
Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
- --out: The path to the Markdown file to write the conversion result to. If omitted, the output is directed to stdout.
avrotize pcf <path_to_avro_schema_file>
Parameters:
- <path_to_avro_schema_file>: The path to the Avrotize Schema file to be converted. If omitted, the file is read from stdin.
This document provides an overview of the usage and functionality of Avrotize. For more detailed information, please refer to the Avrotize Schema documentation and the individual command help messages.