
ts-proto transforms your .proto files into strongly-typed, idiomatic TypeScript files!
The 2.x release of ts-proto migrated the low-level Protobuf serialization that its encode and decode methods use from the venerable, but aging & stagnant, protobufjs package to @bufbuild/protobuf.
If you only used the encode and decode methods, this should largely be a non-breaking change.
However, if you used any code that used the old protobufjs Writer or Reader classes, you'll need to update your code to use the new @bufbuild/protobuf classes:
import { BinaryReader, BinaryWriter } from "@bufbuild/protobuf/wire";
If migrating to @bufbuild/protobuf is a blocker for you, you can pin your ts-proto version to 1.x.
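For example, a minimal before/after sketch of the encode/decode call sites (the Foo message and its import path are illustrative):

import { BinaryReader, BinaryWriter } from "@bufbuild/protobuf/wire";
import { Foo } from "./generated/foo"; // hypothetical ts-proto-generated message

// 1.x code that did `Foo.encode(msg, Writer.create())` becomes:
const writer = new BinaryWriter();
Foo.encode({ name: "hello" }, writer);
const bytes: Uint8Array = writer.finish();

// 1.x code that did `Foo.decode(Reader.create(bytes))` becomes:
const decoded = Foo.decode(new BinaryReader(bytes));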
Disclaimer & apology: I had intended to release ts-proto 2.x as an alpha release, but didn't get the semantic-release config correct, and so ts-proto 2.x was published as a major release without a proper alpha/beta cycle.
If you could file reports (or better PRs!) for any issues you come across while the release is still fresh, that would be greatly appreciated.
Any tips or tricks for others on the migration would also be appreciated!
ts-proto generates TypeScript types from protobuf schemas.
I.e. given a person.proto schema like:
message Person {
string name = 1;
}
ts-proto will generate a person.ts file like:
interface Person {
name: string
}
const Person = {
encode(person): Writer { ... }
decode(reader): Person { ... }
toJSON(person): unknown { ... }
fromJSON(data): Person { ... }
}
It also knows about services and will generate types for them as well, i.e.:
export interface PingService {
ping(request: PingRequest): Promise<PingResponse>;
}
It will also generate client implementations of PingService; currently Twirp, grpc-web, grpc-js and nestjs are supported.
npm install ts-proto
protoc --plugin=./node_modules/.bin/protoc-gen-ts_proto --ts_proto_out=. ./simple.proto
(The output parameter, ts_proto_out, is named based on the suffix of the plugin's name, i.e. the "ts_proto" suffix in the --plugin=./node_modules/.bin/protoc-gen-ts_proto parameter becomes the _out prefix, per protoc's CLI conventions.)
On Windows, use protoc --plugin=protoc-gen-ts_proto=".\\node_modules\\.bin\\protoc-gen-ts_proto.cmd" --ts_proto_out=. ./simple.proto (see #93).
Ensure you're using a modern protoc (see installation instructions for your platform), i.e. protoc v3.0.0 doesn't support the _opt flag.
This will generate *.ts source files for the given *.proto types.
If you want to package these source files into an npm package to distribute to clients, just run tsc on them as usual to generate the .js/.d.ts files, and deploy the output as a regular npm package.
If you're using Buf, pass strategy: all in your buf.gen.yaml file (docs).
version: v1
plugins:
- name: ts
out: ../gen/ts
strategy: all
path: ../node_modules/ts-proto/protoc-gen-ts_proto
To prevent buf push from reading irrelevant .proto files, configure buf.yaml like so:
build:
excludes: [node_modules]
You can also use the official plugin published to the Buf Registry.
version: v1
plugins:
- plugin: buf.build/community/stephenh-ts-proto
out: ../gen/ts
opt:
- outputServices=...
- useExactTypes=...
If you're using a modern TS setup with either esModuleInterop or running in an ESM environment, you'll need to pass ts_proto_opts of:
- esModuleInterop=true if using esModuleInterop in your tsconfig.json, and
- importSuffix=.js if executing the generated ts-proto code in an ESM environment
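For example, both options can be combined in a single invocation (simple.proto is a placeholder):

protoc --plugin=./node_modules/.bin/protoc-gen-ts_proto --ts_proto_opt=esModuleInterop=true,importSuffix=.js --ts_proto_out=. ./simple.proto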
In terms of the code that ts-proto generates, the general goals are:
- A clean break from either the built-in Google/Java-esque JS code of protoc or the "make .d.ts files match the *.js comments" approach of protobufjs (technically the protobufjs/minimal package is used for actually reading/writing bytes)
- A codegen-only *.proto-to-*.ts workflow; currently no runtime reflection/loading of dynamic .proto files

Note that ts-proto is not an out-of-the-box RPC framework; instead it's more of a swiss-army knife (as witnessed by its many config options) that lets you build exactly the RPC framework you'd like on top of it, i.e. one that best integrates with your company's protobuf ecosystem (for better or worse, protobuf RPC is still a somewhat fragmented ecosystem).
If you'd like an out-of-the-box RPC framework built on top of ts-proto, there are a few examples:
(Note for potential contributors, if you develop other frameworks/mini-frameworks, or even blog posts/tutorials, on using ts-proto, we're happy to link to them.)
We also don't support clients for google.api.http-based Google Cloud APIs, see #948 if you'd like to submit a PR.
The generated types are "just data", i.e.:
export interface Simple {
name: string;
age: number;
createdAt: Date | undefined;
child: Child | undefined;
state: StateEnum;
grandChildren: Child[];
coins: number[];
}
Along with encode/decode factory methods:
export const Simple = {
create(baseObject?: DeepPartial<Simple>): Simple {
...
},
encode(message: Simple, writer: Writer = Writer.create()): Writer {
...
},
decode(reader: Reader, length?: number): Simple {
...
},
fromJSON(object: any): Simple {
...
},
fromPartial(object: DeepPartial<Simple>): Simple {
...
},
toJSON(message: Simple): unknown {
...
},
};
This allows idiomatic TS/JS usage like:
const bytes = Simple.encode({ name: ..., age: ..., ... }).finish();
const simple = Simple.decode(Reader.create(bytes));
const { name, age } = simple;
Which can dramatically ease integration when converting to/from other layers without creating a class and calling the right getters/setters.
A poor man's attempt at "please give us back optional types"
The canonical protobuf wrapper types, i.e. google.protobuf.StringValue, are mapped as optional values, i.e. string | undefined, which means for primitives we can kind of pretend the protobuf type system has optional types.
(Update: ts-proto now also supports the proto3 optional keyword.)
Timestamps are mapped as Date
(Configurable with the useDate parameter.)
fromJSON/toJSON use the proto3 canonical JSON encoding format (e.g. timestamps are ISO strings), unlike protobufjs.
ObjectIds can be mapped as mongodb.ObjectId
(Configurable with the useMongoObjectId parameter.)
(Note: this is currently only supported by the Twirp clients.)
If you're using ts-proto's clients to call backend micro-services, similar to the N+1 problem in SQL applications, it is easy for micro-service clients to (when serving an individual request) inadvertently trigger multiple separate RPC calls for "get book 1", "get book 2", "get book 3", that should really be batched into a single "get books [1, 2, 3]" (assuming the backend supports a batch-oriented RPC method).
ts-proto can help with this, and essentially auto-batch your individual "get book" calls into batched "get books" calls.
For ts-proto to do this, you need to implement your service's RPC methods with the batching convention of:
- a method named Batch<OperationName>
- the Batch<OperationName> input type has a single repeated field (i.e. repeated string ids = 1)
- the Batch<OperationName> output type has either:
  - a single repeated field (i.e. repeated Foo foos = 1) where the output order is the same as the input ids order, or
  - a map of the input to the output (i.e. map<string, Entity> entities = 1;)

When ts-proto recognizes methods of this pattern, it will automatically create a "non-batch" version of <OperationName> for the client, i.e. client.Get<OperationName>, that takes a single id and returns a single result.
This provides the client code with the illusion that it can make individual Get<OperationName> calls (which is generally preferable/easier when implementing the client's business logic), but the actual implementation that ts-proto provides will end up making Batch<OperationName> calls to the backend service.
You also need to enable the useContext=true build-time parameter, which gives all client methods a Go-style ctx parameter, with a getDataLoaders method that lets ts-proto cache/resolve request-scoped DataLoaders, which provide the fundamental auto-batch detection/flushing behavior.
See the batching.proto file and related tests for examples/more details.
But the net effect is that ts-proto can provide SQL-/ORM-style N+1 prevention for client calls, which can be critical especially in high-volume / highly-parallel implementations like GraphQL front-end gateways calling backend micro-services.
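For illustration, a sketch of the client-side effect, with hypothetical BookService/GetBook/BatchGetBook names and an app-provided Context per useContext=true:

import { BookServiceClientImpl } from "./generated/book"; // hypothetical generated client
import { Context } from "./context"; // your app's ctx type, per useContext=true

async function loadBooks(ctx: Context, client: BookServiceClientImpl) {
  // These read as three individual calls...
  const [b1, b2, b3] = await Promise.all([
    client.GetBook(ctx, { id: "1" }),
    client.GetBook(ctx, { id: "2" }),
    client.GetBook(ctx, { id: "3" }),
  ]);
  // ...but the request-scoped DataLoaders flush them as a single
  // BatchGetBook({ ids: ["1", "2", "3"] }) RPC to the backend.
  return [b1, b2, b3];
}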
ts-proto is a protoc plugin, so you run it by (either directly in your project, or more likely in your mono-repo schema pipeline, i.e. like Ibotta or Namely):
- Add ts-proto to your package.json
- Run npm install to download it
- Invoke protoc with a plugin parameter like:

protoc --plugin=node_modules/ts-proto/protoc-gen-ts_proto ./batching.proto -I.
ts-proto can also be invoked with Gradle using the protobuf-gradle-plugin:
protobuf {
plugins {
// `ts` can be replaced by any unused plugin name, e.g. `tsproto`
ts {
path = 'path/to/plugin'
}
}
// This section only needed if you provide plugin options
generateProtoTasks {
all().each { task ->
task.plugins {
// Must match plugin ID declared above
ts {
option 'foo=bar'
}
}
}
}
}
Generated code will be placed in the Gradle build directory.
With --ts_proto_opt=globalThisPolyfill=true, ts-proto will include a polyfill for globalThis.
Defaults to false, i.e. we assume globalThis is available.
With --ts_proto_opt=context=true, the services will have a Go-style ctx parameter, which is useful for tracing/logging/etc. if you're not using node's async_hooks api due to performance reasons.
With --ts_proto_opt=forceLong=long, all 64-bit numbers will be parsed as instances of Long (using the long library).
With --ts_proto_opt=forceLong=string, all 64-bit numbers will be output as strings.
With --ts_proto_opt=forceLong=bigint, all 64-bit numbers will be output as BigInts. This option still uses the long library to encode/decode internally within protobuf.js, but then converts to/from BigInts in the ts-proto-generated code.
The default behavior is forceLong=number, which will internally still use the long library to encode/decode values on the wire (so you will still see a util.Long = Long line in your output), but will convert the long values to number automatically for you. Note that a runtime error is thrown if, while doing this conversion, a 64-bit value is larger than can be correctly stored as a number.
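As a quick reference, here is the field type generated for an int64 count = 1 field under each setting (the interface names are illustrative):

import Long from "long";

interface CounterAsNumber { count: number } // forceLong=number (default; throws if unsafe)
interface CounterAsLong   { count: Long }   // forceLong=long
interface CounterAsString { count: string } // forceLong=string
interface CounterAsBigint { count: bigint } // forceLong=bigint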
With --ts_proto_opt=useJsTypeOverride, 64-bit numbers will be output as the FieldOption.JSType specified on the field. This takes precedence over the forceLong option provided.
With --ts_proto_opt=esModuleInterop=true changes output to be esModuleInterop compliant.
Specifically the Long imports will be generated as import Long from 'long' instead of import * as Long from 'long'.
With --ts_proto_opt=env=node or browser or both, ts-proto will make environment-specific assumptions in your output. This defaults to both, which makes no environment-specific assumptions.
Using node changes the types of bytes from Uint8Array to Buffer for easier integration with the node ecosystem which generally uses Buffer.
Currently browser doesn't have any specific behavior other than being "not node". It probably will soon/at some point.
With --ts_proto_opt=useOptionals=messages (for message fields) or --ts_proto_opt=useOptionals=all (for message and scalar fields), fields are declared as optional keys, e.g. field?: Message instead of the default field: Message | undefined.
ts-proto defaults to useOptionals=none because it:
1. Prevents typos when initializing messages,
2. Provides the most consistent API to readers, and
3. Ensures production messages are properly initialized with all fields.
For typo prevention, optional fields make it easy for extra fields to slip into a message (until we get Exact Types), i.e.:
interface SomeMessage {
firstName: string;
lastName: string;
}
// Declared with a typo
const data = { firstName: "a", lastTypo: "b" };
// With useOptionals=none, this correctly fails to compile; if `lastName` was optional, it would not
const message: SomeMessage = { ...data };
For a consistent API, if SomeMessage.lastName is optional lastName?, then readers have to check two empty conditions: a) is lastName undefined (b/c it was created in-memory and left unset), or b) is lastName empty string (b/c we read SomeMessage off the wire and, per the proto3 spec, initialized lastName to empty string)?
For ensuring proper initialization, if later SomeMessage.middleInitial is added, but it's marked as optional middleInitial?, you may have many call sites in production code that should now be passing middleInitial to create a valid SomeMessage, but are not.
So, between typo-prevention, reader inconsistency, and proper initialization, ts-proto recommends using useOptionals=none as the "most safe" option.
All that said, this approach does require writers/creators to set every field (although fromPartial and create are meant to address this), so if you still want to have optional keys, you can set useOptionals=messages or useOptionals=all.
(See this issue and this issue for discussions on useOptional.)
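For example, assuming a generated companion object for the SomeMessage above, fromPartial lets writers supply only the fields they care about:

const message = SomeMessage.fromPartial({ firstName: "a" });
// message.lastName === "", since the remaining fields receive their proto3 defaults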
With --ts_proto_opt=exportCommonSymbols=false, utility types like DeepPartial and protobufPackage won't be exported.
This should make it possible to create barrel imports of the generated output, i.e. import * from ./foo and import * from ./bar.
Note that if you have the same message name used in multiple *.proto files, you will still get import conflicts.
With --ts_proto_opt=oneof=unions, oneof fields will be generated as ADTs.
See the "OneOf Handling" section.
With --ts_proto_opt=unrecognizedEnumName=<NAME> enums will contain a key <NAME> with value of the unrecognizedEnumValue option.
Defaults to UNRECOGNIZED.
With --ts_proto_opt=unrecognizedEnumValue=<NUMBER> enums will contain a key provided by the unrecognizedEnumName option with value of <NUMBER>.
Defaults to -1.
With --ts_proto_opt=unrecognizedEnum=false enums will not contain an unrecognized enum key and value as provided by the unrecognizedEnumName and unrecognizedEnumValue options.
With --ts_proto_opt=removeEnumPrefix=true generated enums will have the enum name removed from members.
FooBar.FOO_BAR_BAZ = "FOO_BAR_BAZ" will generate FooBar.BAZ = "FOO_BAR_BAZ"
If any value in the enum would be made invalid by removing the enum name, the enum name will be present for all the values in the enum.
FooBar.FOO_BAR_1 = "FOO_BAR_BAZ" will generate FooBar.FOO_BAR_1 = "FOO_BAR_BAZ"
FooBar.FOO_BAR_1A = "FOO_BAR_BAZ" will generate FooBar.FOO_BAR_1A = "FOO_BAR_BAZ"
FooBar.FOO_BAR_Infinity = "FOO_BAR_BAZ" will generate FooBar.FOO_BAR_Infinity = "FOO_BAR_BAZ" (this is technically valid but is likely to lead to downstream issues)
FooBar.FOO_BAR_INFINITY = "FOO_BAR_BAZ" will generate FooBar.INFINITY = "FOO_BAR_BAZ"
With --ts_proto_opt=lowerCaseServiceMethods=true, the method names of service methods will be lowered/camel-case, i.e. service.findFoo instead of service.FindFoo.
With --ts_proto_opt=snakeToCamel=false, fields will be kept snake case in both the message keys and the toJSON / fromJSON methods.
snakeToCamel can also be set as a _-delimited list of strings (comma is reserved as the flag delimiter), i.e. --ts_proto_opt=snakeToCamel=keys_json, where including keys will make message keys be camel case and including json will make JSON keys be camel case.
Empty string, i.e. snakeToCamel=, will keep both messages keys and JSON keys as snake case (it is the same as snakeToCamel=false).
Note that to use the json_name attribute, you'll have to include json in the list.
The default behavior is keys_json, i.e. both will be camel cased, and json_name will be used if set.
With --ts_proto_opt=outputEncodeMethods=false, the Message.encode and Message.decode methods for working with protobuf-encoded/binary data will not be output.
This is useful if you want "only types".
With --ts_proto_opt=outputEncodeIncludeTypes=regex, the Message.encode method will only be output for types whose name matches the provided regular expression.
This is useful if you want to limit encode methods to only select types.
With --ts_proto_opt=outputDecodeIncludeTypes=regex, the Message.decode method will only be output for types whose name matches the provided regular expression.
This is useful if you want to limit decode methods to only select types.
With --ts_proto_opt=outputJsonMethods=false, the Message.fromJSON and Message.toJSON methods for working with JSON-coded data will not be output.
This is also useful if you want "only types".
With --ts_proto_opt=outputJsonMethods=to-only or --ts_proto_opt=outputJsonMethods=from-only, only one of the Message.toJSON and Message.fromJSON methods will be emitted.
This is useful if you're using ts-proto just to encode or decode and not for both.
With --ts_proto_opt=outputPartialMethods=false, the Message.fromPartial and Message.create methods for accepting partially-formed objects/object literals will not be output.
With --ts_proto_opt=stringEnums=true, the generated enum types will be string-based instead of int-based.
This is useful if you want "only types" and are using a gRPC REST Gateway configured to serialize enums as strings.
(Requires outputEncodeMethods=false.)
With --ts_proto_opt=outputClientImpl=false, the client implementations, i.e. FooServiceClientImpl, that implement the client-side (in Twirp, see next option for grpc-web) RPC interfaces will not be output.
With --ts_proto_opt=outputClientImpl=grpc-web, the client implementations, i.e. FooServiceClientImpl, will use the @improbable-eng/grpc-web library at runtime to send grpc messages to a grpc-web backend.
(Note that this only uses the grpc-web runtime, you don't need to use any of their generated code, i.e. the ts-proto output replaces their ts-protoc-gen output.)
You'll need to add the @improbable-eng/grpc-web and a transport to your project's package.json; see the integration/grpc-web directory for a working example. Also see #504 for integrating with grpc-web-devtools.
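A minimal wiring sketch (the FooService name is illustrative; the host and transport choice are yours):

import { grpc } from "@improbable-eng/grpc-web";
import { GrpcWebImpl, FooServiceClientImpl } from "./generated/foo"; // hypothetical generated output

const rpc = new GrpcWebImpl("https://api.example.com", {
  transport: grpc.CrossBrowserHttpTransport({ withCredentials: false }),
});
const client = new FooServiceClientImpl(rpc);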
With --ts_proto_opt=returnObservable=true, the return type of service methods will be Observable<T> instead of Promise<T>.
With --ts_proto_opt=addGrpcMetadata=true, the last argument of service methods will accept the grpc Metadata type, which contains additional information with the call (i.e. access tokens/etc.).
(Requires nestJs=true.)
With --ts_proto_opt=addNestjsRestParameter=true, the last argument of service methods will be a rest parameter with type any. This way you can use custom decorators you could normally use in nestjs.
(Requires nestJs=true.)
With --ts_proto_opt=nestJs=true, the defaults will change to generate NestJS protobuf friendly types & service interfaces that can be used in both the client-side and server-side of NestJS protobuf implementations. See the nestjs readme for more information and implementation examples.
Specifically outputEncodeMethods, outputJsonMethods, and outputClientImpl will all be false, lowerCaseServiceMethods will be true and outputServices will be ignored.
Note that addGrpcMetadata, addNestjsRestParameter and returnObservable will still be false.
With --ts_proto_opt=useDate=false, fields of type google.protobuf.Timestamp will not be mapped to type Date in the generated types. See Timestamp for more details.
With --ts_proto_opt=useMongoObjectId=true, fields of a type called ObjectId, where the message is constructed to have one field called value that is a string, will be mapped to type mongodb.ObjectId in the generated types. This will require your project to install the mongodb npm package. See ObjectId for more details.
With --ts_proto_opt=annotateFilesWithVersion=false, the generated files will not contain the versions of protoc and ts-proto used to generate the file. This option is normally set to true, such that files list the versions used.
With --ts_proto_opt=outputSchema=true, meta typings will be generated that can later be used in other code generators.
With --ts_proto_opt=outputSchema=no-file-descriptor, meta typings will be generated, but we do not include the file descriptor in the generated schema. This is useful if you are trying to minimize the size of the generated schema.
With --ts_proto_opt=outputSchema=const, meta typings will be generated as const, allowing type-safe access to all its properties. (only works with TypeScript 4.9 and up, because it also uses the satisfies operator). Can be combined with the no-file-descriptor option (outputSchema=const,outputSchema=no-file-descriptor) to not include the file descriptor in the generated schema.
With --ts_proto_opt=outputTypeAnnotations=true, each message will be given a $type field containing its fully-qualified name. You can use --ts_proto_opt=outputTypeAnnotations=static-only to omit it from the interface declaration, or --ts_proto_opt=outputTypeAnnotations=optional to make it an optional property on the interface definition. The latter option may be useful if you want to use the $type field for runtime type checking on responses from a server.
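For example, a sketch of runtime checking with outputTypeAnnotations=true (the simple.Simple name depends on your proto package):

const msg = Simple.fromPartial({ name: "a" });
if (msg.$type === "simple.Simple") {
  // safe to treat an untyped response as a Simple at runtime
}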
With --ts_proto_opt=outputTypeRegistry=true, a type registry will be generated that can be used to resolve message types by their fully-qualified names. Also, each message will be given a $type field containing its fully-qualified name.
With --ts_proto_opt=outputServices=grpc-js, ts-proto will output service definitions and server / client stubs in grpc-js format.
With --ts_proto_opt=outputServices=generic-definitions, ts-proto will output generic (framework-agnostic) service definitions. These definitions contain descriptors for each method with links to request and response types, which allows generating server and client stubs at runtime, and also generating strong types for them at compile time. An example of a library that uses this approach is nice-grpc.
With --ts_proto_opt=outputServices=nice-grpc, ts-proto will output server and client stubs for nice-grpc. This should be used together with generic definitions, i.e. you should specify two options: outputServices=nice-grpc,outputServices=generic-definitions.
With --ts_proto_opt=metadataType=Foo@./some-file, ts-proto will add a generic (framework-agnostic) metadata field to the generic service definition.
With --ts_proto_opt=outputServices=generic-definitions,outputServices=default, ts-proto will output both generic definitions and interfaces. This is useful if you want to rely on the interfaces, but also have some reflection capabilities at runtime.
With --ts_proto_opt=outputServices=false, or =none, ts-proto will output NO service definitions.
With --ts_proto_opt=rpcBeforeRequest=true, ts-proto will add a function definition to the Rpc interface definition with the signature: beforeRequest(service: string, message: string, request: <RequestType>). It will also automatically set outputServices=default. Each of the Service's methods will call beforeRequest before performing its request.
With --ts_proto_opt=rpcAfterResponse=true, ts-proto will add a function definition to the Rpc interface definition with the signature: afterResponse(service: string, message: string, response: <ResponseType>). It will also automatically set outputServices=default. Each of the Service's methods will call afterResponse before returning the response.
With --ts_proto_opt=rpcErrorHandler=true, ts-proto will add a function definition to the Rpc interface definition with the signature: handleError(service: string, message: string, error: Error). It will also automatically set outputServices=default.
With --ts_proto_opt=useAbortSignal=true, the generated services will accept an AbortSignal to cancel RPC calls.
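A sketch of cancelling an in-flight call (the client and request shape are illustrative):

const controller = new AbortController();
const pending = client.ping({ message: "hi" }, controller.signal);
controller.abort(); // rejects the pending promise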
With --ts_proto_opt=useAsyncIterable=true, the generated services will use AsyncIterable instead of Observable.
With --ts_proto_opt=emitImportedFiles=false, ts-proto will not emit google/protobuf/* files unless you explicitly add files to protoc like this:
protoc --plugin=./node_modules/.bin/protoc-gen-ts_proto my_message.proto google/protobuf/duration.proto
With --ts_proto_opt=fileSuffix=<SUFFIX>, ts-proto will emit generated files using the specified suffix. A helloworld.proto file with fileSuffix=.pb would be generated as helloworld.pb.ts. This is common behavior in other protoc plugins and provides a way to quickly glob all the generated files.
With --ts_proto_opt=importSuffix=<SUFFIX>, ts-proto will emit file imports using the specified suffix. An import of helloworld.ts with importSuffix=.js would generate import "helloworld.js". The default is to import without a file extension. Supported by TypeScript 4.7.x and up.
With --ts_proto_opt=enumsAsLiterals=true, the generated enum types will be enum-ish objects declared with as const.
With --ts_proto_opt=useExactTypes=false, the generated fromPartial and create methods will not use Exact types.
The default behavior is useExactTypes=true, which makes fromPartial and create use Exact type for its argument to make TypeScript reject any unknown properties.
With --ts_proto_opt=unknownFields=true, all unknown fields will be parsed and output as arrays of buffers.
With --ts_proto_opt=onlyTypes=true, only types will be emitted, and imports for long and protobufjs/minimal will be excluded.
This is the same as setting outputJsonMethods=false,outputEncodeMethods=false,outputClientImpl=false,nestJs=false
With --ts_proto_opt=usePrototypeForDefaults=true, the generated code will wrap new objects with Object.create.
This allows code to do hazzer checks to detect when default values have been applied, which due to proto3's behavior of not putting default values on the wire, is typically only useful for interacting with proto2 messages.
When enabled, default values are inherited from a prototype, and so code can use Object.keys(message).includes("someField") to detect if someField was actually decoded or not.
Note that, as indicated, this means Object.keys will not include set-by-default fields, so if you have code that iterates over messages keys in a generic fashion, it will have to also iterate over keys inherited from the prototype.
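A sketch of such a check (Foo and bytes are illustrative):

const decoded = Foo.decode(bytes);
Object.keys(decoded).includes("bar"); // false if `bar` was defaulted rather than decoded
decoded.bar; // still readable, inherited from the prototype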
With --ts_proto_opt=useJsonName=true, the json_name defined in .proto files will be used instead of message field names.
With --ts_proto_opt=useJsonWireFormat=true, the generated code will reflect the JSON representation of Protobuf messages.
Requires onlyTypes=true. Implies useDate=string and stringEnums=true. This option is to generate types that can be directly used with marshalling/unmarshalling Protobuf messages serialized as JSON.
You may also want to set useOptionals=all, as gRPC gateways are not required to send default values for scalar fields.
With --ts_proto_opt=useNumericEnumForJson=true, the JSON converter (toJSON) will encode enum values as int, rather than a string literal.
With --ts_proto_opt=initializeFieldsAsUndefined=false, all optional field initializers will be omitted from the generated base instances. This can be useful if you're using Next.js and getServerSideProps, see #492. The default is true to ideally get the best v8 performance by avoiding "changing shapes" as fields are incrementally set.
With --ts_proto_opt=disableProto2Optionals=true, all optional fields on proto2 files will not be set to be optional. Please note that this flag is primarily for preserving ts-proto's legacy handling of proto2 files, to avoid breaking changes, and as a result, it is not intended to be used moving forward.
With --ts_proto_opt=disableProto2DefaultValues=true, all fields in proto2 files that specify a default value will not actually use that default value. Please note that this flag is primarily for preserving ts-proto's legacy handling of proto2 files, to avoid breaking changes, and as a result, it is not intended to be used moving forward.
With --ts_proto_opt=Mgoogle/protobuf/empty.proto=./google3/protobuf/empty, ('M' means 'importMapping', similar to protoc-gen-go), the generated code import path for ./google/protobuf/empty.ts will reflect the overridden value:
Mfoo/bar.proto=@myorg/some-lib will map foo/bar.proto imports into import ... from '@myorg/some-lib'.
Mfoo/bar.proto=./some/local/lib will map foo/bar.proto imports into import ... from './some/local/lib'.
Mfoo/bar.proto=some-modules/some-lib will map foo/bar.proto imports into import ... from 'some-modules/some-lib'.
To pass multiple mappings, repeat the flag: --ts_proto_opt=M... --ts_proto_opt=M... (one ts_proto_opt per mapping).
With --ts_proto_opt=useMapType=true, the generated code for protobuf map<key_type, value_type> will become Map<key_type, value_type>, which uses the JavaScript Map type.
The default behavior is useMapType=false, which generates the code for protobuf map<key_type, value_type> as a key-value object like {[key: key_type]: value_type}.
With --ts_proto_opt=useReadonlyTypes=true, the generated types will be declared as immutable using typescript's readonly modifier.
With --ts_proto_opt=useSnakeTypeName=false, snake casing will be removed from the generated names of nested types.
Example Protobuf
message Box {
message Element {
message Image {
enum Alignment {
LEFT = 1;
CENTER = 2;
RIGHT = 3;
}
}
}
}
By default this option is enabled, which would generate a type of Box_Element_Image_Alignment. Disabling this option generates the type BoxElementImageAlignment instead.
With --ts_proto_opt=outputExtensions=true, the generated code will include proto2 extensions.
Extension encode/decode methods respect the outputEncodeMethods option, and if unknownFields=true, setExtension and getExtension methods will be created for extendable messages, also respecting outputEncodeMethods (setExtension = encode, getExtension = decode).
With --ts_proto_opt=outputIndex=true, index files will be generated based on the proto package namespaces.
This will disable exportCommonSymbols to avoid name collisions on the common symbols.
With --ts_proto_opt=emitDefaultValues=json-methods, the generated toJSON method will emit scalars like 0 and "" as json fields.
With --ts_proto_opt=comments=false, comments won't be copied from the proto files to the generated code.
With --ts_proto_opt=bigIntLiteral=false, the generated code will use BigInt("0") instead of 0n for BigInt literals. BigInt literals aren't supported by TypeScript when the "target" compiler option is set to something older than "ES2020".
With --ts_proto_opt=typePrefix=MyPrefix, the generated interfaces, enums, and factories will have a prefix of MyPrefix in their names.
With --ts_proto_opt=typeSuffix=MySuffix, the generated interfaces, enums, and factories will have a suffix of MySuffix in their names.
With --ts_proto_opt=useNullAsOptional=true, undefined values will be converted to null, and if you use the optional label in your .proto file, the field will have undefined type as well. For example:
message ProfileInfo {
int32 id = 1;
string bio = 2;
string phone = 3;
}
message Department {
int32 id = 1;
string name = 2;
}
message User {
int32 id = 1;
string username = 2;
/*
ProfileInfo will be optional in TypeScript; the type will be ProfileInfo | null | undefined.
This is needed in cases where you don't want to provide any value for the profile.
*/
optional ProfileInfo profile = 3;
/*
Department only accepts a Department type or null, so this means you have to pass it null if there is no value available.
*/
Department department = 4;
}
the generated interfaces will be:
export interface ProfileInfo {
id: number;
bio: string;
phone: string;
}
export interface Department {
id: number;
name: string;
}
export interface User {
id: number;
username: string;
profile?: ProfileInfo | null | undefined; // check this one
department: Department | null; // check this one
}
With --ts_proto_opt=noDefaultsForOptionals=true, undefined primitive values will not be defaulted as the protobuf spec otherwise requires. Additionally, unlike the standard behavior, when a field is set to its standard default value it will be encoded, allowing it to be sent over the wire and distinguished from undefined values. For example, if a message does not set a boolean value, ordinarily this would be defaulted to false, which is different from it being undefined. This option allows the library to act in a way compatible with the Wire implementation maintained and used by Square/Block. Note: this option should only be used in combination with other client/server code generated using Wire, or ts-proto with this option enabled.
With --ts_proto_opt=useDuration=string, the generated code will use the string type for google.protobuf.Duration instead of {seconds: number, nanos: number}.
We have a great way of working together with nestjs: ts-proto generates interfaces and decorators for your controller and client. For more information see the nestjs readme.
If you want to run ts-proto on every change of a proto file, you'll need to use a tool like chokidar-cli and use it as a script in package.json:
"proto:generate": "protoc --ts_proto_out=. ./<proto_path>/<proto_name>.proto --ts_proto_opt=esModuleInterop=true",
"proto:watch": "chokidar \"**/*.proto\" -c \"npm run proto:generate\""
ts-proto is RPC framework agnostic - how you transmit your data to and from
your data source is up to you. The generated client implementations all expect
an rpc parameter, whose type is defined like this:
interface Rpc {
request(service: string, method: string, data: Uint8Array): Promise<Uint8Array>;
}
If you're working with gRPC, a simple implementation could look like this:
import * as grpc from "@grpc/grpc-js";

// UnaryCallback mirrors @grpc/grpc-js's requestCallback shape
type UnaryCallback<T> = (err: grpc.ServiceError | null, result?: T) => void;

const conn = new grpc.Client(
  "localhost:8765",
  grpc.credentials.createInsecure()
);
type RpcImpl = (service: string, method: string, data: Uint8Array) => Promise<Uint8Array>;
const sendRequest: RpcImpl = (service, method, data) => {
// Conventionally in gRPC, the request path looks like
// "package.names.ServiceName/MethodName",
// we therefore construct such a string
const path = `/${service}/${method}`;
return new Promise((resolve, reject) => {
// makeUnaryRequest transmits the result (and error) with a callback
// transform this into a promise!
const resultCallback: UnaryCallback<any> = (err, res) => {
if (err) {
return reject(err);
}
resolve(res);
};
function passThrough(argument: any) {
return argument;
}
// Using passThrough as the deserialize functions
conn.makeUnaryRequest(path, d => Buffer.from(d), passThrough, data, resultCallback);
});
};
const rpc: Rpc = { request: sendRequest };
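The rpc object then plugs into any generated client, e.g. the PingService shown earlier (the request shape is illustrative):

const client = new PingServiceClientImpl(rpc);
const response = await client.ping({ message: "hello" });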
Kudos to our sponsors:
If you need ts-proto customizations or priority support for your company, you can ping me via email.
This section describes how to contribute directly to ts-proto, i.e. it's not required for running ts-proto in protoc or using the generated TypeScript.
Requirements
yarn — npm install -g yarn
Setup
The commands below assume you have Docker installed. If you are using OS X, install coreutils, brew install coreutils.
yarn install to install the dependencies.
yarn build:test to generate the test files.
This runs the following commands:
proto2ts — Runs ts-proto on the integration/**/*.proto files to generate .ts files.
proto2pbjs — Generates a reference implementation using pbjs for testing compatibility.
yarn test
Workflow
- Find an existing integration/* test that is close enough to your use case, e.g. one with a parameters.txt that matches the ts_proto_opt params necessary to reproduce your use case
- If creating a new test, add integration/your-new-test/parameters.txt with the necessary ts_proto_opt params, and a minimal integration/your-new-test/your-new-test.proto schema to reproduce your use case
- After any changes to your-new-test.proto, or an existing integration/*.proto file, run yarn proto2bin
  - Or leave yarn watch running, and it should "just do the right thing"
- Add/update an integration/your-new-test/some-test.ts unit test, even if it's as trivial as just making sure the generated code compiles
- To modify the ts-proto code generation logic: after any changes to the src/*.ts files, run yarn proto2ts to re-codegen all integration tests
  - Or yarn proto2ts your-new-test to re-codegen a specific test
  - Or leave yarn watch running, and it should "just do the right thing"
- Run yarn test to verify your changes pass all existing tests
- Run yarn format to format the typescript files
- git add all of the *.proto, *.bin, and *.ts files in integration/your-new-test
Testing in your projects
You can test your local ts-proto changes in your own projects by running yarn add ts-proto@./path/to/ts-proto, as long as you run yarn build manually.
Dockerized Protoc
The repository includes a dockerized version of protoc, which is configured in docker-compose.yml.
It can be useful in case you want to manually invoke the plugin with a known version of protoc.
Usage:
# Include the protoc alias in your shell.
. aliases.sh
# Run protoc as usual. The ts-proto directory is available in /ts-proto.
protoc --plugin=/ts-proto/protoc-gen-ts_proto --ts_proto_out=./output -I=./protos ./protoc/*.proto
# Or use the ts-protoc alias which specifies the plugin path for you.
ts-protoc --ts_proto_out=./output -I=./protos ./protoc/*.proto
Notes:
- protoc requires your .proto files to live under the current working directory (../ is not allowed). The dockerized setup mounts the current directory at /ts-proto/host, and sets it as its working directory.
- Once aliases.sh is sourced, you can use the protoc command in any folder.

Todo:
- Make oneof=unions-value the default behavior in 2.0
- Change the forceLong default in 2.0; it should default to forceLong=long
- Make esModuleInterop=true the default in 2.0

By default, ts-proto models oneof fields "flatly" in the message, e.g. a message like:
message Foo {
oneof either_field { string field_a = 1; string field_b = 2; }
}
Will generate a Foo type with two fields: field_a: string | undefined; and field_b: string | undefined.
With this output, you'll have to check both if object.field_a and if object.field_b, and if you set one, you'll have to remember to unset the other.
Instead, we recommend using the oneof=unions-value option, which will change the output to be an Algebraic Data Type/ADT like:
interface YourMessage {
eitherField?: { $case: "field_a"; value: string } | { $case: "field_b"; value: string };
}
This automatically enforces that only one of field_a or field_b is "set" at a time, because the values are stored in a single eitherField field that can only hold one value at a time.
(Note that eitherField is optional b/c oneof in Protobuf means "at most one field" is set, and does not mean one of the fields must be set.)
In ts-proto's currently-unscheduled 2.x release, oneof=unions-value will become the default behavior.
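A sketch of consuming the union shape, narrowing on $case (using the YourMessage above):

function describe(msg: YourMessage): string {
  const field = msg.eitherField;
  switch (field?.$case) {
    case "field_a":
      return `a: ${field.value}`;
    case "field_b":
      return `b: ${field.value}`;
    default:
      return "unset";
  }
}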
There is also a oneof=unions option, which generates a union where the field names are included in each option:
interface YourMessage {
eitherField?: { $case: "field_a"; field_a: string } | { $case: "field_b"; field_b: string };
}
This is no longer recommended as it can be difficult to write code and types to handle multiple oneof options:
The following helper types may make it easier to work with the types generated from oneof=unions, though they are generally not needed if you use oneof=unions-value:
/** Extracts all the case names from a oneOf field. */
type OneOfCases<T> = T extends { $case: infer U extends string } ? U : never;
/** Extracts a union of all the value types from a oneOf field */
type OneOfValues<T> = T extends { $case: infer U extends string; [key: string]: unknown } ? T[U] : never;
/** Extracts the specific type of a oneOf case based on its field name */
type OneOfCase<T, K extends OneOfCases<T>> = T extends {
$case: K;
[key: string]: unknown;
}
? T
: never;
/** Extracts the specific type of a value type from a oneOf field */
type OneOfValue<T, K extends OneOfCases<T>> = T extends {
$case: infer U extends K;
[key: string]: unknown;
}
? T[U]
: never;
For comparison, the equivalents for oneof=unions-value:
/** Extracts all the case names from a oneOf field. */
type OneOfCases<T> = T['$case'];
/** Extracts a union of all the value types from a oneOf field */
type OneOfValues<T> = T['value'];
/** Extracts the specific type of a oneOf case based on its field name */
type OneOfCase<T, K extends OneOfCases<T>> = T extends {
$case: K;
[key: string]: unknown;
}
? T
: never;
/** Extracts the specific type of a value type from a oneOf field */
type OneOfValue<T, K extends OneOfCases<T>> = T extends {
$case: infer U extends K;
value: unknown;
}
? T['value']
: never;
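Applying the helpers to a hand-written equivalent of the generated oneof=unions-value shape:

type Either = { $case: "field_a"; value: string } | { $case: "field_b"; value: number };

type Cases = OneOfCases<Either>; // "field_a" | "field_b"
type Values = OneOfValues<Either>; // string | number
type AValue = OneOfValue<Either, "field_a">; // string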
In core Protobuf (and so also ts-proto), values that are unset or equal to the default value are not sent over the wire.
For example, the default value of a message is undefined. Primitive types take their natural default value, e.g. string is '', number is 0, etc.
Protobuf chose/enforces this behavior because it enables forward compatibility, as primitive fields will always have a value, even when omitted by outdated agents.
This is good, but it also means default and unset values cannot be distinguished in ts-proto fields; it's just fundamentally how Protobuf works.
If you need primitive fields where you can detect set/unset, see Wrapper Types.
Encode / Decode
ts-proto follows the Protobuf rules, and always returns default values for unset fields when decoding, while omitting them from the output when serializing to the binary format.
syntax = "proto3";
message Foo {
string bar = 1;
}
protobufBytes; // assume this is an empty Foo object, in protobuf binary format
Foo.decode(protobufBytes); // => { bar: '' }
Foo.encode({ bar: "" }); // => { }, writes an empty Foo object, in protobuf binary format
fromJSON / toJSON
Reading JSON will also initialize the default values. Since senders may either omit unset fields, or set them to the default value, use fromJSON to normalize the input.
Foo.fromJSON({}); // => { bar: '' }
Foo.fromJSON({ bar: "" }); // => { bar: '' }
Foo.fromJSON({ bar: "baz" }); // => { bar: 'baz' }
When writing JSON, ts-proto normalizes messages by omitting unset fields and fields set to their default values.
Foo.toJSON({}); // => { }
Foo.toJSON({ bar: undefined }); // => { }
Foo.toJSON({ bar: "" }); // => { } - note: omitting the default value, as expected
Foo.toJSON({ bar: "baz" }); // => { bar: 'baz' }
Protobuf comes with several predefined message definitions, called "Well-Known Types". Their interpretation is defined by the Protobuf specification, and libraries are expected to convert these messages to corresponding native types in the target language.
ts-proto currently automatically converts these messages to their corresponding native types.
| Protobuf well-known type | TypeScript type |
|---|---|
| google.protobuf.BoolValue | boolean |
| google.protobuf.BytesValue | Uint8Array |
| google.protobuf.DoubleValue | number |
| google.protobuf.FieldMask | string[] |
| google.protobuf.FloatValue | number |
| google.protobuf.Int32Value | number |
| google.protobuf.Int64Value | number |
| google.protobuf.ListValue | any[] |
| google.protobuf.UInt32Value | number |
| google.protobuf.UInt64Value | number |
| google.protobuf.StringValue | string |
| google.protobuf.Value | any (i.e. number \| string \| boolean \| null \| array \| object) |
| google.protobuf.Struct | { [key: string]: any } |

Wrapper Types are messages containing a single primitive field, and can be imported in .proto files with import "google/protobuf/wrappers.proto".
Since these are messages, their default value is undefined, allowing you to distinguish unset primitives from their default values, when using Wrapper Types.
ts-proto generates these fields as <primitive> | undefined.
For example:
// Protobuf
syntax = "proto3";
import "google/protobuf/wrappers.proto";
message ExampleMessage {
google.protobuf.StringValue name = 1;
}
// TypeScript
interface ExampleMessage {
name: string | undefined;
}
When encoding a message the primitive value is converted back to its corresponding wrapper type:
ExampleMessage.encode({ name: "foo" }); // => { name: { value: 'foo' } }, in binary
When calling toJSON, the value is not converted, because wrapper types are idiomatic in JSON.
ExampleMessage.toJSON({ name: "foo" }); // => { name: 'foo' }
Protobuf's language and types are not sufficient to represent all possible JSON values, since JSON may contain values whose type is unknown in advance. For this reason, Protobuf offers several additional types to represent arbitrary JSON values.
These are called Struct Types, and can be imported in .proto files with import "google/protobuf/struct.proto".
| Protobuf struct type | TypeScript type |
|---|---|
| google.protobuf.Value | any (i.e. number \| string \| boolean \| null \| array \| object) |
| google.protobuf.ListValue | any[] |
| google.protobuf.Struct | { [key: string]: any } |
ts-proto automatically converts back and forth between these Struct Types and their corresponding JSON types.
Example:
// Protobuf
syntax = "proto3";
import "google/protobuf/struct.proto";
message ExampleMessage {
google.protobuf.Value anything = 1;
}
// TypeScript
interface ExampleMessage {
anything: any | undefined;
}
Encoding a JSON value embedded in a message converts it to a Struct Type:
ExampleMessage.encode({ anything: { name: "hello" } });
/* Outputs the following structure, encoded in protobuf binary format:
{
  anything: Value {
    structValue = Struct {
      fields = [
        MapEntry {
          key = "name",
          value = Value {
            stringValue = "hello"
          }
        }
      ]
    }
  }
} */
ExampleMessage.encode({ anything: true });
/* Outputs the following structure encoded in protobuf binary format:
{
anything: Value {
boolValue = true
}
}*/
The representation of google.protobuf.Timestamp is configurable by the useDate flag.
The useJsonTimestamp flag controls precision when useDate is false.
| Protobuf well-known type | Default/useDate=true | useDate=false | useDate=string | useDate=string-nano | useDate=temporal |
|---|---|---|---|---|---|
| google.protobuf.Timestamp | Date | { seconds: number, nanos: number } | string | string | Temporal.Instant |
When using useDate=false and useJsonTimestamp=raw, timestamps are represented as { seconds: number, nanos: number }, but with nanosecond precision.
When using useDate=string-nano, timestamps are represented as ISO strings with nanosecond precision, e.g. 1970-01-01T14:27:59.987654321Z, and rely on the nano-date library for conversion. You'll need to install it in your project.
When using useDate=temporal, timestamps are represented as Temporal.Instant objects with nanosecond precision. Since the Temporal proposal isn't yet fully implemented in all browsers, this relies on a polyfilled environment, such as through temporal-polyfill. You'll need to install it in your project. Enabling this feature may also require ensuring you have BigInt support, either directly or through a polyfill, depending on your choice of Temporal library.
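For example, with the default useDate=true (a hypothetical Event message with google.protobuf.Timestamp created_at = 1):

const event = Event.fromPartial({ createdAt: new Date("2020-01-01T00:00:00Z") });
Event.toJSON(event); // => { createdAt: "2020-01-01T00:00:00.000Z" }, the proto3 canonical JSON ISO string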
Numbers are by default assumed to be plain JavaScript numbers.
This is fine for Protobuf types like int32 and float, but 64-bit types like int64 can't be 100% represented by JavaScript's number type, because int64 can have larger/smaller values than number.
ts-proto's default configuration (which is forceLong=number) is to still use number for 64-bit fields, and then throw an error if a value (at runtime) is larger than Number.MAX_SAFE_INTEGER.
If you expect to use 64-bit / higher-than-MAX_SAFE_INTEGER values, then you can use the ts-proto forceLong option, which uses the long npm package to support the entire range of 64-bit values.
The protobuf number types map to JavaScript types based on the forceLong config option:
| Protobuf number types | Default/forceLong=number | forceLong=long | forceLong=string |
|---|---|---|---|
| double | number | number | number |
| float | number | number | number |
| int32 | number | number | number |
| int64 | number* | Long | string |
| uint32 | number | number | number |
| uint64 | number* | Unsigned Long | string |
| sint32 | number | number | number |
| sint64 | number* | Long | string |
| fixed32 | number | number | number |
| fixed64 | number* | Unsigned Long | string |
| sfixed32 | number | number | number |
| sfixed64 | number* | Long | string |
Where (*) indicates they might throw an error at runtime.
Current status of optional values:
- Required primitives: use as-is, i.e. string name = 1.
- Optional primitives: use wrapper types, i.e. StringValue name = 1.
- Optional messages: use as-is, i.e. SubMessage message = 1.

protobufjs is a popular library for working with Protocol Buffers in JavaScript and TypeScript. It provides a runtime library for parsing and serializing protobuf messages, as well as a code generator for generating TypeScript definitions. Unlike ts-proto, protobufjs does not generate gRPC client code.
grpc-tools is a package that provides the protoc compiler and gRPC plugins for generating gRPC client and server code. It supports multiple languages, including JavaScript and TypeScript. However, grpc-tools does not generate TypeScript types directly from .proto files, which is a key feature of ts-proto.
grpc-web is a JavaScript implementation of gRPC for browser clients. It allows you to use gRPC in web applications. While grpc-web focuses on enabling gRPC in the browser, ts-proto focuses on generating TypeScript types and client code for Node.js and browser environments.