graphql-transformer-core
A framework to transform from GraphQL SDL to AWS CloudFormation.
Object types that are annotated with @model are top-level entities in the generated API. Objects annotated with @model are stored in DynamoDB and are capable of being protected via @auth, related to other objects via @connection, and streamed into Elasticsearch via @searchable.
directive @model(queries: ModelQueryMap, mutations: ModelMutationMap) on OBJECT
input ModelMutationMap {
create: String
update: String
delete: String
}
input ModelQueryMap {
get: String
list: String
}
Define a GraphQL object type and annotate it with the @model directive to store objects of that type in DynamoDB and automatically configure CRUDL (create, read, update, delete, list) queries and mutations.
type Post @model {
id: ID! # id: ID! is a required attribute.
title: String!
tags: [String!]!
}
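For the Post type above, the transform generates query and mutation fields along these lines. This is a simplified sketch assuming the default naming conventions; the real generated schema also includes input types and pagination arguments:

```graphql
# Sketch of generated operations (default names; not exhaustive)
type Query {
  getPost(id: ID!): Post
  listPosts: [Post]
}

type Mutation {
  createPost(input: CreatePostInput!): Post
  updatePost(input: UpdatePostInput!): Post
  deletePost(input: DeletePostInput!): Post
}
```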
You may also override the names of any generated queries and mutations as well as remove operations entirely.
type Post @model(queries: { get: "post" }, mutations: null) {
id: ID!
title: String!
tags: [String!]!
}
This would create and configure a single query field, post(id: ID!): Post, and no mutation fields.
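With that configuration, a client could fetch a post via the renamed query. A sketch, assuming the query field is wired as shown above and "a-post-id" is a placeholder id:

```graphql
query GetPost {
  post(id: "a-post-id") {
    id
    title
    tags
  }
}
```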
Object types that are annotated with @auth are protected by one of the supported authorization strategies. Types that are annotated with @auth must also be annotated with @model.
# When applied to a type, augments the application with
# owner and group based authorization rules.
directive @auth(rules: [AuthRule!]!) on OBJECT
input AuthRule {
allow: AuthStrategy!
ownerField: String = "owner"
identityField: String = "username"
groupsField: String
groups: [String]
queries: [ModelQuery]
mutations: [ModelMutation]
}
enum AuthStrategy {
owner
groups
}
enum ModelQuery {
get
list
}
enum ModelMutation {
create
update
delete
}
# The simplest case
type Post @model @auth(rules: [{ allow: owner }]) {
id: ID!
title: String!
}
# The long form way
type Post @model @auth(rules: [{ allow: owner, ownerField: "owner", mutations: [create, update, delete], queries: [get, list] }]) {
id: ID!
title: String!
owner: String
}
Owner authorization specifies that a user (or set of users) can access an object. To do so, each object has an ownerField (by default "owner") that stores ownership information and is verified in various ways during resolver execution.
You may use the queries and mutations arguments to specify which operations are augmented:
get: If the record's owner is not the same as the logged-in user (via $ctx.identity.username), throw $util.unauthorized().
list: Filter $ctx.result.items for owned items.
create: Inject the logged-in user's $ctx.identity.username as the ownerField automatically.
update: Add a conditional update that checks that the stored ownerField matches $ctx.identity.username.
delete: Add a conditional delete that checks that the stored ownerField matches $ctx.identity.username.
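For example, with the owner rule on Post above, a create mutation need not supply the owner; the resolver injects the caller's username into the ownerField. A sketch assuming the default ownerField name:

```graphql
mutation CreatePost {
  createPost(input: { title: "My first post" }) {
    id
    title
    owner # populated from $ctx.identity.username by the resolver
  }
}
```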
IN PROGRESS
# TODO: (WORK IN PROGRESS) Does not yet support multi-owner
type Post @model @auth(rules: [{ allow: owner, ownerField: "owners", mutations: [create, update, delete], queries: [get, list] }]) {
id: ID!
title: String!
owners: [String]
}
Static Group Auth
# Static group auth
type Post @model @auth(rules: [{ allow: groups, groups: ["Admin"] }]) {
id: ID!
title: String
}
If the user credential (as specified by the resolver's $ctx.identity) is not enrolled in the Admin group, throw an unauthorized error via $util.unauthorized().
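A static group rule may also list multiple groups, in which case membership in any listed group grants access. A sketch:

```graphql
# Any member of "Admin" or "Editor" may access Post objects.
type Post @model @auth(rules: [{ allow: groups, groups: ["Admin", "Editor"] }]) {
  id: ID!
  title: String
}
```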
Dynamic Group Auth
# Dynamic group auth with multiple groups
type Post @model @auth(rules: [{ allow: groups, groupsField: "groups" }]) {
id: ID!
title: String
groups: [String]
}
# Dynamic group auth with a single group
type Post @model @auth(rules: [{ allow: groups, groupsField: "group" }]) {
id: ID!
title: String
group: String
}
With dynamic group authorization, each record contains an attribute specifying which groups should be able to access it. Use the groupsField argument to specify which attribute in the underlying data store holds this group information. To specify that a single group should have access, use a field of type String. To specify that multiple groups should have access, use a field of type [String].
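Under the multi-group variant above, the authorized groups are stored on the record itself when it is created. A sketch, with placeholder group names:

```graphql
mutation CreatePost {
  createPost(input: { title: "Team post", groups: ["Editors", "Admin"] }) {
    id
    title
    groups
  }
}
```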
The @connection directive allows you to specify relationships between @model object types. Currently this supports one-to-one, one-to-many, and many-to-one relationships. An error will be thrown when trying to configure a many-to-many relationship.
directive @connection(name: String) on FIELD_DEFINITION
Relationships are specified by annotating fields on an @model object type with the @connection directive.
In the simplest case, you may define a one-to-one connection:
type Project @model {
id: ID!
name: String
team: Team @connection
}
type Team @model {
id: ID!
name: String!
}
Once transformed you would then be able to create projects with a team via:
mutation CreateProject {
createProject(input: { name: "New Project", projectTeamId: "a-team-id" }) {
id
name
team {
id
name
}
}
}
Note: The Project.team resolver will be preconfigured to work with the defined connection.
Likewise you may make a simple one-to-many connection:
type Post @model {
id: ID!
title: String!
comments: [Comment] @connection
}
type Comment @model {
id: ID!
content: String!
}
Once transformed, you would create comments on a post via:
mutation CreateCommentOnPost {
createComment(input: { content: "A comment", postCommentsId: "a-post-id" }) {
id
content
}
}
Note: The "postCommentsId" field on the input may seem like a strange name, and it is. In the one-to-many case without a provided "name" argument there is only partial information to work with, resulting in the strange name. To fix this, provide a value for the @connection's name argument and complete the bi-directional relationship by adding a corresponding @connection field to the Comment type.
The name argument specifies a name for the connection and is used to create bi-directional relationships that reference the same underlying foreign key.
For example, if you wanted your Post.comments
and Comment.post
fields to refer to opposite sides of the same relationship
you would provide a name.
type Post @model {
id: ID!
title: String!
comments: [Comment] @connection(name: "PostComments")
}
type Comment @model {
id: ID!
content: String!
post: Post @connection(name: "PostComments")
}
Once transformed, you would create comments on a post via:
mutation CreateCommentOnPost {
createComment(input: { content: "A comment", commentPostId: "a-post-id" }) {
id
content
post {
id
title
comments {
id
# and so on...
}
}
}
}
In order to keep connection queries fast and efficient, the graphql transform manages GSIs on the generated tables on your behalf. We bake in best practices to keep your queries efficient but this also comes with additional cost.
The @searchable directive handles streaming the data of an @model object type to Elasticsearch and configures search resolvers that search that information.
# Streams data from dynamodb into elasticsearch and exposes search capabilities.
directive @searchable(queries: SearchableQueryMap) on OBJECT
input SearchableQueryMap {
search: String
}
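Annotating a model with @searchable makes a search query available. A sketch, assuming the default generated query name (searchPosts for a Post type) and a simplified filter shape:

```graphql
type Post @model @searchable {
  id: ID!
  title: String!
}

# Assumed generated search query (sketch; real filter inputs are richer)
query SearchPosts {
  searchPosts(filter: { title: { match: "graphql" } }) {
    items {
      id
      title
    }
  }
}
```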
What is the Amplify GraphQL Transform
The Amplify GraphQL Transform is a set of libraries committed to simplifying the process of developing, deploying, and maintaining APIs on AWS. With it, you define your API using the GraphQL Schema Definition Language (SDL) and then pass it to this library, where it is expanded and transformed into a fully descriptive CloudFormation template that implements your API's data model.
For example, you might define the data model for an app like this:
type Blog @model @searchable {
id: ID!
name: String!
posts: [Post] @connection(name: "BlogPosts")
}
type Post @model @searchable {
id: ID!
title: String!
tags: [String]
blog: Blog @connection(name: "BlogPosts")
comments: [Comment] @connection
createdAt: String
updatedAt: String
}
type Comment @model {
id: ID!
content: String!
}
And then pass the schema to an instance of the GraphQLTransform
class with the DynamoDB, Elasticsearch, and Connection transformers enabled:
import GraphQLTransform from 'graphql-transformer-core';
import AppSyncDynamoDBTransformer from 'graphql-dynamodb-transformer';
import AppSyncElasticsearchTransformer from 'graphql-elasticsearch-transformer';
import AppSyncConnectionTransformer from 'graphql-connection-transformer';
import AppSyncAuthTransformer from 'graphql-auth-transformer';
const transformer = new GraphQLTransform({
transformers: [
new AppSyncDynamoDBTransformer(),
new AppSyncElasticsearchTransformer(),
new AppSyncAuthTransformer(),
new AppSyncConnectionTransformer(),
],
});
// `schema`, `createStack`, `name`, and `region` are assumed to be provided
// by the surrounding deployment code.
const cfdoc = transformer.transform(schema.readSync());
const out = await createStack(cfdoc, name, region);
console.log('Application creation successfully started. It may take a few minutes to finish.');
The GraphQLTransform class implements a single transform() function that, when invoked, parses the document, walks the AST, and, when a directive such as @model is found, invokes any relevant transformers. In this case the transformers were defined for you, but the code is structured to make writing custom transformers as simple as possible.
The output of the above code is a full CloudFormation document that defines DynamoDB tables, an Elasticsearch cluster, a Lambda function to stream from DynamoDB to Elasticsearch, an AppSync API, AppSync data sources, CRUD resolvers (create, update, delete, get, list, search), resolvers that implement connections between types stored in different DynamoDB tables, and a number of minimally scoped IAM roles.
The code is contained in a mono-repo that includes a number of packages that are related to the transform and a number that are not. The related packages are broken up as follows:
graphql-transform
This package contains the core of the library and acts as the entry point to the transform. The core class GraphQLTransform takes a list of transformers as config and handles the logic that parses the input SDL, walks the AST, and routes directives to transformers.
graphql-dynamodb-transformer
This package implements a number of directives that deal with DynamoDB. Out of the box, this implements the @model and @connection directives.
graphql-elasticsearch-transformer
This package implements any directives that deal with Elasticsearch. Out of the box, this implements the @searchable directive.
graphql-auth-transformer
This package implements any directives related to authN or authZ workflows. Out of the box, it configures an Amazon Cognito UserPool and implements the @auth directive.
graphql-transformer-e2e-tests
This package implements end-to-end tests for the transform libraries. It builds an API with the transform, deploys it via CloudFormation, and hits the AppSync data plane to test all generated code paths.
graphql-mapping-template
This package provides a lightweight wrapper around the AppSync Resolver VTL and is used by transformer libraries as a convenience.
Install lerna and yarn as global npm packages:
npm install -g lerna
npm install -g yarn
Install the dependencies
lerna bootstrap
And build
lerna run build
Tests are written with jest and can be run for all packages with
lerna run test
Alternatively, there are debug configurations defined in .vscode/launch.json; you can use Visual Studio Code to add breakpoints and debug the code.
TODO
This project is licensed under the MIT License - see the LICENSE.md file for details