Migrating From Serverless Framework
A guide to migrating your Serverless Framework app to SST.
This document is a work in progress. If you have experience migrating your Serverless Framework app to SST, please consider contributing.
Incrementally Adopting SST
SST has been designed to be incrementally adopted. This means that you can continue using your existing Serverless Framework app while slowly moving over resources to SST. By starting small and incrementally adding more resources, you can avoid a wholesale rewrite.
Let's assume you have an existing Serverless Framework app. To get started, we'll first set up a new SST project in the same directory.
A hybrid Serverless Framework and SST app
To make it an easier transition, we'll start by merging your existing Serverless Framework app with a newly created SST app.
Your existing app can either have one service or be a monorepo with multiple services.
- In a temporary location, run npm init sst.
- Copy the sst.json file, and the src/ and stacks/ directories.
- Copy the scripts, dependencies, and devDependencies from the package.json file in the new SST project root.
- Copy the .gitignore file and append it to your existing .gitignore file.
- If you are using TypeScript, you can also copy the tsconfig.json.
- Run npm install.
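For reference, the scripts you copy over from the SST template typically look something like the following; the exact names can differ between template versions, so treat this as a sketch:

```json
{
  "scripts": {
    "dev": "sst dev",
    "build": "sst build",
    "deploy": "sst deploy",
    "remove": "sst remove"
  }
}
```

With these in place you can run npm run dev from the project root to start SST locally.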
Now your directory structure should look something like this. The src/ directory is where all the Lambda functions in your Serverless Framework app are placed.
serverless-app
├── node_modules
├── .gitignore
├── package.json
├── serverless.yml
├── sst.json
├── stacks
| ├── MyStack.js
| └── index.js
└── src
├── lambda1.js
└── lambda2.js
And from your project root you can run both the Serverless Framework and SST commands.
This also allows you to easily create functions in your new SST app by pointing to the handlers in your existing app.
Say you have a Lambda function defined in your serverless.yml.
functions:
hello:
handler: src/lambda1.main
You can now create a function in your SST app using the same source.
new sst.Function(stack, "MySnsLambda", {
handler: "src/lambda1.main",
});
Monorepo with multiple Serverless Framework services
If you have multiple Serverless Framework services in the same repo, you can still follow the steps above to create a single SST app. This works because you can define multiple stacks in the same SST app, whereas each Serverless Framework service can only contain a single stack.
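For example, stacks/index.js can register one stack per service. The stack names below are illustrative, and the exact registration API depends on your SST version, so treat this as a sketch:

```js
// stacks/index.js
import { ServiceAStack } from "./ServiceAStack";
import { ServiceBStack } from "./ServiceBStack";

export default function main(app) {
  // Each Serverless Framework service maps to its own stack in the one SST app
  app.stack(ServiceAStack);
  app.stack(ServiceBStack);
}
```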
After the SST app is created, your directory structure should look something like this.
serverless-app
├── node_modules
├── .gitignore
├── package.json
├── sst.json
├── stacks
| ├── MyStack.js
| └── index.js
└── services
├── serviceA
| ├── serverless.yml
| ├── lambda1.js
| └── lambda2.js
└── serviceB
├── serverless.yml
├── lambda3.js
└── lambda4.js
Here the services/ directory is where the Lambda functions for each of your Serverless Framework services are placed.
Add new services to SST
Next, if you need to add a new service or resource to your Serverless Framework app, you can instead do it directly in SST.
For example, say you want to add a new SQS queue resource.
- Start by creating a new stack in the stacks/ directory. Something like, stacks/MyNewQueueService.js.
- Add the new stack to the list in stacks/index.js.
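As a sketch, the new stack might look something like the following, using the functional stack style shown later in this guide; the file name, construct id, and handler path are all illustrative:

```js
// stacks/MyNewQueueService.js
import { Queue } from "sst/constructs";

export function MyNewQueueService({ stack }) {
  // A new SQS queue with a Lambda consumer, managed entirely by SST
  new Queue(stack, "MyQueue", {
    consumer: "src/consumer.main",
  });
}
```

Then register MyNewQueueService in stacks/index.js next to your existing stacks.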
Reference stack outputs
Now that you have two separate apps side-by-side, you might find yourself needing to reference stack outputs between each other.
Reference a Serverless Framework stack output in SST
To reference a Serverless Framework stack output in SST, you can use the CDK's Fn.importValue function.
For example:
// This imports an S3 bucket ARN and sets it as an environment variable for
// all the Lambda functions in the new API.
import { Fn } from "aws-cdk-lib";

new sst.Api(stack, "MyApi", {
  defaults: {
    function: {
      environment: {
        myKey: Fn.importValue("exported_key_in_serverless_framework"),
      },
    },
  },
  routes: {
    "GET /notes": "src/list.main",
    "POST /notes": "src/create.main",
    "GET /notes/{id}": "src/get.main",
    "PUT /notes/{id}": "src/update.main",
    "DELETE /notes/{id}": "src/delete.main",
  },
});
Reference SST stack outputs in Serverless Framework
You might also want to reference a newly created resource in SST in Serverless Framework.
// Export in an SST stack
stack.addOutputs({
  BucketArn: {
    value: bucket.bucketArn,
    exportName: "MyBucketArn",
  },
});
# Import in serverless.yml, for example as an environment variable
provider:
  environment:
    MY_BUCKET_ARN: !ImportValue MyBucketArn
Referencing SST stack outputs in other SST stacks
And finally, to reference stack outputs across stacks in your SST app.
import { StackContext, Bucket } from "sst/constructs";
export function StackA({ stack }: StackContext) {
const bucket = new Bucket(stack, "MyBucket");
return { bucket };
}
import { StackContext, use } from "sst/constructs";
import { StackA } from "./StackA";
export function StackB({ stack }: StackContext) {
// stackA's return value is passed to stackB
const { bucket } = use(StackA);
// SST will implicitly set the exports in stackA
// and imports in stackB
bucket.bucketArn;
}
Reference Serverless Framework resources
The next step would be to use the resources that are created in your Serverless Framework app. You can reference them directly in your SST app, so you don't have to recreate them.
For example, if you've already created an SNS topic in your Serverless Framework app, and you want to add a new function to subscribe to it:
import { Topic } from "aws-cdk-lib/aws-sns";
// Lookup the existing SNS topic
const snsTopic = Topic.fromTopicArn(
stack,
"ImportTopic",
"arn:aws:sns:us-east-2:444455556666:MyTopic"
);
// Add 2 new subscribers
new sst.Topic(stack, "MyTopic", {
snsTopic,
subscribers: {
subscriber1: "src/subscriber1.main",
subscriber2: "src/subscriber2.main",
},
});
Migrate existing services to SST
There are a couple of strategies if you want to migrate your Serverless Framework resources to your SST app.
Proxying
This strategy applies to API endpoints and lets you migrate them to SST incrementally.
Note: Support for this strategy hasn't been implemented in SST yet.
Suppose you have a couple of routes in your serverless.yml.
functions:
usersList:
handler: src/usersList.main
events:
- httpApi:
method: GET
path: /users
usersGet:
handler: src/usersGet.main
events:
- httpApi:
method: GET
path: /users/{userId}
And say you are ready to migrate the /users endpoint but don't want to touch the other endpoints yet.
You can add the route you want to migrate, and set a catch-all route that proxies the remaining requests to the old API.
const api = new sst.Api(stack, "Api", {
routes: {
"GET /users": "src/usersList.main",
// "$default" : proxy to old api,
},
});
Now you can use the new API endpoint in your frontend application, and remove the old route from the Serverless Framework app.
Resource swapping
This is suitable for migrating resources that don't have persistent data. So, SNS topics, SQS queues, and the like.
Imagine you have an existing SNS topic named MyTopic.
- Create a new topic in SST called MyTopic.sst and add a subscriber with the same function code.
- Now in your app, start publishing to MyTopic.sst instead of MyTopic.
- Remove the old MyTopic resource from the Serverless Framework app.
Optionally, you can now create another new topic in SST called MyTopic and follow the steps above to remove the temporary MyTopic.sst topic.
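A sketch of the first step, using the Topic construct; the construct id and handler path are illustrative:

```js
import { Topic } from "sst/constructs";

// The temporary topic, subscribed to the same handler code as the old MyTopic
new Topic(stack, "MyTopicSst", {
  subscribers: {
    subscriber: "src/subscriber.main",
  },
});
```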
Migrate only the functions
Now for resources with persistent data, like DynamoDB tables and S3 buckets, it won't be possible to remove and recreate them. For these cases you have two choices:
- Use them as-is by referencing them
- Or, migrate them over
We talk about this in detail over on our doc on Importing resources.
Here's an example of referencing a resource for DynamoDB streams. Assume you have a DynamoDB table that is named based on the stage it's deployed to.
resources:
Resources:
MyTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: ${self:custom.stage}-MyTable
AttributeDefinitions:
- AttributeName: userId
AttributeType: S
- AttributeName: noteId
AttributeType: S
KeySchema:
- AttributeName: userId
KeyType: HASH
- AttributeName: noteId
KeyType: RANGE
BillingMode: 'PAY_PER_REQUEST'
StreamSpecification:
StreamViewType: NEW_IMAGE
Now in SST, you can reference the table and create an SST function to subscribe to its streams.
// Import the existing table by name
const table = dynamodb.Table.fromTableName(
  stack,
  "MyTable",
  `${stack.stage}-MyTable`
);
// Create a Lambda function
const processor = new sst.Function(stack, "Processor", "processor.main");
// Subscribe function to the streams
processor.addEventSource(
new DynamoEventSource(table, {
startingPosition: lambda.StartingPosition.TRIM_HORIZON,
})
);
If you want to completely migrate over a resource, it is a manual process but it'll give you full control. You can follow these steps.
Workflow
A lot of the commands that you are used to using in Serverless Framework translate well to SST.
Serverless Framework | SST |
---|---|
serverless invoke local | sst dev |
serverless package | sst build |
serverless deploy | sst deploy |
serverless remove | sst remove |
SST also sets the IS_LOCAL environment variable in your Lambda functions when they are invoked locally.
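For example, you could use it to skip real side effects during local development. A minimal sketch; the handler and its behavior are illustrative:

```javascript
// IS_LOCAL is set by SST when the function is invoked locally
function isLocal() {
  return !!process.env.IS_LOCAL;
}

// Example handler that skips real side effects (emails, analytics, etc.)
// when running locally
function handler() {
  const body = isLocal() ? "local: side effects skipped" : "processed";
  return { statusCode: 200, body };
}
```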
Invoking locally
With the Serverless Framework, you need to run serverless invoke local -f function_name to invoke a function locally.
With SST, you can instead use Postman, Hoppscotch, curl, or any other API client. Note that in this case you are actually sending a request to API Gateway, which then invokes your locally running Lambda function.
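For example, once sst dev prints your API endpoint, you can hit a route directly; the URL below is a placeholder:

```
curl https://abc123.execute-api.us-east-1.amazonaws.com/users
```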
CI/CD
If you are using GitHub Actions, Circle CI, etc., to deploy Serverless Framework apps, you can now add the SST versions to your build scripts.
# Deploy the defaults
npx sst deploy
# To a specific stage
npx sst deploy --stage prod
# To a specific stage and region
npx sst deploy --stage prod --region us-west-1
# With a different AWS profile
AWS_PROFILE=production npx sst deploy --stage prod --region us-west-1
Serverless Dashboard
If you are using the Serverless Dashboard, you can try out Seed instead. It supports Serverless Framework and SST. So you can deploy the hybrid app that we've created here.
Seed has a fully-managed CI/CD pipeline, monitoring, real-time alerts, and deploys a lot faster thanks to incremental deploys. It also gives you a bird's-eye view of all your environments.
Lambda Function Triggers
Below is a list of all the Lambda function triggers available in Serverless Framework, along with their support status in SST (or CDK).
Type | Status |
---|---|
HTTP API | Available |
API Gateway REST API | Available |
WebSocket API | Available |
Schedule | Available |
SNS | Available |
SQS | Available |
DynamoDB | Available |
Kinesis | Available |
S3 | Available |
CloudWatch Events | Available |
CloudWatch Logs | Available |
EventBus Event | Available |
EventBridge Event | Available |
Cognito User Pool | Available |
ALB | Available |
Alexa Skill | Available |
Alexa Smart Home | Available |
IoT | Available |
CloudFront | Coming soon |
IoT Fleet Provisioning | Coming soon |
Kafka | Coming soon |
MSK | Coming soon |
Plugins
Serverless Framework supports a long list of popular plugins. In this section we'll look at how to replicate their functionality in SST.
To start with, let's look at the very popular serverless-offline plugin. It's used to emulate a Lambda function locally but it's fairly limited in the workflows it supports. There are also a number of other plugins that work with serverless-offline to support various other Lambda triggers.
Thanks to sst dev, you don't need to worry about using them anymore.
Plugin | Alternative |
---|---|
serverless-offline | sst dev |
serverless-offline-sns | sst dev |
serverless-offline-ssm | sst dev |
serverless-dynamodb-local | sst dev |
serverless-offline-scheduler | sst dev |
serverless-step-functions-offline | sst dev |
serverless-offline-direct-lambda | sst dev |
CoorpAcademy/serverless-plugins | sst dev |
serverless-plugin-offline-dynamodb-stream | sst dev |
Let's look at the other popular Serverless Framework plugins and how to set them up in SST.
Examples
A list of examples showing how to use Serverless Framework triggers or plugins in SST.
Triggers
HTTP API
functions:
listUsers:
handler: listUsers.main
events:
- httpApi:
method: GET
path: /users
createUser:
handler: createUser.main
events:
- httpApi:
method: POST
path: /users
getUser:
handler: getUser.main
events:
- httpApi:
method: GET
path: /users/{id}
new Api(stack, "Api", {
routes: {
"GET /users": "listUsers.main",
"POST /users": "createUser.main",
"GET /users/{id}": "getUser.main",
},
});
API Gateway REST API
functions:
listUsers:
handler: listUsers.main
events:
- http:
method: GET
path: /users
createUser:
handler: createUser.main
events:
- http:
method: POST
path: /users
getUser:
handler: getUser.main
events:
- http:
method: GET
path: /users/{id}
new ApiGatewayV1Api(stack, "Api", {
routes: {
"GET /users": "listUsers.main",
"POST /users": "createUser.main",
"GET /users/{id}": "getUser.main",
},
});
WebSocket
functions:
connectHandler:
handler: connect.main
events:
- websocket: $connect
disconnectHandler:
handler: disconnect.main
events:
- websocket:
route: $disconnect
defaultHandler:
handler: default.main
events:
- websocket:
route: $default
sendMessageHandler:
handler: sendMessage.main
events:
- websocket:
route: sendMessage
new WebSocketApi(stack, "Api", {
routes: {
$connect: "src/connect.main",
$default: "src/default.main",
$disconnect: "src/disconnect.main",
sendMessage: "src/sendMessage.main",
},
});
Schedule
functions:
crawl:
handler: crawl.main
events:
- schedule: rate(2 hours)
new Cron(stack, "Crawl", {
schedule: "rate(2 hours)",
job: "crawl.main",
});
SNS
functions:
subscriber:
handler: subscriber.main
events:
- sns: dispatch
subscriber2:
handler: subscriber2.main
events:
- sns: dispatch
new Topic(stack, "Dispatch", {
subscribers: {
subscriber1: "subscriber.main",
subscriber2: "subscriber2.main",
},
});
SQS
functions:
consumer:
handler: consumer.main
events:
- sqs:
arn:
Fn::GetAtt:
- MyQueue
- Arn
resources:
Resources:
MyQueue:
Type: "AWS::SQS::Queue"
Properties:
QueueName: ${self:custom.stage}-MyQueue
new Queue(stack, "MyQueue", {
consumer: "consumer.main",
});
DynamoDB
functions:
processor:
handler: processor.main
events:
- stream:
type: dynamodb
arn:
Fn::GetAtt:
- MyTable
- StreamArn
resources:
Resources:
MyTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: ${self:custom.stage}-MyTable
AttributeDefinitions:
- AttributeName: userId
AttributeType: S
- AttributeName: noteId
AttributeType: S
KeySchema:
- AttributeName: userId
KeyType: HASH
- AttributeName: noteId
KeyType: RANGE
BillingMode: 'PAY_PER_REQUEST'
StreamSpecification:
StreamViewType: NEW_AND_OLD_IMAGES
new Table(stack, "MyTable", {
  fields: {
    userId: "string",
    noteId: "string",
  },
  primaryIndex: { partitionKey: "userId", sortKey: "noteId" },
  stream: true,
  consumers: {
    myConsumer: "processor.main",
  },
});
Kinesis
functions:
processor:
handler: processor.main
events:
- stream:
type: kinesis
arn:
Fn::Join:
- ":"
- - arn
- aws
- kinesis
- Ref: AWS::Region
- Ref: AWS::AccountId
- stream/MyKinesisStream
new KinesisStream(stack, "MyStream", {
consumers: {
myConsumer: "processor.main",
}
});
S3
functions:
processor:
handler: processor.main
events:
- s3:
bucket: MyBucket
event: s3:ObjectCreated:*
rules:
- prefix: uploads/
new Bucket(stack, "MyBucket", {
notifications: {
myNotification: {
function: "processor.main",
events: ["object_created"],
filters: [{ prefix: "uploads/" }],
}
}
});
CloudWatch Events
functions:
myCloudWatch:
handler: myCloudWatch.handler
events:
- cloudwatchEvent:
event:
source:
- "aws.ec2"
detail-type:
- "EC2 Instance State-change Notification"
detail:
state:
- pending
const processor = new sst.Function(stack, "Processor", "processor.main");
const rule = new events.Rule(stack, "Rule", {
eventPattern: {
source: ["aws.ec2"],
detailType: ["EC2 Instance State-change Notification"],
},
});
rule.addTarget(new targets.LambdaFunction(processor));
CloudWatch Logs
functions:
processor:
handler: processor.main
events:
- cloudwatchLog:
logGroup: "/aws/lambda/hello"
filter: "{$.error = true}"
import * as logs from "aws-cdk-lib/aws-logs";
import * as LogsDestinations from "aws-cdk-lib/aws-logs-destinations";

const processor = new sst.Function(stack, "Processor", "processor.main");
// Look up the existing log group
const logGroup = logs.LogGroup.fromLogGroupName(stack, "LogGroup", "/aws/lambda/hello");
new logs.SubscriptionFilter(stack, "Subscription", {
  logGroup,
  destination: new LogsDestinations.LambdaDestination(processor),
  filterPattern: logs.FilterPattern.booleanValue("$.error", true),
});
EventBus Event
functions:
myFunction:
handler: processor.main
events:
- eventBridge:
eventBus:
Fn::GetAtt:
- MyEventBus
- Arn
pattern:
source:
- acme.transactions.xyz
resources:
Resources:
MyEventBus:
Type: AWS::Events::EventBus
Properties:
Name: MyEventBus
const processor = new sst.Function(stack, "Processor", "processor.main");
const rule = new events.Rule(stack, "MyEventRule", {
eventBus: new events.EventBus(stack, "MyEventBus"),
eventPattern: {
source: ["acme.transactions.xyz"],
},
});
rule.addTarget(new targets.LambdaFunction(processor));
EventBridge Event
functions:
myFunction:
handler: processor.main
events:
- eventBridge:
pattern:
source:
- aws.cloudformation
detail-type:
- AWS API Call via CloudTrail
detail:
eventSource:
- cloudformation.amazonaws.com
const processor = new sst.Function(stack, "Processor", "processor.main");
const rule = new events.Rule(stack, "rule", {
eventPattern: {
source: ["aws.cloudformation"],
detailType: ["AWS API Call via CloudTrail"],
detail: {
eventSource: ["cloudformation.amazonaws.com"],
},
},
});
rule.addTarget(new targets.LambdaFunction(processor));
Cognito User Pool
functions:
preSignUp:
handler: preSignUp.main
events:
- cognitoUserPool:
pool: MyUserPool
trigger: PreSignUp
existing: true
new Cognito(stack, "Auth", {
triggers: {
preSignUp: "src/preSignUp.main",
},
});
Plugins
serverless-domain-manager
plugins:
- serverless-domain-manager
custom:
customDomain:
domainName: api.domain.com
function:
listUsers:
handler: src/listUsers.main
events:
- httpApi:
method: GET
path: /users
new Api(stack, "Api", {
customDomain: "api.domain.com",
routes: {
"GET /users": "src/listUsers.main",
},
});
serverless-pseudo-parameters
plugins:
- serverless-pseudo-parameters
resources:
Resources:
S3Bucket:
Type: AWS::S3::Bucket
DeletionPolicy: Retain
Properties:
BucketName: photos-#{AWS::AccountId}
new s3.Bucket(stack, "S3Bucket", {
  bucketName: `photos-${stack.account}`,
});
serverless-step-functions
plugins:
- serverless-step-functions
functions:
hello:
handler: hello.main
StartAt: Wait
States:
Wait:
Type: Wait
Seconds: 300
Next: Hello
Hello:
Type: Task
Resource:
Fn::GetAtt:
- hello
- Arn
Next: Decide
Decide:
Type: Choice
Choices:
- Variable: $.status
StringEquals: Approved
Next: Success
Default: Failed
Success:
Type: Succeed
Failed:
Type: Fail
// Define each state
const sWait = new sfn.Wait(stack, "Wait", {
time: sfn.WaitTime.duration(cdk.Duration.seconds(300)),
});
const sHello = new tasks.LambdaInvoke(stack, "Hello", {
  // Use a different id for the function so it doesn't clash with the "Hello" task
  lambdaFunction: new sst.Function(stack, "HelloHandler", "hello.main"),
});
const sFailed = new sfn.Fail(stack, "Failed");
const sSuccess = new sfn.Succeed(stack, "Success");
// Define state machine
new sfn.StateMachine(stack, "StateMachine", {
definition: sWait
.next(sHello)
.next(
new sfn.Choice(stack, "Job Approved?")
.when(sfn.Condition.stringEquals("$.status", "Approved"), sSuccess)
.otherwise(sFailed)
),
});
serverless-plugin-aws-alerts
plugins:
- serverless-plugin-aws-alerts
custom:
alerts:
stages:
- production
topics:
alarm:
topic: ${self:service}-${opt:stage}-alerts-alarm
notifications:
- protocol: email
endpoint: foo@bar.com
alarms:
- functionErrors
// Send an email when a message is received
const topic = new sns.Topic(stack, "AlarmTopic");
topic.addSubscription(new subscriptions.EmailSubscription("foo@bar.com"));
// Post a message to topic when an alarm breaches
const alarm = new cloudwatch.Alarm(stack, "Alarm", {
  metric: lambda.Function.metricAllErrors(),
  threshold: 100,
  evaluationPeriods: 2,
});
alarm.addAlarmAction(new cloudwatchActions.SnsAction(topic));
serverless-stage-manager
plugins:
- serverless-stage-manager
custom:
stages:
- dev
- staging
- prod
if (!["dev", "staging", "prod"].includes(app.stage)) {
throw new Error("Invalid stage");
}