fogbugz-spec

Scope & Solution Summary

Product Summary

Briefly recap what the product is, focusing on what’s relevant to this change

FogBugz is an issue tracking service that simplifies the management of issues for small to medium sized software development teams. It is primarily used by 5-50 person software development teams to keep track of Bugs and Features and to aid in managing the software development lifecycle. The product predominantly facilitates the entry, assignment, and retrieval of core objects (primarily issues).

 

Problem Summary

What is the problem, deficiency, or gap which will be addressed by this spec?

Background
The product has 1.6M LOC and, based on our 4-quadrant Code Value vs Complexity Analysis, much of its homegrown legacy code base is bad and unimportant code that can be simplified. The code base has been incrementally developed over the past 17 years and, as a result, has an architecture that is outdated and limited in scale.
1-sentence Goal
Massively simplify FogBugz’s homegrown legacy code base by rebuilding using modern software development methods and a cloud-first architecture.

 

Solution Summary

In a nutshell, how are we solving the problem?

Replace bad and unimportant legacy code with the modern AWS cloud stack

 

  • Use AWS AppSync (ITD1, ITD6) + Lambda + DynamoDB (ITD7) to anchor the service offering
  • Use S3 to host static website content (ITD2) and front it with CloudFront + WAF (ITD3)
  • Use the AWS Elasticsearch service for case searching/filtering (ITD4)
  • Use Cognito user pools for authentication (ITD5)
  • Use Amazon SES with Lambda triggers to listen for inbound email and feed the system (ITD9)
    • Cases are stored in DynamoDB
    • Attachments, bodies, etc. are stored in S3

Scope Details

Input Constraints

What important constraints were placed in a Prod3 or by the customer, rather than being decided by the spec author?

  • Maintain the current operations & ensure the initial version is an exact clone with a modern backend
  • Full feature parity with the current product

Important Scope Items

What important items are in scope for the spec?

  • All current functionality (e.g. case creation, wiki, email, etc.)
  • Kiln integration – supported by the current product

Important Scope Exclusions

What important items are out of scope for the spec?

  • User migration
  • New functionality
  • Provisioning, monitoring, etc.

Solution Analysis

Important Topics

These are the sections in the next part of the document. Each section will have some background and narrative, and a set of Important Decisions. Think of them as the logical first-level decomposition of the problem.

 

Section Name – Brief Description
Architecture – What AWS building blocks will be used for the rebuild?
Authentication/Access – How should authentication and authorization be managed?
AWS Interface – How will the current FogBugz API be retrofitted to use the new AWS backend?
Data Modeling – Are there simplifications that can be made to the underlying data model?
Email Classification – How can email based creation/updating of issues be supported?

 

Important Technical Decisions (ITDs)

 

Architecture

 

ITD 1 – Use AWS AppSync as the platform for the rewrite

THE PROBLEM Which AWS technology should anchor our rewrite?
OPTIONS CONSIDERED
(Decision in bold)
  1. AWS AppSync
  2. AWS API GW
REASONING AppSync is a managed service supporting GraphQL, while API Gateway is a managed service that allows developers to create REST APIs. While both AppSync and API Gateway are valid options, GraphQL is an improvement over REST in that it reduces network communication (a single request instead of multiple, responses that contain only what is needed, etc.). In addition, AppSync simplifies code writing with its schema-based DynamoDB table generators and resolver processing logic, which can replace much of the custom and proprietary validation code that is currently in place.
ADDITIONAL NOTES Please note that RESTful APIs that are currently in place will have to be converted to GraphQL (ITD 6).
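
For illustration, here is a minimal AWS CDK (TypeScript) sketch of anchoring the service on AppSync with a DynamoDB-backed resolver. It is a sketch only and assumes CDK v2; the schema file path, the Cases table layout, and the getCase field are placeholders rather than the final FogBugz schema.

  import { Stack, StackProps } from 'aws-cdk-lib';
  import { Construct } from 'constructs';
  import * as appsync from 'aws-cdk-lib/aws-appsync';
  import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

  export class FogBugzApiStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
      super(scope, id, props);

      // GraphQL API anchored on AppSync (ITD 1 / ITD 6)
      const api = new appsync.GraphqlApi(this, 'FogBugzApi', {
        name: 'fogbugz-api',
        definition: appsync.Definition.fromFile('schema/schema.graphql'), // placeholder schema
      });

      // Cases live in DynamoDB (ITD 7); the stream is used later for Elasticsearch sync
      const casesTable = new dynamodb.Table(this, 'Cases', {
        partitionKey: { name: 'caseId', type: dynamodb.AttributeType.STRING },
        billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
        stream: dynamodb.StreamViewType.NEW_AND_OLD_IMAGES,
      });

      // Generated mapping templates stand in for hand-written CRUD/validation code
      const casesDs = api.addDynamoDbDataSource('CasesDataSource', casesTable);
      casesDs.createResolver('GetCaseResolver', {
        typeName: 'Query',
        fieldName: 'getCase',
        requestMappingTemplate: appsync.MappingTemplate.dynamoDbGetItem('caseId', 'caseId'),
        responseMappingTemplate: appsync.MappingTemplate.dynamoDbResultItem(),
      });
    }
  }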

 

ITD 2 – Host UI static assets on Amazon S3 and use S3 to deliver the web application to the users

THE PROBLEM How should the UI be hosted?
OPTIONS CONSIDERED
(Decision in bold)
  1. Host UI static assets on Amazon S3 and use S3 to deliver the web application to the users
  2. Use an Nginx/Apache/Spring-boot/other web servers
REASONING As FogBugz functionality will be handled by AppSync, we can simply configure an S3 bucket for static website hosting and eliminate the additional complexity of deploying and operating a dedicated web server.
ADDITIONAL NOTES Please note that in the context of AWS S3, a static website means a website built with pure frontend technologies (HTML/CSS/JS); 'static' refers to the files (e.g. flat files or images) that are returned directly from the S3 bucket to the browser without being processed by any backend server.
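
As a rough sketch (assuming CDK v2, a compiled UI bundle in ./dist, and placement inside the stack constructor from the ITD 1 sketch), the hosting setup is little more than a bucket plus an asset upload:

  import * as s3 from 'aws-cdk-lib/aws-s3';
  import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';

  // Bucket configured for static website hosting -- no web server to deploy or patch
  const siteBucket = new s3.Bucket(this, 'UiBucket', {
    websiteIndexDocument: 'index.html',
    websiteErrorDocument: 'error.html',
  });

  // Upload the compiled HTML/CSS/JS bundle on every deploy
  new s3deploy.BucketDeployment(this, 'DeployUi', {
    sources: [s3deploy.Source.asset('./dist')],
    destinationBucket: siteBucket,
  });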

 

ITD 3 – Front the S3 based static assets with CloudFront + WAF

THE PROBLEM How should the static content be served?
OPTIONS CONSIDERED
(Decision in bold)
  1. Directly from the S3 bucket
  2. Front the S3 based static assets with CloudFront + WAF
REASONING Using CloudFront (AWS CDN) to cache and serve content improves performance by providing content closer to where viewers are located. CloudFront integrates with WAF, a web application firewall that helps protect web applications from common web exploits. WAF lets you control access to your content, based on conditions that you specify, such as IP addresses or the query string value on a content request. CloudFront then responds with either the requested content, if the conditions are met, or with an HTTP 403 status code (Forbidden).
ADDITIONAL NOTES Here is an example of how the specified AWS components can be used to serve content in a secure manner.
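
The sketch below is one possible shape (assuming CDK v2, the siteBucket from the ITD 2 sketch, and the AWS managed common rule set; note that a CLOUDFRONT-scoped web ACL must be deployed in us-east-1):

  import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
  import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
  import * as wafv2 from 'aws-cdk-lib/aws-wafv2';

  // WAF web ACL with an AWS managed rule group for common web exploits
  const webAcl = new wafv2.CfnWebACL(this, 'SiteWaf', {
    scope: 'CLOUDFRONT', // must live in us-east-1
    defaultAction: { allow: {} },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'fogbugz-waf',
      sampledRequestsEnabled: true,
    },
    rules: [{
      name: 'AWSManagedCommonRules',
      priority: 0,
      overrideAction: { none: {} },
      statement: {
        managedRuleGroupStatement: { vendorName: 'AWS', name: 'AWSManagedRulesCommonRuleSet' },
      },
      visibilityConfig: {
        cloudWatchMetricsEnabled: true,
        metricName: 'common-rules',
        sampledRequestsEnabled: true,
      },
    }],
  });

  // CloudFront caches and serves the S3 content; requests failing WAF conditions receive a 403
  new cloudfront.Distribution(this, 'SiteDistribution', {
    defaultBehavior: { origin: new origins.S3Origin(siteBucket) },
    webAclId: webAcl.attrArn,
  });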

Elasticsearch is currently used to speed up searching capability. AWS has an Elasticsearch service offering that is easy to keep in sync with a DynamoDB table using Lambda functions.

ITD 4 – Use a RESTful endpoint to access the AWS Elasticsearch service for search

THE PROBLEM How should the AWS Elasticsearch instance be accessed by the UI?
OPTIONS CONSIDERED
(Decision in bold)
  1. Use a RESTful AppSync endpoint
  2. Use a GraphQL AppSync endpoint
REASONING While AppSync has a native Elasticsearch resolver, Option 1 is nonetheless our chosen approach, as internal benchmark testing showed the GraphQL resolver to be slower than its RESTful counterpart. This decision can be easily revisited and modified in the future.
ADDITIONAL NOTES Please note that as shown in the example below, a Route53/CloudFront combination allows a browser to fetch everything it needs from a single domain (GraphQL and RESTful endpoints as well as HTML content).

 

Use Case – FQDN
Public/static content – https://fogbugz.com/public
GraphQL queries/mutations/subscriptions – https://fogbugz.com/graphql
Static but non-public content with fine-grained Lambda-backed auth checks – https://fogbugz.com/private
Elasticsearch REST endpoints – https://fogbugz.com/api/core/v3/query
Parts of dashboards via QuickSight (e.g. iframe) – https://fogbugz.com/dashboard
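
For illustration, a UI-side search call against the RESTful endpoint could look like the sketch below; the request body shape and the use of a Cognito ID token in the Authorization header are assumptions, not a finalized contract.

  // Hypothetical UI helper: full-text search over cases via the REST search endpoint
  async function searchCases(idToken: string, text: string) {
    const res = await fetch('https://fogbugz.com/api/core/v3/query', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: idToken, // validated before the request reaches Elasticsearch
      },
      body: JSON.stringify({ query: { match: { title: text } }, size: 25 }), // assumed body shape
    });
    if (!res.ok) throw new Error(`search failed: ${res.status}`);
    return res.json();
  }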

 

Authentication/Access

Background

Currently, FogBugz has its own username:password authentication mechanism

ITD 5 – Use AWS Cognito user pools to provide identity and access tokens

THE PROBLEM How should users be authenticated using this backend?
OPTIONS CONSIDERED
(Decision in bold)
  1. Use AWS Cognito user pools to provide identity and access tokens
  2. Use Auth0 based JWT tokens
REASONING Cognito is very well integrated into the AWS ecosystem and is the natural choice for AWS based services
ADDITIONAL NOTES Note that social login or SAML-based auth (should this be introduced to FogBugz at some point in the future) can be supported as well.

 

Note as well that using a user migration Lambda trigger approach, as sketched below, can enable users to keep their current passwords while still using Cognito for access to AWS AppSync and Elasticsearch.
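
A minimal sketch of such a migration trigger follows; it assumes a hypothetical verifyLegacyCredentials() helper that checks the existing FogBugz credential store.

  import { UserMigrationTriggerHandler } from 'aws-lambda';
  import { verifyLegacyCredentials } from './legacy-auth'; // hypothetical helper

  export const handler: UserMigrationTriggerHandler = async (event) => {
    if (event.triggerSource === 'UserMigration_Authentication') {
      // Check the submitted password against the legacy FogBugz user store
      const user = await verifyLegacyCredentials(event.userName, event.request.password);
      if (!user) throw new Error('Bad credentials');

      event.response.userAttributes = { email: user.email, email_verified: 'true' };
      event.response.finalUserStatus = 'CONFIRMED'; // user keeps the existing password
      event.response.messageAction = 'SUPPRESS';    // no welcome email
    }
    return event;
  };

Cognito only invokes this trigger the first time it sees an unknown username; after that the user exists in the pool and the legacy store is no longer consulted.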

AWS Interface

Background

The current product uses two UIs (new and old):

  • New – using backend RESTful APIs (Ajax)
  • Old – not using APIs

Most of the features have already been migrated to the new (Ajax-based) UI, with only a few holdouts (e.g. pages under the account and settings section and the wiki pages) remaining.

ITD 6 – Use GraphQL endpoints for AppSync

THE PROBLEM AppSync lets you write a REST endpoint or a GraphQL one – what is the suggested approach?
OPTIONS CONSIDERED
(Decision in bold)
    1. Use RESTful endpoints
    2. Use GraphQL endpoints
REASONING Implementing the GraphQL endpoint with AppSync and using the AppSync client library to replace the REST calls with GraphQL ones is simple enough and eliminates the need to massage the JSON to suit the UI – with GraphQL, you get back JSON from the backend the way you want it.
ADDITIONAL NOTES Note ITD-4 P2 feedback as well
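
For illustration, the sketch below shows a UI-side call where a REST Ajax request has been replaced by a GraphQL query; the getCase operation and its fields are assumptions, and a plain fetch against the /graphql endpoint is shown rather than a specific AppSync client library.

  // Hypothetical query: the response is already shaped for the UI, so no JSON massaging is needed
  const GET_CASE = `
    query GetCase($caseId: ID!) {
      getCase(caseId: $caseId) {
        caseId
        title
        status
        assignee { fullName }
      }
    }`;

  async function getCase(idToken: string, caseId: string) {
    const res = await fetch('https://fogbugz.com/graphql', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', Authorization: idToken },
      body: JSON.stringify({ query: GET_CASE, variables: { caseId } }),
    });
    const { data, errors } = await res.json();
    if (errors) throw new Error(errors[0].message);
    return data.getCase;
  }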

 

Data Modeling

 

ITD 7 – Use a NoSQL store (DynamoDB) to store Issues, Wiki, and Discussion related data

THE PROBLEM The original FogBugz data model was relational, consisting of 50 or so columns in the issue table. What kind of data model (and resultant store) should be used for the rewrite?
OPTIONS CONSIDERED
(Decision in bold)
  1. Dynamo (key-val)
  2. Aurora (RDBMS)
REASONING While the original data model was relational, the case object carries custom fields, and it is believed that this can be better modeled using NoSQL. In addition, according to AWS documentation, the Data API for Aurora Serverless is a Beta release and should only be used for testing.
ADDITIONAL NOTES AppSync works well with DynamoDB, both for writing to it and for receiving updates from it via Lambda functions. DynamoDB can easily reindex an Elasticsearch instance using Lambda functions, keeping the two in sync. Note that receiving updates from DynamoDB requires using DynamoDB Streams.
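
A minimal sketch of the stream-driven sync is shown below, assuming a Node.js 18+ Lambda subscribed to the Cases table's stream, a 'cases' index, and an ES_ENDPOINT environment variable; IAM/SigV4 request signing is omitted for brevity.

  import { DynamoDBStreamHandler } from 'aws-lambda';
  import { unmarshall } from '@aws-sdk/util-dynamodb';

  export const handler: DynamoDBStreamHandler = async (event) => {
    for (const record of event.Records) {
      const keys = unmarshall(record.dynamodb!.Keys as any);
      const url = `${process.env.ES_ENDPOINT}/cases/_doc/${keys.caseId}`;

      if (record.eventName === 'REMOVE') {
        // Case deleted in DynamoDB -> remove it from the search index
        await fetch(url, { method: 'DELETE' });
      } else {
        // INSERT or MODIFY: index the new image, including any free-form custom fields
        const item = unmarshall(record.dynamodb!.NewImage as any);
        await fetch(url, {
          method: 'PUT',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(item),
        });
      }
    }
  };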

 

Email Classification

 

ITD 9 – Use AWS SES and Lambda functions for email support

THE PROBLEM How can the current proprietary SMTP handling be deprecated?
OPTIONS CONSIDERED
(Decision in bold)
  1. Use SendGrid or other email service provider
  2. Use AWS SES + lambda triggers
REASONING SES is very well integrated into the AWS ecosystem and is the natural choice for AWS based services
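
For illustration, here is a sketch of an SES receipt-rule Lambda that turns inbound mail into case creates/updates; the "(Case 1234)" subject convention, the table name, and the assumption that the receipt rule stores the raw MIME message in S3 are all placeholders.

  import { SESHandler } from 'aws-lambda';
  import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
  import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';

  const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

  // Hypothetical convention: replies to "(Case 1234) ..." update case 1234
  function parseSubjectForCaseNumber(subject: string): string | undefined {
    const m = subject.match(/\(Case (\d+)\)/i);
    return m ? m[1] : undefined;
  }

  export const handler: SESHandler = async (event) => {
    for (const record of event.Records) {
      const mail = record.ses.mail;
      const subject = mail.commonHeaders.subject ?? '';
      const caseId = parseSubjectForCaseNumber(subject) ?? mail.messageId; // new case if no match

      await ddb.send(new PutCommand({
        TableName: process.env.CASES_TABLE, // e.g. the Cases table from ITD 7
        Item: {
          caseId,
          title: subject,
          from: mail.source,
          bodyS3Key: mail.messageId, // raw message stored in S3 by the SES receipt rule
        },
      }));
    }
  };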