Engineering Standards
Introduction
Welcome to the Engineering Standards Document, a cornerstone of our commitment to excellence in software and engineering practices. This document outlines the established guidelines, best practices, and coding standards that we follow to ensure the quality, consistency, and maintainability of our engineering efforts.
The purpose of this document is to provide a comprehensive reference for all engineers within our organization, serving as a blueprint for the way we design, develop, and maintain our software and systems. By adhering to these standards, we strive to create code that is not only functionally sound but also easily comprehensible, maintainable, and robust.
This document is a living, evolving resource, reflecting our commitment to continuous improvement. Engineers are strongly encouraged to actively contribute to its refinement and enhancement.
How to contribute
We are really excited that you’re interested in contributing to our standards!
- The first thing to do is clone the following repository:
https://github.com/fasttrack-solutions/standards
- Create a branch from the main branch
Development setup
Install Hugo: `brew install hugo`. We also recommend updating brew first by running `brew update`.
To run the server: `hugo server -D --disableFastRender`
Workflow
Our software development lifecycle includes the following phases:
- Ideation: This phase is where different teams and stakeholders work together to progress an idea. Ideas for projects can come from internal teams, our goals & milestones, partner requests or other areas.
- Grooming: Once a project has passed the ideation phase and we have received sign-off from the stakeholders, it can enter the grooming process. Grooming means workshopping the technical solution in more detail, and often involves creating tasks and describing technical solutions. This usually involves the developers related to the project, the product team and/or Tech Leads or the CTO.
- Ready for Development: These are projects that have passed the previous states and are ready to be worked on. The product and tech team will prioritise the projects we are going to work on at any moment in time.
- PR: Once a piece of work is completed, a pull request is submitted for review. A minimum of two approvals (sometimes by a Tech Lead) is required to be able to merge. Always perform a self-review before sharing the pull request with other engineers.
- Testing: Some work requires testing on a staging environment. Follow this guide on how to deploy through OCD.
- QA: Our QA team performs extensive testing on our QA environment, which includes automated and smoke tests.
- Merge: The work has been tested and is now ready to be merged. If the branch contains many commits that do not provide any value to history tracking, consider squashing them.
- Release: The work will be included in the next release version.
Pull Request Practices
To improve the clarity, traceability, and filtering of our pull requests, we should always adhere to the following standards. Following these practices helps us maintain better organization through clear categorization of changes, enhance change tracking to easily identify what changed in which version, enable faster reviews by helping reviewers quickly understand context and scope, and provide enhanced traceability linking PRs back to specific tasks and teams.
PR Title Format
Use the following format for all pull request titles:
[ClickUp Task ID]: [emoji] [Task Name]
Example:
DEV-123: 🐛 Fix timezone issue in calendar sync
Labels
Tag PRs with relevant labels to help keep things organized, track changes better, and speed up reviews:
- Type: `feature`, `fix`, `refactor`, `hotfix`
- Version Introduced (VERY IMPORTANT): `2.48`, `2.49`, `2.50`
- Stream/Team: `rewards`, `vector`, `reliability`
- Status: `wip`, `blocked`
Code review
The primary purpose of code review is to make sure that the overall code health of our code base is improving over time.
Etiquette
- Be Respectful in Tone: When providing feedback, maintain a respectful and professional tone. Constructive criticism can be delivered with kindness and courtesy, promoting a positive and productive atmosphere.
- Avoid Personal Critiques: Remember that we are evaluating code, not individuals. Critiques should focus on the code itself, its quality, and adherence to coding standards. Avoid making personal criticisms or judgments.
- Collective Ownership: Our codebase is collectively owned by the engineering team. Everyone shares the responsibility for its quality and maintainability. Approving a PR means co-responsibility for the change.
- Acknowledge Positive Contributions: In addition to providing constructive feedback, take the opportunity to acknowledge and praise positive contributions in the code. Recognize and commend well-implemented solutions, clear documentation, and code that aligns with best practices.
Checklist
In doing a code review, you should make sure to:
- Understand the requirements
  - What's the problem/feature we're trying to solve/implement?
- Understand the code changes
  - How is the developer trying to solve the issue?
- Think about the code changes
  - Is this the optimal way of solving the issue or is there a better way? Think about performance, complexity, scalability and readability.
  - Could the developer have reused something that we have already implemented instead of rebuilding it?
  - Make sure we observe the DRY (Don't Repeat Yourself) principle - no duplicated code.
  - If removing stuff, make sure all unused references are cleaned up so we don't leave dangling pieces of code around.
  - Was the debugging code cleaned up? (No console.log, commented/unreachable code, etc.)
  - Look for orthographic errors, typos or unrequired abbreviations, as well as missing translations (front-end) or unclear naming for variables, methods, etc.
  - The developer isn't implementing things they might need in the future.
  - Any parallel programming is done safely.
  - Comments are clear and useful, and mostly explain why instead of what.
  - Code is appropriately documented and conforms to our style guide.
- Make sure Unit Tests are implemented
  - Did the developer cover all the relevant code with Unit Tests?
  - Look at the Unit Tests. Do they make sense? Do they add value? Do they actually test the relevant flow(s)?
  - Were all the edge cases (that you can think of) considered?
- Consider testing the code
  - Click the Cloudflare Pages link (if not available, check out the code and run it locally) and make sure the feature/bug is actually implemented/fixed (front-end).
  - Check out the branch and perform testing on the code, locally.
  - Any UI changes are sensible and look good.
- Don't be afraid of commenting on the PR and asking for clarifications if anything isn't clear
  - If it helps, consider adding a snippet of the desired code.
- At this point you either approve the PR or post some comments
Branches
Branches should be deleted once PRs are merged, given the branch is no longer needed for a custom Platform Version.
Code Refactoring Principles
At Fast Track, we aim to maintain high-quality, efficient, and sustainable code. This section outlines our key principles and standards when it comes to refactoring code.
Key Principles
- Collaboration: There may be multiple ways to refactor a piece of code to improve its structure or performance. If you find this to be the case, it is crucial to reach out to the wider team for discussion and to decide on the best approach.
- Consideration of Impact: If your refactoring change spans multiple files and the repository is actively being worked on by several team members, consider the potential impact of your changes. Evaluate the possible effects and disruptions that may be caused, and whether the work should be planned in advance to mitigate any negative consequences.
- Planning for Larger Refactorings: Substantial refactoring tasks should be formally logged as an item in ClickUp. This allows for better tracking, planning, and resource allocation. It also ensures the team is aware of ongoing refactoring work, encouraging communication and collaboration.
By following these principles and standards, we can ensure the quality of our codebase while promoting effective team collaboration. We strive to continuously improve our practices and encourage all team members to contribute their insights to our ongoing development.
Style guide
Consistency in code style and structure is a fundamental aspect of maintaining a clean, readable, and collaborative codebase.
The primary objective of this section is to establish a uniform coding style that not only enhances the readability of our code but also facilitates seamless collaboration among team members.
Backend
Libraries
This table provides an overview of the external libraries that have been approved for use within our development projects. These libraries have been carefully selected to enhance the functionality, efficiency, and reliability of our software solutions.
ℹ️ Libraries need to be reviewed from a performance, compliance and security perspective. Always obtain approval from the CTO or a Tech Lead before introducing a new external library in an application.
Type | Library name |
---|---|
Logging | slog, zap* , logrus* |
Web server | gin |
REST client | resty |
Integration tests | dockertest v3 |
Flags | envs (FT fork) |
Database library | sqlx |
Database migration | golang-migrate, sql-migrate |
Testing library | testify |
Testing utils | docker test utils |
Message SDK | MSDK |
HTTP Mocking library | jarcoal/httpmock |
* Deprecated: in existing services we will continue to use it until there is a bigger feature and we have an opportunity to replace the library with our new standard. In new services the current standard should always be applied.
Service architecture
Anatomy
We follow the standard project layout folder structure for organising projects.
These are the core directories of an application:
- `/cmd` for main applications. The directory name for each application should match the name of the executable you want to have (e.g., `/cmd/api`).
- `/internal` for private application and library code - internal packages that others should not be able to import.
- `/pkg` for library code that is OK to use by external applications (e.g., public gRPC clients).
- `/deployments` for Terraform and Docker files required by OCD.
A service should ideally include the following:
- An extensive Readme which describes the purpose of the service, includes a table of any potential configuration flags, and explains how to run the service locally.
- A Miro board that visually presents the service design.
- A docker compose file and any required additional Docker files to run the service locally.
Design Philosophy
- Packages should be contained and portable into a different project.
- A `utils` package is usually an indication of poor organisation. With that said, packages can contain the word `utils`.
- `main.go` should be minimal, with mostly initialisation and configuration of the application.
Personas & Consistency
We have a lot of microservices in our architecture. Services can have different personas. That means some things may differ compared to another service, such as the logging library in use or how code is organised.
By granting a certain amount of freedom when building services, we allow for flexibility in terms of trying new libraries, patterns, tools and so on.
Constantly attempting to keep all services 100% consistent with each other requires a great effort at a low return of value and hence is not prioritised.
However, within these boundaries a number of things remain consistent, such as:
- Project structure according to our anatomy guidelines.
- The use of Message SDK for reading and publishing queue messages.
- CI is built through Github actions workflows.
- Deployment code for OCD can be found in `deployments`, `oneclickdeployment` or `ocd`.
- The use of an approved logging library.
- The service has integration tests, testing the input and output.
- The service is unit tested.
- No staging/production credentials should form part of any config files.
Communication
Asynchronous
When a service needs to communicate asynchronously, it should always be done via queues. One of the benefits of a queue-based approach is persistence. We can be sure that no data will be lost, regardless of whether the consuming service is able to read the message or not.
We utilise the following message brokers, with varying ranges of coverage.
- RabbitMQ (AmazonMQ or self hosted in Kubernetes)
- Kafka (MSK)
- NATS (Jetstream) (experimental usage in some internal projects)
Each message should be addressed with a topic. All topics are defined in the topics library.
Synchronous
When a service needs to communicate synchronously via a request/response pattern, we have two ways of doing so depending on the destination.
For internal service-to-service communication, we use gRPC.
For external frontend-to-backend communication (such as Backoffice → CRM API) we use REST. On the horizon, we are looking at GraphQL as an option.
ℹ️ To indicate the type of response, we rely on HTTP status codes. Errors should be returned in the following format:
{
"error": "message"
}
Style Guide
We resonate with the Google Go style guide and have adopted it as our foundation.
In addition to this, we have established a number of guidelines internally:
Package Design
Packages should have a clear purpose and be responsible for that particular scope of work only. Aim to be able to reuse a package for multiple projects with little to no modification.
The name of the package should be related to what the package provides. Consider what the callsite will look like.
Interfaces
By defining interfaces, we specify the expected behavior of types rather than their concrete implementations. They contribute to testability, as they provide a straightforward way to create mock implementations for unit testing.
The naming convention is to either prefix the interface with `I` (e.g., `IService`) or use a suffix (e.g., `Services`).
Interfaces should be small and focused rather than big and generic, to provide a more modular design.
Ideally mocked interfaces are designed in a reusable manner. This reduces code duplication, improves maintainability, and ensures consistency in how different parts of the system are tested against the same interface.
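As an illustration, a minimal sketch of a small, focused interface (the package, type and method names are hypothetical, not taken from an existing service):

package rewards

import "context"

// Reward is an illustrative domain type.
type Reward struct {
	ID     int64
	Amount float64
}

// IRewardStore is small and focused: it describes only the behaviour callers
// need, which keeps mock implementations for unit tests trivial to write.
type IRewardStore interface {
	GetReward(ctx context.Context, id int64) (Reward, error)
}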
Context as First Argument
The first argument in a method should be the context. This allows us to pass existing context that may have a timeout/deadline, as well as leveraging values that travel through the context.
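A minimal sketch of the convention (the `Store` interface and `Message` type are illustrative):

package example

import (
	"context"
	"time"
)

// Message is an illustrative payload type.
type Message struct{ ID string }

// Store is an illustrative dependency.
type Store interface {
	Save(ctx context.Context, msg Message) error
}

// SaveMessage takes context first, so callers control deadlines, cancellation
// and any request-scoped values travelling through the context.
func SaveMessage(ctx context.Context, store Store, msg Message) error {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	return store.Save(ctx, msg)
}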
Error Handling
It should be up to the top-most level caller of a method to decide how it wants to handle an error that gets returned from a method. We should never `Fatal` deep inside methods, as we would be losing control.
Error strings should not be capitalized (unless beginning with proper nouns or acronyms) or end with punctuation, since they are usually printed following other context. That is, use fmt.Errorf("something bad") not fmt.Errorf("Something bad"), so that log.Printf("Reading %s: %v", filename, err) formats without a spurious capital letter mid-message. This does not apply to logging, which is implicitly line-oriented and not combined inside other messages.
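A sketch of the idea (the file name and config type are illustrative):

package main

import (
	"fmt"
	"log"
	"os"
)

// Config is an illustrative configuration type.
type Config struct{ Raw []byte }

// loadConfig only returns errors; it never decides to terminate the process.
func loadConfig(path string) (Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// lower-case, no trailing punctuation, wrapped with context
		return Config{}, fmt.Errorf("reading config %s: %w", path, err)
	}
	return Config{Raw: data}, nil
}

func main() {
	// The top-most caller decides how to handle the error.
	cfg, err := loadConfig("config.yaml")
	if err != nil {
		log.Fatalf("starting service: %v", err)
	}
	_ = cfg
}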
Anonymous Methods
Anonymous methods greatly degrade readability and make testing hard. Of course they can be used within a test as a small utility method, such as wrapping a bool as a pointer. However, using them within methods to do large portions of work should be avoided.
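A sketch of the preferred shape (the `Worker` type and its methods are illustrative):

package example

import (
	"fmt"
	"strings"
)

// Worker is an illustrative type.
type Worker struct{}

// Run delegates the actual work to a named method instead of declaring a
// large anonymous function inline, so normalise can be unit tested directly.
func (w *Worker) Run(items []string) {
	for _, item := range items {
		fmt.Println(w.normalise(item))
	}
}

func (w *Worker) normalise(item string) string {
	return strings.ToUpper(strings.TrimSpace(item))
}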
Reduce Nesting
Code should reduce nesting where possible by handling error cases/special conditions first and returning early or continuing the loop. Reduce the amount of code that is nested multiple levels.
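For example (the `Message` type and `process` callback are illustrative):

package example

import "errors"

// Message is an illustrative type.
type Message struct {
	Valid bool
}

// handle checks error cases first and returns early; the happy path stays at
// the lowest indentation level instead of being buried in nested ifs.
func handle(msg *Message, process func(*Message) error) error {
	if msg == nil {
		return errors.New("nil message")
	}
	if !msg.Valid {
		return errors.New("invalid message")
	}
	return process(msg)
}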
Unnecessary Else
If a variable is set in both branches of an if, it can be replaced with a single if.
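For example (the `Config` type and the default value are illustrative):

package example

import "time"

// Config is an illustrative type.
type Config struct{ Timeout time.Duration }

// timeoutFor sets the default first and overrides it with a single if,
// instead of assigning the variable in both branches of an if/else.
func timeoutFor(cfg Config) time.Duration {
	timeout := 30 * time.Second
	if cfg.Timeout > 0 {
		timeout = cfg.Timeout
	}
	return timeout
}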
Capitalised Abbreviations
Abbreviations in Go should be capitalised. This applies to variables and methods.
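For example (identifier names are illustrative):

package example

// Abbreviations keep their case throughout the identifier.
var (
	userID string // not userId
	apiURL string // not apiUrl
)

// ParseHTTPRequest, not ParseHttpRequest.
func ParseHTTPRequest(raw string) string { return raw }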
Error Check Condensing
By preferring no line breaks between the assignment and the error check, we group the code to improve readability.
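For example (`compute` and `validate` are illustrative helpers):

package example

import "errors"

func compute() (int, error) { return 42, nil }

func validate(v int) error {
	if v < 0 {
		return errors.New("negative value")
	}
	return nil
}

func run() error {
	// Assignment and error check are grouped, with no blank line in between.
	value, err := compute()
	if err != nil {
		return err
	}

	// When the result is not needed afterwards, scope the error to the if.
	if err := validate(value); err != nil {
		return err
	}
	return nil
}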
Choosing Good Variable Names
We believe that our software will be read far more often than it is written. For this reason, we choose to pay the extra cost in time to find good, descriptive variable names that clarify their purpose.
Avoid hardcoded values
Instead of using hardcoded values directly in the code, always prefer using constants. This practice improves readability, reduces the risk of errors, and makes it easier to maintain or update values in one place.
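For example (constant names and values are illustrative):

package example

import "time"

// Magic numbers and strings pulled out into named constants, defined once.
const (
	maxRetries     = 3
	requestTimeout = 30 * time.Second
	rewardsTopic   = "rewards.granted" // illustrative topic name
)

func shouldRetry(attempt int) bool {
	return attempt < maxRetries
}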
Existing Conventions
When working in older services that may not comply with our standards, it is better to remain consistent with the existing style to avoid confusion. The service could be considered for refactoring if there’s enough value to be gained.
Docker file
The Dockerfile should be as minimal as possible. It should be based on the official image of the language and version used. The Dockerfile should be able to build the application and run it. It should not contain any secrets or sensitive information.
Each service should have a unique binary name to make it easier to identify the service when running it.
Example for clickhouse-writer-service (replace with your service name):
FROM golang:1.23-alpine AS build-env
RUN apk add --update --no-cache git gcc musl-dev openssh
RUN apk add build-base
RUN mkdir /root/.ssh/
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
# Use git with SSH instead of https
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
RUN git config --global url."git@github.com:".insteadOf https://github.com/
ENV GOPRIVATE=bitbucket.org/fasttrackdevteam,github.com/fasttrack-solutions
# Skip Host verification for git
RUN echo "StrictHostKeyChecking no " > ~/.ssh/config
# Enable Go modules
ENV GO111MODULE=on
ADD . /go/src/github.com/fasttrack-solutions/clickhouse-writer-service
# Install api binary globally within container
RUN cd /go/src/github.com/fasttrack-solutions/clickhouse-writer-service && go build -o dist/clickhouse-writer-service ./cmd/service
FROM alpine
RUN apk update && apk upgrade \
&& apk add --no-cache ca-certificates wget \
&& update-ca-certificates
WORKDIR /app
COPY --from=build-env /go/src/github.com/fasttrack-solutions/clickhouse-writer-service/dist/clickhouse-writer-service /app/
ENTRYPOINT ["./clickhouse-writer-service"]
Performance
API endpoints
When designing API endpoints that return large datasets, always implement pagination. This ensures efficient data retrieval, improves performance, and reduces the risk of overwhelming clients with excessive data.
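A minimal sketch of a paginated endpoint using gin (the handler name, query parameters, defaults and the `EventStore` dependency are illustrative):

package api

import (
	"net/http"
	"strconv"

	"github.com/gin-gonic/gin"
)

// Event is an illustrative response type.
type Event struct {
	ID   int64  `json:"id"`
	Name string `json:"name"`
}

// EventStore is an illustrative data-access dependency.
type EventStore interface {
	ListEvents(limit, offset int) ([]Event, int, error)
}

// ListEventsHandler returns one page of events instead of the full dataset.
func ListEventsHandler(store EventStore) gin.HandlerFunc {
	return func(c *gin.Context) {
		page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
		pageSize, _ := strconv.Atoi(c.DefaultQuery("page_size", "50"))
		if page < 1 {
			page = 1
		}
		if pageSize < 1 || pageSize > 200 {
			pageSize = 50
		}

		events, total, err := store.ListEvents(pageSize, (page-1)*pageSize)
		if err != nil {
			// generic message to the client; details stay in server logs
			c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list events"})
			return
		}
		c.JSON(http.StatusOK, gin.H{
			"data":      events,
			"page":      page,
			"page_size": pageSize,
			"total":     total,
		})
	}
}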
Infra
Kubernetes
Annotations
Annotations are applied according to the Kubernetes naming convention, `x.fasttrack.dev/key: "value"`.
Available Fast Track specific annotations:
Name | Purpose |
---|---|
environment-scaler.fasttrack.dev/whitelisted | Signal to the scaler that services carrying this annotation should be whitelisted from scaling |
scaling.fasttrack.dev/by | Signal that manual scaling has been applied and by whom (can be a user or a service) |
scaling.fasttrack.dev/timestamp | Indicates when the scaling was applied |
scaling.fasttrack.dev/note | Reason for the scaling |
ℹ️ All scaling should be done through the `ft scale` command to ensure metadata is captured.
More information on the utility available here.
Terraform
Variable Naming Convention
When adding module-specific variables, always add the module name as a prefix. This makes it easier to distinguish system-wide variables from module-specific ones.
Locals
Utilize local variables to handle complex logic within resource configurations. This simplifies complex expressions, reduces errors, and makes the resource definitions more readable.
Frontend
⚠️ Future Update:
We need to migrate to Vue 3.5.3. Ensure that all dependencies used on our sites are compatible with this new version of Vue.
Frameworks and tools
Our legacy projects are made in `Vue2` + `Vuex` + `Webpack`, while the new ones use `Vue3` + `Pinia` + `Vite`. We also make use of `ESLint` for code linting and `Jest` / `Vitest` for unit tests. We also use `LESS` as a CSS pre-processor.
For smaller projects, such as our main sites like Fasttrack Solutions and TheGreco.com, we develop using `Nuxt3` and `Nuxt Studio` to manage content and handle site translations.
Style Guide
As a general rule we try to make the code as readable as possible with enhanced code reviews and automatic formatting tools such as `ESLint` and `Prettier`, with newer projects using `TypeScript` as well.
For coding style, we normally refer to the AirBnB JavaScript Style Guide or the ts.dev/style TypeScript guide for generic coding best practices in JavaScript or TypeScript, and the Vue.js Style Guide for more Vue-specific coding best practices.
Other than the general coding style above we have specific Fast Track styles as well:
script
In our return statements we should keep one-liners as small as possible. If there are more than three (3) checks, we should use simple if-statements instead.
// bad
if (
this.computation_type?.slug === "realtime" &&
this.computation_qualifying_classes?.length > 0 &&
this.computation_triggers.length > 0 &&
this.mode !== "New"
) {
return true;
}
// good
if (this.computation_type?.slug !== "realtime") return false;
if (this.computation_qualifying_classes?.length === 0) return false;
if (this.computation_triggers.length === 0) return false;
if (this.mode === "New") return false;
We also aim to use the Composition API as the new standard when creating components. That way we structure the code a bit differently than in the Options API.
The first thing that is different is that we should use `<script>` at the top, `<template>` second and lastly the `<style>` in our components.
The structure in `<script>` is to organize by logical concern, wrapped into functions to be used as composables. What we mean by that is that we should group a ref, the computed that depends on it, the function that reads one of those, and so on into one group. Code that does something together, goes together.
<script setup>
import { ref, computed } from 'vue';
const { count, doubleCount, increment } = useCounter();
const { name, greeting, changeName } = useGreeting();
function useCounter() {
// all things related to count
const count = ref(0);
const doubleCount = computed(() => count.value * 2);
const increment = () => {
count.value++;
};
return {
count, doubleCount, increment
};
}
function useGreeting() {
// all things related to name
const name = ref('John');
const greeting = computed(() => `Hello, ${name.value}!`);
const changeName = (newName) => {
name.value = newName;
};
return {
name, greeting, changeName
};
}
</script>
template
In our Vue components we try to keep the template as uncluttered as possible. We achieve this by doing the following (examples):
- use a computed in a v-if (or v-show) if there are two (2) conditions or more
// bad
<template>
<queues
v-if="queues.some(x => x.selected) && queues.filter(x => x.name === 'test').length > 0"
>
</queues>
</template>
// good
<template>
<queues v-if="showQueues"> </queues>
</template>
- use a method instead of setting data to more than one (1) data prop in the template on a click-event
// bad
<template>
<queues
@click="expandQueues = !expandQueues, hideOtherStuff = !hideOtherStuff"
>
</queues>
</template>
// good
<template>
<queues @click="expandAndHide"> </queues>
</template>
style
- use css-classes instead of inline styling
- try to use the BEM convention
Naming
Interfaces
We try to name our interfaces with the ‘I’ prefix or ‘Interface’ as a suffix
interface IAnimal {
sound: string;
name: string;
}
interface IntegrationInterface {
brand: string;
brandId: number;
}
Architecture
We are trying out Micro Frontend Architecture based on a mono repo and federated modules in a few parts of the frontend. More info on Module Federation can be found here. Since it is a `Webpack` plugin, our `Vite` projects make use of an additional package to support Module Federation, which is vite-plugin-federation.
The idea is to have several Micro Apps exposing themselves as federated modules, and a Root Application which will host them as remotes and will take care of dynamically loading them as in-browser modules.
In more detail
The Root Application will have its own repo and will be using `Vue3` + `Vite`. The Micro Apps will live in the same repo, inside a folder named `packages`, but will not need to use the same technology; they can potentially use any framework as long as they expose themselves as remotes.
Testing
Type | Library Name |
---|---|
For vue2 | jest |
For vue3 | vitest |
For vue3 | Vue test utils |
Tests
Currently we are using Vue 3 + TypeScript in the most recent repositories and Vue 2 in older ones. This changes the test framework we use for unit tests, which this guide will cover. As a start, we go through the similarities and give a high-level guide on why we test, what to test, which structure to follow and which library we use.
Why we test
Testing helps our users stay content and enjoy good performance when using our applications. For us as developers, it saves a lot of time when resolving bugs, and when adding new features it ensures we do not break previous behaviour of the code.
What to test
You can test the components’ behavior, such as whether
- an event was emitted
- a data property was correctly updated
- a computed property is correctly calculated.
Structure
We aim to use the AAA-structure. Arrange, Act, Assert.
Each test should start with an arrange section, then the act section, and finally the assert section. Sections don’t generally overlap (with few exceptions).
it("should return 5 when given 2 and 3", () => {
let adder = new Adder();
let sum = adder.add(2, 3);
expect(sum).toBe(5);
});
The Arrange section is where the initial state is set up. A good unit test tests only a single state change, so we want to put our code into the initial state. That's what the arrange is for.
Next comes the Act section where we execute some kind of state change or computation. In our example we call the add method in the Act section.
Finally comes the Assert section. Here we assert that the resulting state, or computation is what we expect. This is where we cause the test to fail if we get an unexpected result, or pass if we get the expected result. In the example we expect the sum to be 5. Notice that each section is easy to identify. Whitespace helps.
This also makes tests predictable and improves readability. If every test has some uniformity like this, then dealing with any test, whether you wrote it or not, becomes easier.
Library
Currently we are using the @vue/test-utils library, which provides a set of utility functions for testing Vue components.
Vue 3 with Vitest:
This component library uses Vue 3 with vitest.
An example:
import { shallowMount } from "@vue/test-utils";
import { describe, it, expect, vi } from "vitest";
import FTMockComponent from "@/components/FTMockComponent.vue";
function factory(props) {
const wrapper = shallowMount(FTMockComponent, {
props,
});
return wrapper;
}
describe("FTMockComponent.vue", () => {
it("Our component should render a mock element", async () => {
const props = {};
const wrapper = factory(props);
expect(wrapper.findByTestId("mock").exists()).toBe(true);
});
});
Helpers to find DOM elements.
Jest
test-utils.js
const findByTestId = (wrapper, id) => wrapper.find(`[data-test-id='${id}']`);
export { findByTestId };
Then import as needed
import { shallowMount } from '@vue/test-utils';
import { findByTestId } from '@/scripts/test-utils';
// Component import path is assumed for the example
import ActionGroupListItem from '@/components/ActionGroupListItem.vue';

describe('ActionGroupListItem', () => {
  it('emits an event when the split percentage value changes', () => {
    const wrapper = shallowMount(ActionGroupListItem);
    const input = findByTestId(wrapper, 'action-group-list-item-percentage');
  });
});
Vitest
In existing projects the helper should already be set up and you should be able to find the DOM element with the helper method `findByTestId` that is extended on the wrapper item.
If you are creating a new project, however, you might have to create the helper method with the following code in a new file:
setupTests.js
import { DOMWrapper, createWrapperError, config } from "@vue/test-utils";
const DataTestIdPlugin = (wrapper) => {
function findByTestId(selector) {
const dataSelector = `[data-testid='${selector}']`;
const element = wrapper.element.querySelector(dataSelector);
if (element) {
return new DOMWrapper(element);
}
return createWrapperError("DOMWrapper");
}
return {
findByTestId,
};
};
config.plugins.VueWrapper.install(DataTestIdPlugin);
and then add it to your Vite config
vite.config.js
export default defineConfig(({ mode }) => {
return {
...
test: {
setupFiles: './src/setupTests.js',
},
...
}
});
Data Science
We are relying on the PEP8 style guide.
Databases
Our primary database of choice is MySQL 8 (RDS) with the utf8mb4 character set.
Migration tools
We always use a CLI tool to modify the schema. It ensures consistency in naming and in the order migrations are applied. The approved migration tools are golang-migrate and sql-migrate (see the approved backend libraries table).
Naming
Schemas
In our database schemas, including tables and columns, we adopt the use of `snake_case` syntax.
Indexes should have names
It is best practice to name indexes. Doing so makes it easier to remove them in future migrations.
Enhancing `is_x` columns
Instead of storing an `is_deleted` boolean, reach for a `deleted_at` timestamp instead.
Its cost is negligible, both in data storage and coding overhead.
By considering a `NULL` timestamp as `false` and any non-`NULL` timestamp as `true`, we can determine if something was deleted and when.
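A sketch of the pattern using sqlx (the table, columns and function names are illustrative):

package store

import (
	"context"
	"time"

	"github.com/jmoiron/sqlx"
)

// User is an illustrative row type; a NULL deleted_at means "not deleted".
type User struct {
	ID        int64      `db:"id"`
	DeletedAt *time.Time `db:"deleted_at"`
}

// SoftDeleteUser records when the row was deleted, not just that it was.
func SoftDeleteUser(ctx context.Context, db *sqlx.DB, id int64) error {
	_, err := db.ExecContext(ctx, `UPDATE users SET deleted_at = NOW() WHERE id = ?`, id)
	return err
}

// ActiveUsers treats a NULL timestamp as false (not deleted).
func ActiveUsers(ctx context.Context, db *sqlx.DB) ([]User, error) {
	var users []User
	err := db.SelectContext(ctx, &users, `SELECT id, deleted_at FROM users WHERE deleted_at IS NULL`)
	return users, err
}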
Idempotency
Prefer `CREATE TABLE IF NOT EXISTS hello` rather than `CREATE TABLE hello`.
Clickhouse
Consider using `ReplacingMergeTree` rather than `MergeTree` to achieve idempotency directly on the table.
Rollback support
All database migrations must be designed with rollback support in mind. Each migration should include both `up` and `down` steps. The `up` step applies the migration, while the `down` step reverts it. This practice ensures that our database can be rolled back to a previous state in case of issues or changes in requirements.
Cross joining
To maintain the integrity and isolation of our databases, we strictly prohibit cross joining with other databases. Our databases should remain self-contained, and data access should be constrained within the owning service.
⚡ Performance
With bigger clients come bigger expectations on performance.
It is simply not enough to have working code - the code needs to be performant.
This is achieved by considering:
- Scalability: If it takes a service 1ms to process a message and we have 2,000 messages/s incoming, it will fall behind. We need to be able to scale our services to run multiple instances.
- Cost Awareness: Consider the time and space complexity (`O(n)` notation) of an operation.
Keep in mind
Cost of an operation
Be mindful about resources. A waste of memory or CPU is detrimental to performance.
Batching
Consider the following example: instead of paying a fixed per-operation cost (such as one database round-trip per message), group items and process them together.
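A sketch of batched writes using sqlx (the table, row type and batch size are illustrative):

package writer

import (
	"context"
	"fmt"

	"github.com/jmoiron/sqlx"
)

// Event is an illustrative row type.
type Event struct {
	ID      string `db:"id"`
	Payload string `db:"payload"`
}

const batchSize = 500 // illustrative; tune for your workload

// Flush writes events in multi-row batches instead of one INSERT per event,
// amortising the per-operation overhead across many rows.
func Flush(ctx context.Context, db *sqlx.DB, events []Event) error {
	for start := 0; start < len(events); start += batchSize {
		end := start + batchSize
		if end > len(events) {
			end = len(events)
		}
		// sqlx expands a slice of structs into a single multi-row INSERT.
		if _, err := db.NamedExecContext(ctx,
			`INSERT INTO events (id, payload) VALUES (:id, :payload)`,
			events[start:end]); err != nil {
			return fmt.Errorf("inserting batch: %w", err)
		}
	}
	return nil
}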
Caching
Use caching to boost performance by storing frequently accessed data, reducing retrieval times and minimizing resource usage.
- Databases: RDS
- Caching: Redis
Logging
Effective logging is a vital aspect of our production environment and plays a crucial role in diagnosing issues, monitoring performance, and ensuring the reliability of our services. To maintain a high level of clarity and security, we adhere to the following guidelines:
Error and Warning Modes
In our production environment, services should primarily run in `error` or `warning` mode.
Because we operate on a fatal-and-retry basis when consuming messages, we choose to restrict logs to issues that require immediate attention. This saves the performance cost that comes with logging and instead prioritises high throughput. Logs can be elevated on demand when deeper troubleshooting is needed.
Performance
While logging is essential for diagnosing issues, it’s equally crucial to avoid unnecessary or excessive logging, which can negatively impact performance. Care should be taken to log only information that is valuable for troubleshooting and monitoring. Always question what value a particular log will bring.
Reserved Levels
Each logging level has a specific purpose:
- `Error`: Reserved for critical issues that require immediate attention, such as unexpected failures or application crashes.
- `Warning`: Used for non-fatal issues that might impact performance or functionality but do not cause service failure.
- `Info`: Typically used for high-level operational information, like service startup and shutdown.
- `Debug`: Intended for detailed debugging information that may be useful during development and troubleshooting but is not necessary in production.
Keep in mind
Always strive to use one of the described levels when logging to remain in control of which logs to display.
GIT
We aspire to use conventional commits.
ℹ️ When working on large PRs with many commits, consider using the squash feature to reduce clutter in the tree. Keep in mind that this is only encouraged if there is no value in keeping the history.
Security
Checks
To maintain a strong security posture, it is essential to ensure all security checks are passing in CI during build stage. Always aim for 100% passing checks.
- We have govulncheck integrated in our CI.
- We have gosec integrated in our CI.
We encourage you to install these tools in your local workstation and execute them as a precautionary step before doing a commit.
You can install them using brew:
brew install govulncheck gosec
Once installed, you can execute them on any go repository.
Vulnerability analysis:
govulncheck ./...
Static code analysis:
gosec ./...
Logs
⚠️ Sensitive information, such as passwords or other confidential data, must never be logged. Ensure that no sensitive information is included in log entries to protect the security of our systems and user data.
Exposing sensitive data
⚠️ Be careful when returning errors from an endpoint. We generally want to return generic error messages to the consumer of an endpoint to avoid exposing sensitive data, and log more details on the server to help the internal debugging process.
SQL injection
⚠️ All database queries using variables should leverage the placeholder syntax from our database library to prevent SQL injection.
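A sketch with sqlx (the table and row type are illustrative); the driver escapes the bound value, so user input never becomes part of the SQL text:

package store

import (
	"context"

	"github.com/jmoiron/sqlx"
)

// Player is an illustrative row type.
type Player struct {
	ID   int64  `db:"id"`
	Name string `db:"name"`
}

// FindPlayerByName uses placeholder syntax instead of string concatenation;
// never build the query with something like fmt.Sprintf("... WHERE name = '%s'", name).
func FindPlayerByName(ctx context.Context, db *sqlx.DB, name string) (Player, error) {
	var p Player
	err := db.GetContext(ctx, &p, `SELECT id, name FROM players WHERE name = ?`, name)
	return p, err
}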