
Bringing the Modern Data Stack to Engineering Operations

Lars Kamp
Some Engineer
Vitaly Gordon
Founder & CEO at Faros AI

In the old world of software engineering, developer productivity was measured by lines of code. However, time has shown that code quantity is a poor measure of productivity. So why do engineering organizations continue to rely on this metric? Because they lack a "single-pane" view across all the different systems that hold data on the activities that actually correlate with productivity.

That's where Faros AI comes in. Faros AI connects the dots between engineering data sources—ticketing, source control, CI/CD, and more—providing visibility and insight into a company's engineering processes.

Vitaly Gordon is the founder and CEO of Faros AI. Vitaly came up with the concept for Faros AI when he was VP of Engineering in the Machine Learning Group at Salesforce. As an engineering leader, his job was not just code; he also had business responsibilities, which meant interacting with other functions of the business, like sales and marketing.

In those meetings, Vitaly realized that the other functions used standardized metrics to measure the performance of their business, such as customer acquisition cost (CAC), lifetime value (LTV), and net dollar retention (NDR). These functions built data pipelines to acquire the necessary data and compute these metrics. Surprisingly, engineering did not have the same understanding of its own processes.

An example of an engineering metrics framework is DORA. DORA is an industry-standard benchmark that correlates deployment frequency, lead time, change failure rate, and time to restoration with actual business outcomes and employee satisfaction. For hyperscalers like Google and Meta, these metrics are so important that they employ thousands of people just to build and report them.

So, how do you calculate DORA metrics for your business? With data, of course. But, it turns out the data to calculate these metrics is locked inside the dozens of engineering tools used to build and deliver software. While those tools have APIs, they are optimized for workflows, not for exporting data. If you're not a hyperscaler with the budget to employ thousands of people, what do you do? You can turn to Faros AI, which does all the heavy lifting of acquiring data and calculating metrics for you.
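As a sketch of what such a calculation involves: given deployment records exported from a CI/CD tool, three of the four DORA metrics fall out of simple arithmetic. The record shape below is illustrative, not any real tool's export format or Faros AI's actual pipeline.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records, as a CI/CD tool's API might expose
# them. Field names are invented for illustration.
deployments = [
    {"commit_at": datetime(2023, 4, 30, 9), "deployed_at": datetime(2023, 5, 1, 10), "failed": False},
    {"commit_at": datetime(2023, 5, 2, 11), "deployed_at": datetime(2023, 5, 2, 15), "failed": True},
    {"commit_at": datetime(2023, 5, 3, 12), "deployed_at": datetime(2023, 5, 4, 12), "failed": False},
]

days_observed = 7  # length of the observation window

# Deployment frequency: deployments per day over the window.
deploy_frequency = len(deployments) / days_observed

# Lead time for changes: average time from commit to deployment.
lead_times = [d["deployed_at"] - d["commit_at"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
```

The fourth metric, time to restoration, additionally needs incident data (when a failure was detected and when service was restored), which typically lives in yet another tool, underlining the data-integration problem described above.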

The lessons learned from the modern data stack (MDS) come in when building data pipelines to connect data from disparate tools. In this episode, we explore the open-source Faros Community Edition and the data stack that powers it.

How to Build for the Cloud on Your Laptop!

Lars Kamp
Some Engineer
Waldemar Hummer
Co-Founder & CTO at LocalStack

Waldemar Hummer is Co-Founder and CTO at LocalStack. LocalStack gives you a fully functional local cloud stack so you can develop and test your cloud and serverless apps offline. LocalStack is an open-source project that started at Atlassian, where its initial purpose was to keep developers productive on their daily commutes despite poor internet connectivity.

LocalStack emulates AWS cloud services on your laptop, increasing the number of phases in your infrastructure environment to four: local, test, staging, and production—with LocalStack efficiently covering the local and test phases (including CI builds). LocalStack also integrates with a large set of other cloud tools, such as Terraform, Pulumi, and CDK.

While the commute problem went mostly away with COVID, it became clear that a local development environment has speed, quality, and cost advantages. Local provisioning of resources is faster and can speed up dev feedback cycles by an order of magnitude. Developers can start their work without IAM enforcement, then later introduce security policies and migrate to the cloud. A local environment also reduces the cost of cloud sandbox accounts.

A key requirement for LocalStack to be valuable is parity with cloud provider services, which means replicating services and API responses. LocalStack is built in Python, and Waldemar walks us through LocalStack's process of building out the platform to have 99% parity with AWS.
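In practice, parity means an AWS SDK call against LocalStack should behave like the same call against the real service; clients are simply redirected to LocalStack's local edge endpoint (http://localhost:4566 by default). A minimal sketch of that redirection, assuming LocalStack is running locally with its default settings:

```python
# Sketch: build the client configuration an AWS SDK (e.g. boto3) needs
# to talk to LocalStack instead of real AWS. Assumes LocalStack's
# default edge endpoint; dummy credentials are accepted locally.
LOCALSTACK_ENDPOINT = "http://localhost:4566"

def localstack_client_config(service: str) -> dict:
    """Keyword arguments for boto3.client(), redirected to LocalStack."""
    return {
        "service_name": service,
        "endpoint_url": LOCALSTACK_ENDPOINT,
        "region_name": "us-east-1",
        "aws_access_key_id": "test",       # any value works locally
        "aws_secret_access_key": "test",
    }

# With LocalStack running, usage would look like:
#   import boto3
#   s3 = boto3.client(**localstack_client_config("s3"))
#   s3.create_bucket(Bucket="demo")  # bucket exists only on your laptop
```

Because only the endpoint changes, the same application code can run against LocalStack in the local and test phases and against real AWS in staging and production.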

In this episode, we also cover developer marketing, community building, and how LocalStack amassed over 44,000 stars on GitHub. Waldemar takes us through both a live LocalStack demo and a deep-dive into LocalStack's GitHub repository.

A Shift-Left Approach to Building Serverless FinTech Applications

Lars Kamp
Some Engineer
Jonathan Bernales
DevOps Engineer at Ekonoo

There is a new generation of companies that are building their applications 100% cloud-native, with a pure serverless paradigm. One such company is Ekonoo, a French FinTech startup that enables customers and organizations to efficiently invest in retirement funds.

Jonathan Bernales is a DevOps Engineer at Ekonoo. In this interview, Jonathan walks us through Ekonoo's approach of giving developers the autonomy to build and deploy code along with the responsibility for security and cost.

Holding developers responsible for security and cost is a rather new part of "shift-left." Cost awareness becomes part of the development culture. To keep cloud bills under control, Ekonoo developers are responsible for their individual test accounts and have access to the AWS Billing Console and AWS Cost Explorer.

At Ekonoo, there is no dedicated "production team." Rather, DevOps collaborates with developers to create guidelines and guardrails for architecture, automation, security, and cost. The entire Ekonoo stack runs on AWS using native AWS services such as CloudFormation, Lambda, and Step Functions.

Watch this episode to learn about Ekonoo's transition to a microservices architecture and the lessons learned along the way.

Site Reliability Engineering and Cloud-Native Architecture

Lars Kamp
Some Engineer
Andreas Grabner
DevOps Activist at Dynatrace

Andreas Grabner is a DevOps Activist at Dynatrace, where he has fifteen years of experience helping developers, testers, operations, and XOps folks do their jobs more efficiently.

In this episode, Andreas and I discuss how the shift to cloud-native and more dynamic infrastructure is accompanied by a change in how developers, architects, and site reliability engineers (SREs) work together.

With the sheer quantity of resources running in cloud-native infrastructure and the monitoring signals produced by each resource, the only way to keep growing without "throwing people at the problem" is to turn to automation.

Andreas makes a noteworthy distinction between DevOps engineers and SREs:

  • DevOps engineers use automation to speed up delivery and get new changes into production.
  • SREs use automation to keep production healthy.

SREs are often former IT operations engineers and system administrators who were responsible for physical machines, virtual machines (VMs), and Kubernetes clusters. As SREs, they move up the stack, becoming responsible for everything from the hardware at the bottom all the way up to serverless functions and the service itself.

We dive into the differences between SLAs, SLOs, and Google's four golden signals of monitoring—latency, traffic, errors, and saturation. Andreas shares the example of a bank and how they started defining SLOs to measure the growth of their mobile app business versus just defining engineering metrics.

This episode covers "engineering for game days," chaos engineering, and making the unplannable, plannable. Andreas shares his perspective on the general trend to "shift left" and include performance engineering in the development and architecture of cloud-native systems.

Shifting from FinOps to Financial Engineering

Lars Kamp
Some Engineer
Dvir Mizrahi
Head of Financial Engineering at Wix

Dvir Mizrahi is Head of Financial Engineering at Wix, the leader in website creation with 220 million users running e-commerce operations. And with over six thousand employees, Wix ships more than fifty thousand builds each day.

Dvir is also among the original authors of the AWS Cloud Financial Management certification.

In this episode, Dvir covers how Wix shifted from FinOps to Financial Engineering: an engineering-first approach to building tooling and processes that track financial key performance indicators (KPIs) for its multi-cloud infrastructure. The new approach established a culture of financial responsibility that supports Wix's continued growth.

Wix started in 2006 and initially ran its infrastructure on-premises. Today, Wix runs a multi-cloud environment on Google Cloud and Amazon Web Services (AWS). As Wix shifted from on-premises to the cloud, the procurement process for resources changed with it.

In the old world, purchasing additional hardware was a closed and controlled process. But in the cloud, Dvir compares resource procurement to "a supermarket where people can go in, take whatever they want, and leave without passing the registers." A developer could spin up a hundred thousand instances with just the click of a button.

Wix realized the financial risk that comes with liberal permissions to spin up infrastructure and hired Dvir in 2017. FinOps approaches infrastructure governance from a billing perspective and handles workloads already provisioned in the cloud. But at Wix's scale, where there are thousands of engineers, the FinOps approach stops working. "By the time you have a financial incident, it's too late and you didn't govern anything."

Dvir shifted the strategy to proactively preventing waste in the first place by incorporating financial KPIs into engineering goals. In addition, Dvir built an internal platform called "InfraGod," which collects infrastructure data, integrates with Terraform, and enforces rules at the time of resource provisioning. Taking action when resources are provisioned rather than after the fact is "the difference between Finance and Financial Engineering."
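InfraGod's internals are not public, but a provisioning-time rule in that spirit can be sketched as a policy check that blocks untagged resources before they are created. The required tags and the resource shape below are hypothetical, invented for illustration.

```python
# Sketch: a provisioning-time policy check that enforces mandatory
# resource tagging before anything is created. Tag names and the
# resource dict shape are hypothetical, not Wix's implementation.
REQUIRED_TAGS = {"owner", "team", "cost-center"}

def check_tags(resource: dict) -> list[str]:
    """Return policy violations; an empty list means provisioning may proceed."""
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    return [f"missing required tag: {tag}" for tag in sorted(missing)]

violations = check_tags({"type": "aws_instance", "tags": {"owner": "dvir"}})
# Two required tags are absent, so this request would be rejected
# before the instance ever exists on the bill.
```

Running such a check in the provisioning path (for example, as a gate on Terraform plans) is what moves cost governance from reacting to bills to preventing waste.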

Listen to this episode for a deep dive into the tactics that Dvir uses to run Financial Engineering at Wix, such as data collection, engineering post-mortems, monthly reports, and mandatory resource tagging.

Contact Us

Have feedback or need help? Don’t be shy—we’d love to hear from you!

Some Engineering Inc.