40+ Tools that we use to make web development easier

Krzysztof Szromek CEO

We get asked regularly which services we prefer for certain tasks or problems, so we’ve created this list of products that help us at Exlabs deliver better products. You may notice that the vast majority of these services are paid, but we can’t stress enough how much time and money they’ve saved us and our clients.

Hope you find this list useful!

Infrastructure & Engineering

Heroku – Heroku allows us to focus on building apps instead of infrastructure, so we don’t need to spend time on DevOps

AWS – we specifically use Amazon S3 for assets storage and backups, but occasionally we use Amazon EC2 cloud services as well

CloudFlare – helps us speed up sites delivery

DNSimple – how we streamline domain and SSL certificate management

CircleCI – it takes care of continuous integration and testing

NewRelic – makes it easy to keep a close eye on the performance of production applications

Logentries – we use it for applications log storage

DataDog – does pretty much the same as NewRelic, but at a significantly lower price

HoneyBadger – monitors errors in our application

GitHub – all of our code and version history is stored here

BrowserStack – our QA team uses it to test front-end work across various browsers and devices

CodeClimate – keeps the quality of our code top-notch with automated code quality checks

Watchdocs – helps us keep our APIs documentation up to date

Sidekiq – takes care of background jobs

Pusher – helps us deliver real-time data to the browser

Sublime – this is where most of our developers write their code

Atom – alternative choice for code editing

Debugmail – it moves all our test emails from development/staging into one place

Product Management & Design

JIRA – Our Scrum Masters’ kingdom. Although we are not fans of JIRA’s UX, it’s still by far the most comprehensive agile project management tool.

Craft – Craft helps our Product Owners to manage the Product Backlog and vision clearly

Trello – we use Trello a lot, but mostly for information management and checklists. Read more about how using checklists on Trello helped improve the quality of our processes here.

Google Docs – we use Docs mostly in pair with Trello to document processes more thoroughly

UXPin – we use it for mocking the UI and presenting it to the clients

Sketch – this is what we use for web design,

Illustrator – and this for vector editing stuff

Communication

Slack – this is our company headquarters

Zoom – we use Zoom for our weekly team updates

Toggl – helps our team stay in control of our most precious asset – time. Read more about how Toggl helped us with estimates.

Features as a Service

Auth0 – saves us time on building identity management

Stripe – takes care of online payments management

Algolia – search engine as a service

MapBox – whenever we need a map feature in the app we use MapBox

Keen.io – ever tried to build custom reporting? Thanks to Keen.io it’s simpler than ever before

Gengo – we use it for translating websites and apps

Wistia – our choice for video hosting

Twilio – powerful platform that we use mostly for SMS

Tokbox – great service that we use when we need to build video communication into our apps

Api.ai – helps us create smart conversational chatbots

Kloudless – saves us time by letting us integrate many file storage services at once

Backend As a Service

Backand – we use Backand when we need to speed up product development. It gives us an out-of-the-box backend ready for action. Here you can read more about that approach.

Syncano – does pretty much the same as Backand

Firebase – very powerful platform that can save a lot of backend work; we use it mostly for online chats

Email Marketing

Sendgrid – we use it for transactional email delivery

Mailgun – occasionally we use Mailgun instead of Sendgrid

Mailchimp – it’s mostly used by marketing teams, but we often recommend it for campaign emailing

Customer Success

Intercom – fantastic tool for customer support

Drift – messaging tool, we use it mostly for marketing websites

Clearbit – provides a wealth of person and company data that we use to enhance our products

Analytics

Google Analytics – no introduction needed here

Heap – fantastic alternative to Google Analytics

Segment – it helps us merge user data streams from all sources

Mixpanel – great for tracking funnels and product usage

So this is our toolkit. Interested in our values & how we get things done? See the page about us

What about you? What other services are indispensable for your business?

🛡️ 5 Strategies to Improve
Database Security on AWS

Krzysztof Szromek CEO

Safeguarding company data and maintaining a reliable infrastructure are key responsibilities for IT managers. This article presents five strategies to enhance AWS database security, from secrets rotation to adopting serverless computing. Let’s dive into fortifying your AWS database security!

TL;DR 📝

  • Fundamentals first: encryption, access control, backup, logging & monitoring
  • Secure credentials with Secrets Manager
  • Adopt serverless computing for reduced vulnerabilities
  • Detect leaks using honeypots
  • Attend AWS RDS security webinar for more insights


📚 Start with the Fundamentals – Encryption, Access Control, Backup, Logging & Monitoring

Though hardly groundbreaking, the essentials—encryption, access control, backups, logging, and monitoring—are often overlooked. Use robust encryption algorithms such as AES-256 to protect data at rest and in transit. Implement proper access controls so that only authorized users can access the data. Conduct regular backups to protect your data in case of system failures or breaches. Lastly, employ logging and monitoring to detect suspicious activity within your environment and take swift action when needed.
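Much of this can be baked in at provisioning time. As a minimal sketch (AWS SDK for JavaScript v3; the identifiers, region and sizing below are hypothetical placeholders, not a recommendation), creating an RDS instance with encryption at rest and automated backups enabled looks roughly like this:

import { RDSClient, CreateDBInstanceCommand } from "@aws-sdk/client-rds";

const rds = new RDSClient({ region: "eu-west-1" }); // illustrative region

await rds.send(new CreateDBInstanceCommand({
  DBInstanceIdentifier: "orders-db",   // hypothetical instance name
  Engine: "postgres",
  DBInstanceClass: "db.t3.micro",
  AllocatedStorage: 20,
  MasterUsername: "app_admin",
  ManageMasterUserPassword: true,      // let RDS keep the master password in Secrets Manager
  StorageEncrypted: true,              // encryption at rest (AES-256 via KMS)
  BackupRetentionPeriod: 7,            // daily automated backups, kept for a week
}));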

🗝️ Secure RDS Credentials with Secrets Manager


Securing your database credentials with a secrets manager, such as AWS Secrets Manager or a similar alternative, is highly effective. Such a tool securely stores and manages your application credentials, eliminating the need to hard-code them. Instead, store the credentials as secrets and access them securely through AWS. This way, if a resource is compromised, the rest of your data remains safe. You may want to read our CTO’s article, “An Introduction to Secrets Rotation”.
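Retrieving those credentials at runtime then takes only a few lines. A minimal sketch with the AWS SDK for JavaScript v3, assuming a secret named “prod/orders-db” (a hypothetical name) that holds a JSON username/password pair:

import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({ region: "eu-west-1" });

// Fetch the secret instead of reading credentials from source code or env files.
const { SecretString } = await client.send(
  new GetSecretValueCommand({ SecretId: "prod/orders-db" })
);

const { username, password } = JSON.parse(SecretString ?? "{}");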

☁️ Embrace Serverless Computing for Simplified Vulnerability Management


One of the key benefits of serverless computing is that it eliminates the need for infrastructure maintenance. By shifting the responsibility of managing servers and their associated vulnerabilities to the cloud provider, you can focus on writing code and implementing application-level security measures. This approach reduces the attack surface and minimizes potential vulnerabilities that may arise from server misconfigurations or outdated software. As a result, serverless computing enables you to maintain a secure and robust environment while enjoying the benefits of simplified infrastructure management.

👀 Monitor for Credential Leaks Using Honeypots


One effective method to detect leaks is through the use of honeypots. Honeypots are decoy systems or resources designed to attract attackers and gather information about their methods and techniques. By setting up honeypots with fake credentials, you can monitor for unauthorized access attempts and gain valuable insights into potential vulnerabilities in your system. This approach enables you to identify and mitigate security threats proactively, ensuring your actual credentials and sensitive data remain well-protected.
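One way to wire this up on AWS (a sketch under assumptions, not a definitive recipe): create a decoy IAM user whose access keys grant nothing, then alert on any use of them. CloudTrail management events flow into Amazon EventBridge, so a rule can match the decoy identity; the account ID and user name below are hypothetical.

import { EventBridgeClient, PutRuleCommand } from "@aws-sdk/client-eventbridge";

const events = new EventBridgeClient({ region: "eu-west-1" });

// Fire whenever the decoy user's credentials are used anywhere in the account.
await events.send(new PutRuleCommand({
  Name: "honeypot-credential-use",
  State: "ENABLED",
  EventPattern: JSON.stringify({
    "detail-type": ["AWS API Call via CloudTrail"],
    detail: {
      userIdentity: { arn: ["arn:aws:iam::123456789012:user/honeypot-user"] },
    },
  }),
}));
// A follow-up PutTargets call (e.g. pointing at an SNS topic) would deliver the alert.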

📢 Join Our Webinar to Learn More


Don’t miss our upcoming webinar on AWS RDS database security best practices! Gain insights from our expert speakers on critical topics, including encryption, access controls, credential rotation, backups, and monitoring. Discover practical tips for enhancing your database security posture. This informative and engaging session is perfect for AWS RDS newcomers and seasoned pros alike. Register now to secure your spot!

Conclusion


Enhancing your AWS database security necessitates a multifaceted approach that combines Amazon’s built-in security features with your own robust policies. Establishing a strong foundation for security through encryption, access control, backups, logging, and monitoring is vital. Moreover, leveraging tools like secrets managers and serverless computing can further improve database security. Lastly, consistently monitoring potential credential leaks and unauthorized access enables swift mitigation of security threats.

7 Principles of Software
Development Testing

Adela Cirocka QA Engineer

Efficient and cost-effective software development testing is essential in delivering high-quality products to end-users. Adhering to the 7 principles of software development testing can result in improved testing performance and reduced costs. These principles provide guidance to testers and prevent logical mistakes during the testing process.


1. Testing Shows The Presence Of Defects, Not Their Absence

A testing team’s objective is to confirm that products/applications can meet end-user needs and business requirements, not to prove they are defect-free. 💻🔍 In software testing, various testing types and methodologies are deployed to continuously search for and reveal hidden defects.

2. Exhaustive Testing Is Impossible

Testing everything would require unlimited effort and fall outside the project timeline, so specific techniques are prioritized to test the most important functions based on risk assessment. Identifying the crucial functions to test is a key skill for a testing expert. 🧪🕹️

3. Early Testing Saves Time And Money

💻 The testing process should begin at the earliest stage possible to detect defects and errors quickly, thus limiting issues found in later stages and saving money. 💼 Generally speaking, testing should start once the requirements have been defined.

4. Defects Cluster Together

During software testing, it is often observed that the majority of defects are related to a small number of modules. This is commonly referred to as the Pareto Principle where 80% of the defects can be found in 20% of the modules being tested. 🔎️ By focusing on these critical areas, testers can efficiently locate and resolve defects. 😎💼

5. Beware Of The Pesticide Paradox

This principle highlights that repetitive testing can become ineffective at finding new defects. To combat this, testers must constantly revise their test cases. To maintain maximum efficiency, experienced testing teams vary their techniques and approach, introducing new modifications and scenarios. 🔍🔧

6. Testing Is Context Dependent

Testing depends heavily on context and the approach must be tailored to the subject being tested. For instance, an application for the cruise industry will differ vastly from one for the insurance industry. 💻🛳️💼

7. Absence-Of-Errors Is A Fallacy

Software that has been tested and found to be 99% bug-free can still be unusable. This often happens when the system is tested against the wrong requirements: finding and fixing errors is of little use if the system does not fulfill the needs of the end user. 🔎🧪

Based on the 7 Principles of Software Testing by the International Software Testing Qualifications Board (ISTQB).

AWS Security Assessment

You might have set up your AWS account a while ago and never looked back, or maybe your development team manages it. The question remains – is your AWS secure? What are the chances of a breach?

Complete a survey to grade
your AWS account security level.

After completing the survey, you will receive your security score and will be eligible for a 30-minute consultation to review your results.

Exlabs Terraform Module:
ecs-secrets-manager

Enable secure access to secret values for Docker containers run on AWS ECS.

Module available in...

It uses the AWS Secrets Manager service to store, retrieve and rotate secrets.
No secret is hardcoded in the container definition; AWS injects them at container startup.
You can restrict and audit access to Secrets Manager – no secret will be accessed without your permission and knowledge.
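The module itself is written in Terraform, but the wiring it produces is easy to picture. Here is a rough equivalent in AWS CDK (TypeScript) – not the module’s code, just an illustration of the pattern, with hypothetical names:

import { Stack } from "aws-cdk-lib";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as secretsmanager from "aws-cdk-lib/aws-secretsmanager";

// Inside a CDK stack; "DbSecret", "AppTask" and the secret name are made up.
declare const stack: Stack;

const dbSecret = secretsmanager.Secret.fromSecretNameV2(stack, "DbSecret", "prod/orders-db");

const task = new ecs.FargateTaskDefinition(stack, "AppTask");
task.addContainer("app", {
  image: ecs.ContainerImage.fromRegistry("my-org/my-app:latest"),
  // Nothing is hardcoded here: ECS resolves the value from Secrets Manager
  // at startup and injects it as the DB_PASSWORD environment variable.
  secrets: { DB_PASSWORD: ecs.Secret.fromSecretsManager(dbSecret) },
  memoryLimitMiB: 512,
});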

Best practices on keeping your codebase clean

Krzysztof Szromek CEO
Productivity over time for bad quality code
 

Writing bad code is easy. You just make things work and move on to the next task, not worrying about code quality. What you don’t realize is that it will become unmanageable very quickly — to others at first, and eventually to you as well. That’s a good reason to try to write good code.

What is good code?

It’s hard to define what good code is, but here’s a definition from Michael Feathers that captures it pretty well.

Clean code always looks like it was written by someone who cares. There is nothing obvious that you can do to make it better. All of those things were thought about by the code’s author, and if you try to imagine improvements, you are led back to where you are, sitting in appreciation of the code someone left for you — code written by someone who cared deeply about the craft.

So how do we write good code?

Well, we start with a test.

It’s a waste of time to write tests for everything, I’ll just add integration tests afterwards — you may say. But a test is the basis for ensuring that you didn’t break your code during refactoring.

Refactoring? I know what code I want to write, why would I refactor it afterwards? — writing great code upfront is hard and not really efficient. Would you rather spend an hour developing a great module just to find out, after seeing it in action, that it’s not such a great concept after all — or make it work in 15 minutes and then spend some more time refactoring it if it does prove itself useful? Not to mention that small improvements to existing code are easy, especially when a test assures us that the code still works properly.
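To make the red-green-refactor loop concrete, here is a tiny TypeScript sketch (the function and test are invented for illustration): write the failing assertion first, then the simplest code that passes, then refactor while the test keeps you honest.

import assert from "node:assert";

interface Cell {
  flagged: boolean;
}

// Step 1 (red): this assertion is written before flaggedCells() exists.
// Step 2 (green): the simplest implementation that makes it pass.
// Step 3 (refactor): improve the code, re-running the test after every change.
function flaggedCells(board: Cell[]): Cell[] {
  return board.filter((cell) => cell.flagged);
}

const board = [{ flagged: true }, { flagged: false }];
assert.deepStrictEqual(flaggedCells(board), [{ flagged: true }]);
console.log("test passed");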

TDD cycle

 

Meaningful naming

After reading a function or variable name, we should know exactly what it is responsible for. Consider the following example:

def getThem
  list1 = []
  theList.each do |x|
    if x[0] == 4
      list1 << x
    end
  end
  list1
end

We can see that the function returns some filtered list, but what kind of list it operates on and what the filter condition is are not clear at all. Now let’s look at the same function with meaningful names.

def getFlaggedCells
  flaggedCells = []
  gameBoard.each do |cell|
    if cell.isFlagged()
      flaggedCells << cell
    end
  end
  flaggedCells
end

Besides using meaningful names, we extracted logic that was not this function’s main concern — checking whether a cell is flagged — into the cell object.

Another thing to keep in mind is to avoid shortcuts and to use pronounceable names. genTmstmp is a few characters shorter than generateTimestamp, but for someone who has to skim through a bunch of code that looks like this, it makes things hard.

Functions

Functions should do one thing. They should do it well. They should do it only.

The rule actually means that a function should only do those things that are one level of abstraction below the stated name of the function — i.e., it should not include calls to functions at other levels of abstraction, so essential, high-level concepts are not mixed with lower-level details.

Another way to know that a function is “doing more than one thing” is if you can extract another function from it with a name that is meaningful.

The “they should do it only” part means that not only should functions do one thing, they should also have no side effects, as these are really easy to overlook and can be hard to debug.
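A classic illustration of a hidden side effect (adapted from an example in Clean Code; the hash function here is a made-up stand-in): the name promises a check, but the function quietly mutates state on the side.

let sessionInitialized = false;

// Stand-in for a real hashing routine.
function hash(s: string): string {
  return s.split("").reverse().join("");
}

// Bad: the name promises a check, but a session gets created as a side effect.
function checkPassword(storedHash: string, password: string): boolean {
  if (storedHash === hash(password)) {
    sessionInitialized = true; // surprise! buried inside a "check"
    return true;
  }
  return false;
}

// Better: do the one thing the name promises; let the caller start the session.
function isPasswordValid(storedHash: string, password: string): boolean {
  return storedHash === hash(password);
}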

Comments

The best code has none. The goal is to explain yourself in code. Code never goes stale, whereas comments can get out of date or misplaced — and the compiler won’t complain. If you feel you need to add a comment, you most probably should improve the code instead.

Formatting

Formatting guidelines come and go, but there’s one rule that is timeless and ensures that the codebase you work on stays clean and readable:

Every programmer has his own favorite formatting rules, but if he works in a team, then the team rules.

Exceptions handling

If you’re afraid of throwing exceptions and prefer to return error codes instead, think twice.

if (deletePage(page) == E_OK) {
  if (registry.deleteReference(page.name) == E_OK) {
    if (configKeys.deleteKey(page.name.makeKey()) == E_OK){
      logger.log("page deleted");
    } else {
      logger.log("configKey not deleted");
    }
  } else {
    logger.log("deleteReference from registry failed");
  }
} else {
  logger.log("delete failed");
  return E_ERROR;
}

Notice the level of indentation we reached here just to handle the returned error codes?
The exception-based alternative says it all.

try {
  deletePage(page);
  registry.deleteReference(page.name);
  configKeys.deleteKey(page.name.makeKey());
}
catch (Exception e) {
  logger.log(e.getMessage());
}

Tests

No matter what type of tests you prefer (hopefully it’s not integration tests), keep in mind the F.I.R.S.T. rule set:

  • Fast — you won’t run tests that run for 5 minutes frequently enough to catch errors on time.
  • Isolated — each test should have a single reason to fail.
  • Repeatable — you should obtain the same results every time you run a test.
  • Self-validating — you should not need to analyze the test result to know whether it has passed or not. PASS or FAIL should be the result.
  • Thorough and Timely — keep ’em up to date.

Summary + Boy scout rule

This article is kind of a list of good rules, and you may feel that some of them are overkill or just useless. I thought so about some of them too, until I read the Clean Code book by Robert C. Martin, known as Uncle Bob, and I recommend you do the same.

There’s just one more rule I think is worth mentioning, the Boy Scout Rule, and it says:

Always leave the campground cleaner than you found it.

All it means is that you should always try to improve at least a small piece of code whenever you change something; small incremental changes to the codebase lead to better code.

I hope your code will be at least a little better from now on 😉

References

Martin, Robert C. Clean Code: A Handbook of Agile Software Craftsmanship. https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882

Protect your data from Cross-Site Scripting Attacks
with Content Security Policy

Kamil Kusy Frontend Developer

Neglecting to implement a Content Security Policy (CSP) for your application can have unfortunate consequences for your business. Without a CSP in place, your application may be vulnerable to cross-site scripting (XSS) attacks, which can lead to a range of negative outcomes.

For instance, sensitive user data, such as login credentials and financial information, may be stolen and lead to data breaches. Additionally, attackers could use XSS to inject malware into your website which could then be spread to other users. This can cause damage to users’ computers and further security incidents, resulting in loss of customer trust and potential legal or regulatory issues.

Even though implementing a Content Security Policy (CSP) is a relatively simple process, we may run into some difficulties. Recently, when updating the CSP in one of our projects, I encountered an issue that required additional research and a specific solution.

After adding the directive:

script-src 'self'

[script-src is one of the main directives in the Content Security Policy (CSP); it specifies which script sources the browser is allowed to load. It is used to restrict the scripts that can be executed on a website, which helps prevent cross-site scripting (XSS) attacks. In this case, the value 'self' allows scripts from the same origin as the website to execute, while blocking any external scripts from being loaded.]

The following message greeted me in the console:

After thorough research, it turned out that one of the libraries we use in the project applies inline scripts under the hood, leaving us with two potential solutions. The first option was to use the 'unsafe-inline' keyword; however, this would disable the most vital aspect of the CSP and leave the application vulnerable to outside script injection. Therefore, the second option — adding a hash or nonce to the applied inline script — was chosen, as it is faster, simpler, and more effective for our specific use case.

How to:

As we can see in the console output, we already have the hash of our script, obtained using the SHA-256 algorithm (Secure Hash Algorithm, 256-bit). How does it work? The script is converted into a unique string of characters that cannot be reversed — given the output (the hash), you cannot recover the input. Here you can see what the hashing process looks like step by step.
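If you want to compute such a hash yourself, Node’s built-in crypto module is enough; the inline script below is a hypothetical example.

import { createHash } from "node:crypto";

// The hash covers the exact text between the <script> tags –
// every character, including whitespace, changes the result.
const inlineScript = `console.log("hello");`;

const digest = createHash("sha256").update(inlineScript).digest("base64");
console.log(`script-src 'sha256-${digest}'`);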

The received hash must then be added to the CSP, remembering to enclose it within single quotes. The result would look like this:

script-src 'sha256-YGDvU5q+cB+Qm/hzBAtqUGTRsHo19RnXbGLHUr5Gk/o='

That’s all. 😊

It is important to keep in mind that any change made to the script will change the hash, requiring the CSP rule to be updated accordingly. Whitespace counts as well. If the script is subject to frequent changes, it may be more suitable to use a CSP nonce. What is that?

A nonce (number used once) is a unique, random value that is generated for each request. It is typically generated on the server side and included in the Content-Security-Policy header of the response (just like the hash used above). It is generated anew for every request that your web server receives for a specific document, and unlike a hash, it is not dependent on the script code.
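As a sketch (assuming a Node.js/Express backend; the route and port are arbitrary), generating a fresh nonce per request and advertising it in the CSP header looks roughly like this:

import crypto from "node:crypto";
import express from "express";

const app = express();

// Generate a fresh nonce per request and advertise it in the CSP header.
app.use((req, res, next) => {
  const nonce = crypto.randomBytes(16).toString("base64");
  res.locals.nonce = nonce;
  res.setHeader("Content-Security-Policy", `script-src 'nonce-${nonce}'`);
  next();
});

// Any inline script rendered with the matching attribute is allowed:
// <script nonce="..."> ... </script>
app.get("/", (_req, res) => {
  res.send(`<script nonce="${res.locals.nonce}">console.log("ok");</script>`);
});

app.listen(3000);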

In conclusion, it is essential to protect your application and brand reputation by implementing a CSP, such as the script-src directive. Especially since so little can prevent so many potential threats.

Build or buy software dilemma

Krzysztof Szromek CEO

Fine, I get it. Your bias alarm has just gone nuts. I can’t blame you — after all, our bread and butter is shipping better software products faster. If you could, however, turn a blind eye for a second, I promise to answer interesting and useful questions that will help you make informed decisions, not just push a certain view.

While a sea of ink has been spilled proving that custom software is the way to go, emphasizing its flexibility, it’s certainly not a zero-sum game. “Buy” is rarely just a buy. The vast majority of the time it means “integrate”. And since the same applies to “build”, in the end both require similarly skilled engineering talent. This is the very source of the dilemma itself.

Does your business currently have (or can you build a team of) product experts capable of building, maintaining, and supporting the solution?

If the answer is no (and it really is OK for it to be so), that may limit your options to just one at the start, and not reading any further may not be too bad a choice. But what are the sacrifices?

The good, the bad and the fatal

I promised not to enter the argument of rigidity vs flexibility. The reality is that buying an off-the-shelf package can often make sense where your business follows well-established industry practices and innovation isn’t part of your game plan. Unfortunately, that’s not where most businesses are today.

In practical terms, buying a ready-made package often makes businesses passive and less resilient. A glance at the first page of Google suggests that one of the major perceived disadvantages of building your own solution is the necessity to understand your processes first (instead of just pulling the trigger). My years of experience building products that succeed (and closely observing those that do not) prove the very opposite. Sure, it is absolutely necessary to nail the problem you are trying to solve before you even get it into end users’ hands, but this is the very advantage.

How complex does your solution really need to be to fit your business size and needs? The vast majority of the time it’s much simpler than you realise. Once you get past the bridge of accepting a longer wait time and higher upfront cost (which is undoubtedly true a lot of the time), you no longer think about essentials but about all the nice-to-haves the ready-made package didn’t allow you to have — and those usually account for the majority of the development. That doesn’t make it a fair game, and you’re not comparing like for like. Building your own solution does not justify skipping the MVP stage.

You are not your customer

Most of the time, you are not even the end user. The majority of businesses begin product development by making the same fatal mistake: they try to build something that their customers want, assuming there is one right answer to what their customers actually want.

Reality is a great deal messier and will continually surprise you.

Cindy Alvarez, the author of the bestseller Lean Customer Development, writes: “Customer development starts with a shift in mindset. Instead of assuming that your ideas and intuitions are correct and embarking on product development, you will be actively trying to poke holes in your ideas, to prove yourself wrong, and to invalidate your hypotheses”.

Seemingly, just buying a solution takes care of all these problems. And indeed, if the software product you’re after is about a do-it-and-forget-it automation, the answer is just buy.

One of the core principles of product development is not to reinvent the wheel (Silicon Valley forgets this way too often). Experience tells me, however, that attempts to shorten the validation cycle end with a big chunk of the product budget being spent on unnecessary pivots born out of desperation to protect the initial investment.

Do both

The typical cycle I’ve observed: an off-the-shelf system is usually fast and cost-effective to start with, but most businesses eventually find that the lack of customisation relative to their day-to-day operations leads to inefficient, manual processes. So you’re back where you started.

Even if the decision to make a slow start is made for wrong reasons, the choice is right and still stands. Unless what you’re building is so innovative it has never been even remotely attempted before (in that case we should get in touch!), always seek ways to achieve 80% of the result with 20% of the budget (and off the shelf systems are usually great at that, even if just for getting a prototype up and running fast). That way you’ll be left with most of your cash available for reinvestment into a product that will actually help you truly innovate and excel beyond your peers.

Introduction to CIS Benchmark
What is it? Why should you use it

Peter Kolka

Adoption of the public cloud has been growing almost exponentially. While it is true that vendors like AWS spend obscene amounts of money on securing the cloud infrastructure they provide, the shared responsibility model exists and will continue to exist for a reason. Unfortunately, the growth in adoption and the plethora of available solutions do not go hand in hand with awareness of risks and best practices within engineering teams. But there is some order to this chaos.

CIS Benchmark – what is it?

Let’s start with a bit of background. The Center for Internet Security (CIS) is a well-established non-profit organisation that has developed its own CIS Standards and CIS Benchmarks for all types of IT systems, including the public cloud and specifically AWS. Their programme provides neat, unbiased and, most importantly, consensus-based industry best practices to help businesses of pretty much any size assess and improve their security, and is by now considered an undisputed global standard.

The benchmarks themselves are fairly simple too – they’re configuration baselines for securely configuring a system. They allow businesses to quantify their security posture and provide a way to demonstrate their level of compliance with particular security frameworks, including the NIST Cybersecurity Framework (CSF) and the ISO 27000 series of standards, PCI DSS, HIPAA, and others. Whilst passing the benchmarks does not mean an automatic accreditation for the above standards, adhering to the principles of CIS is very likely to make the certification processes themselves much easier, providing tangible evidence of the steps taken.

Why should I use the CIS Benchmark standard for my AWS cloud setup?

 

Security professionals have been using CIS templates and hardening guides for some time now. The CIS Benchmark is a great baseline standard for AWS and continuously evolves with the help of the CIS SecureSuite members and Consensus Community. By using its benchmarks, scoring methods and guidelines for your own business, you will also be helping safeguard the wider community against cyber threats.

But while the greater good is all-important (the only long-term solution), let’s get onto some practical (and valid!) motivations that may be driving you right now:

  • you will be able to establish where you stand without disruptive changes (or expensive recruitments) to the current state of affairs – get the data first
  • once you know your position, you’ll have a much clearer path towards the desired outcome as well as the required cultural shift to make those checks and validations systematically
  • everything is represented in the most non-geek language possible, allowing for efficient buy-in from all stakeholders

Is it difficult to get started?

CIS benchmarks provide two levels of security settings:

  • Level 1 recommends essential basic security requirements that can be configured on any system and should cause little or no interruption of service or reduced functionality.
  • Level 2 recommends security settings for environments requiring greater security that could result in some reduced functionality.

CIS Level 2
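One low-friction way to get started is to let AWS run the checks for you: AWS Security Hub can continuously evaluate an account against the CIS AWS Foundations Benchmark. A minimal sketch with the AWS SDK for JavaScript v3, assuming Security Hub is already enabled in the account (the ruleset ARN below is the v1.2.0 one; newer benchmark versions use different ARNs):

import {
  SecurityHubClient,
  BatchEnableStandardsCommand,
} from "@aws-sdk/client-securityhub";

const securityHub = new SecurityHubClient({ region: "eu-west-1" });

// Subscribe the account to the CIS AWS Foundations Benchmark checks.
await securityHub.send(new BatchEnableStandardsCommand({
  StandardsSubscriptionRequests: [
    { StandardsArn: "arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0" },
  ],
}));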

Go beyond CIS

Given your interest in CIS benchmarks, you’re likely in the process of establishing a security baseline for your business, perhaps for the first time.

If something’s too good to be true, it usually is, and it’s hard to deny that CIS can at times feel fairly generic, with a ‘one size fits all’ spirit that omits recommendations applicable to newer systems (a downside of the consensus mechanism).

So while CIS is certainly an excellent foundation, no standard is perfect. A number of key elements of infrastructure cybersecurity are not in the scope of the AWS benchmark, including areas such as ensuring no secrets are stored in Lambda function environment variables or that RDS database storage is encrypted — so certainly not edge cases either.

For a broad and comprehensive security review of your technical estate that incorporates CIS benchmarks but is better tailored to modern infrastructure patterns (like serverless computing), consider solutions such as the Exlabs Cloud Security Audit, where the recommendations provided in the completion report are structured to facilitate remediation of the identified security risks.

Such reports won’t just explain what needs to be done to harden the systems; they will also explain why. Add to that things like numeric scores, and the end package is the most jargon-free, simple-to-understand means of communication with colleagues who have limited knowledge of and experience with information security, yet whose buy-in is essential for the journey to be successful.

From a practical standpoint, to help ensure the process is not a rubber-stamping exercise whose outcome lands in a drawer alongside a whole series of noble intentions, every security recommendation is broken down into several sections that explain why and how it should be implemented:

  1. A description that provides a high-level overview of the recommendation.
  2. A rationale that clarifies why it is important to implement the recommendation.
  3. A report that helps you evaluate and understand the impact of implementing a recommendation.
  4. The audit also identifies how to prove a recommendation has been implemented.
  5. Finally, the remediation will discuss steps to implement the recommendation.

In terms of added benefits, the report document can also serve as a formal letter of attestation for the cloud security evaluation of your product or the business. A comprehensive audit like this will not only increase the peace of mind of any technical leader but also demonstrate to any stakeholder that responsible actions have been taken to mitigate the risks, making that client or investor meeting much easier.

Protecting Web Application:
Cloudflare vs AWS WAF

Mateusz Wilczyński CTO

When we hear the term Firewall, we often think of the standard network firewall. This kind of firewall offers OSI Layer 3 and Layer 4 protection, which consists of checking the traffic’s source and destination IP addresses, protocol, and source and destination ports. These are often called first- and second-generation firewalls.

While the network firewall can still be very useful, it’s not everything that firewall technology has to offer. In this article, we will focus on the third generation: the Application Firewall.

Do you need a Web Application Firewall?

The key differentiator of an Application Firewall is that it works on OSI Layer 7, which means it can understand certain application protocols like FTP, DNS and HTTP – the last being the most useful from a web application perspective.

Understanding HTTP, in conjunction with other technologies like Deep Packet Inspection, means that a well-configured Web Application Firewall can help you protect your application against the most common web application attacks, including the OWASP Top Ten. A very simple configuration can protect you against file inclusion, cross-site scripting, SQL injection and much more. Of course, a Web Application Firewall is not a silver bullet and it can’t mitigate all attack vectors, but it can definitely be very helpful.

Let’s answer the question from the section title: “Do you need a Web Application Firewall?”. The answer can be: “It depends”.

Suppose your application is written in a truly secure way, you are sure that there are no security gaps in the application or the infrastructure now or in the future, and no Common Vulnerabilities and Exposures have been or will ever be reported for your frameworks and libraries. In that case, the answer can be “no”.

As you might suspect, it’s impossible to meet the above circumstances in the real world. Because of that, the real-world answer is: “yes – you need a Web Application Firewall”. 

A Web Application Firewall can also be very useful in a microservices architecture. With a Firewall as a first line of defence verifying all HTTP traffic, if the same security issue is detected in many microservices, a small team can quickly implement the mitigation directly on the Firewall. Then, over time, the fix can be implemented in each microservice according to the given team’s roadmap.

Cloudflare WAF

Cloudflare is a company that gained a lot of traction a few years ago with its DDoS Protection product. It currently offers a wide range of products, and the Web Application Firewall is one of them.

Cloudflare WAF offers easy integration with your current infrastructure, with one notable limitation: the setup requires using Cloudflare DNS. Beyond that, there is no need to change existing infrastructure or sacrifice performance. Cloudflare needs control over your DNS records, so if you are using another tool such as AWS Route 53 for DNS, Cloudflare is not an option.

Cloudflare Web Application Firewall
Source: https://www.linkedin.com/products/cloudflare-waf/

Cloudflare uses a crowd-sourced engine fed by all of its clients to learn about attacks and help create rules automatically for you. A certain amount of customization is also possible. To be more specific, the Cloudflare WAF contains the following rulesets:

  • Cloudflare Managed Ruleset
  • OWASP ModSecurity Core Ruleset
  • Custom Firewall Rules

This solution offers quite an easy-to-use dashboard to visualize and analyze threats with Firewall Analytics, helping you tailor your security configuration. The Simulate feature is very useful when adding a new Custom Firewall Rule, as it allows you to preview the rule’s results and reduce false positives.

The Cloudflare WAF is quite a cheap solution. There’s no option to buy just the WAF, but the whole Cloudflare Pro package – the cheapest one containing the WAF – costs $20 per month.

Adding Cloudflare to an existing project seems very easy at first glance, but be ready for some unexpected issues. We’ve encountered problems with handling external API webhooks and with a WYSIWYG editor; both were resolved by adding custom rules. After introducing such a product, a proper Quality Assurance pass is essential to keep the application working as expected.

AWS WAF

Amazon Web Services is a hyperscale cloud computing platform. It’s the industry leader, with the most comprehensive and enterprise-ready service offering. It provides over 200 services, which can be overwhelming at the beginning, but in this article we will focus only on the services connected with Web Application Firewall functionality.

Amazon Web Application Firewall

Source: https://aws.amazon.com/waf/ 

The main service is AWS WAF, which offers protection at OSI Layer 7 (the Application Layer) for your business logic.

Other useful services are:

  • AWS Shield – managed DDoS protection.
  • AWS Network Firewall – protection at the OSI Layer 3 and 4 (Network/Transport Layer) for your VPCs.
  • AWS Firewall Manager – centrally configure and manage security rules across accounts and applications. 

If you are already hosting your application on AWS, the setup of AWS WAF is frictionless – you can deploy it without changing your existing infrastructure. It can be attached directly to several AWS services, including an Application Load Balancer, a CloudFront distribution, or Amazon API Gateway. No changes to DNS records are required.

AWS WAF also has low operational overhead. It offers ready-to-use, built-in managed rules as well as a wide selection of rulesets from the AWS Marketplace. AWS Managed Rules offer functionality similar to Cloudflare’s managed rules, including protection against the OWASP Top 10 and other common attack vectors and threats. The rule engine also offers an advanced customisation mechanism.
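As an illustration (a minimal sketch with the AWS SDK for JavaScript v3; the ACL and metric names are hypothetical), attaching the AWS-managed common ruleset to a new web ACL looks roughly like this:

import { WAFV2Client, CreateWebACLCommand } from "@aws-sdk/client-wafv2";

const waf = new WAFV2Client({ region: "us-east-1" }); // CLOUDFRONT scope requires us-east-1

await waf.send(new CreateWebACLCommand({
  Name: "web-acl",                        // hypothetical name
  Scope: "CLOUDFRONT",                    // use "REGIONAL" for an ALB or API Gateway
  DefaultAction: { Allow: {} },
  Rules: [{
    Name: "aws-common-rules",
    Priority: 0,
    // AWS-managed protection against common web attack vectors.
    Statement: {
      ManagedRuleGroupStatement: {
        VendorName: "AWS",
        Name: "AWSManagedRulesCommonRuleSet",
      },
    },
    OverrideAction: { None: {} },
    VisibilityConfig: {
      SampledRequestsEnabled: true,
      CloudWatchMetricsEnabled: true,     // per-rule metrics land in CloudWatch
      MetricName: "aws-common-rules",
    },
  }],
  VisibilityConfig: {
    SampledRequestsEnabled: true,
    CloudWatchMetricsEnabled: true,
    MetricName: "web-acl",
  },
}));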

AWS WAF metrics are sent to the CloudWatch service, where you can set up dashboards and alarms according to your needs. Rule insights are available directly in the WAF dashboard, where you can inspect sampled requests with single-millisecond latency.

The pricing for AWS WAF is usage-based, as is often the case with AWS. It’s based on AWS WAF web ACL capacity units (WCU) – a dimension used to calculate and control the operating resources needed to process your rules within a web ACL. As a rule of thumb, we can assume AWS WAF is more expensive than the Cloudflare solution. For small-scale deployments, we are seeing a running monthly cost of about $30. There are also far more expensive setups available – for example, the feature-rich Shield Advanced costs a fixed $3,000 per month.

AWS WAF offers great integration with other AWS services, but when adding it to an existing application, be prepared for some unexpected issues. We’ve encountered problems with the E2E testing tool Cypress.

Final thoughts

In the past, there was a significant difference between the Cloudflare WAF and AWS WAF. Cloudflare was a more automated, easy-to-set-up product, while the AWS solution was a more customizable and advanced tool. The two products were aimed at different markets: Cloudflare was more consumer-grade and mass-market, providing peace of mind by hiding the complexities in the background, while AWS was a more powerful toolset for people who like to get their hands dirty and know the details.

Right now this distinction is not so clear anymore. Cloudflare has introduced more advanced customizations; AWS has added managed rules and made the product easier to set up. Be aware, though, that the old distinction can still be visible in some areas of each solution.

If you have your infrastructure on AWS, choosing AWS WAF means easier setup, procurement, access and payment management. Cloudflare is a third-party tool that you have to manage separately. On the other hand, Cloudflare can be beneficial when using other cloud providers, on-premises or multi-cloud solutions.

Whatever you choose, both will help you with the first line of defence against SQL injection attacks, cross-site scripting, CSRF and basic IP blocking. They are definitely worth using.