⏩ Writing cleaner Jest tests 🚀

Wojciech Mikusek Tech Lead

Unit and integration tests are an essential part of a software developer's daily job. Sometimes it may take more time to write a proper test suite than to actually implement the functionality. That's why writing well-organised and readable tests is one of the most important skills every developer should acquire. In this article, I will share some tips that will help you write clearer tests.

Jest is the dominant JavaScript testing framework, with over 22 million weekly downloads from NPM in late 2023. It provides a wide variety of tools for writing good test cases; however, it is not focused on the structure of larger test suites. There are some alternative libraries, but they are far less popular, as shown in the picture below.

Source: https://npmtrends.com/jasmine-vs-jest-vs-mocha-vs-vitest

I have a Ruby on Rails background, where RSpec is the go-to test framework. It describes itself as "Behaviour Driven Development for Ruby", "making TDD productive and fun" (source: https://rspec.info/). Let's see how we can introduce some of these BDD and other good practices into Jest-written tests.

 

1. Use nested describe blocks instead of specifying context within the test message


In many projects that use Jest, test suites tend to look like this:
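The snippet below is a minimal, hypothetical sketch of that style; the finalPrice helper is invented purely for illustration.

// price.test.ts – a hypothetical helper, defined inline for illustration
const finalPrice = (base: number, promoActive: boolean): number =>
  promoActive ? base * 0.9 : base;

test('returns the discounted price when the user has an active promo code', () => {
  expect(finalPrice(100, true)).toBe(90);
});

test('returns the full price when the user has no active promo code', () => {
  expect(finalPrice(100, false)).toBe(100);
});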

As a real-world example of this pattern, we could take a look at TensorFlow.

With simple unit tests this approach could be fine, but as the number of variants grows, the test suite starts to become less readable. During code review or when implementing new requirements, the developer may struggle to check if all the edge cases are thoroughly covered.  

A simple solution is to move the context outside of the test message. This approach originates from the Behavior-Driven Development (BDD) process, which organises tests into specifications based on behaviour rather than on simple inputs. It is used by many test frameworks, such as RSpec.

RSpec provides an additional keyword, "context", which is used instead of describe to group tests of one functionality under the same state. In Jest we can simply stick with describe, but optionally we could add such a global using the Jest Plugin Context package.

 

Assume we have a method that checks whether the user's age is 18 or above (isAdult). A typical Jest test suite would look like this:
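A minimal sketch (the import path and the exact implementation of isAdult are assumptions):

// isAdult.test.ts
import { isAdult } from './isAdult'; // hypothetical module path

describe('isAdult', () => {
  test('returns true when the age is 18 or above', () => {
    expect(isAdult(21)).toBe(true);
  });

  test('returns false when the age is below 18', () => {
    expect(isAdult(16)).toBe(false);
  });
});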

OK, but what happens if the age of majority depends on where you are? We need to add new test cases and adjust the test messages:
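Sticking with the sketch and assuming a hypothetical isAdult(age, countryCode) signature, the flat style forces all of the context into the test messages:

describe('isAdult', () => {
  test('returns true when the age is 19 and the age of majority in the country is 18', () => {
    expect(isAdult(19, 'PL')).toBe(true);
  });

  test('returns false when the age is 19 and the age of majority in the country is 21', () => {
    // 'ZZ' is a made-up country code where the age of majority is assumed to be 21
    expect(isAdult(19, 'ZZ')).toBe(false);
  });

  test('returns false when the age is below 18 regardless of the country', () => {
    expect(isAdult(16, 'PL')).toBe(false);
  });
});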

 

Now our test messages start to become quite long and not so easy to read. In business reality we may have several more cases to handle in such a method, like expiry, uniqueness checks, or other business validations. Such a test suite structure is not maintainable in the long run, or it will result in messages that do not explain what is really happening in the test. Luckily, we can refactor this:
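One possible refactor with nested describe blocks, where each block adds a single piece of context:

describe('isAdult', () => {
  describe('when the age of majority in the country is 18', () => {
    const country = 'PL';

    describe('and the user is 18 or older', () => {
      it('returns true', () => {
        expect(isAdult(19, country)).toBe(true);
      });
    });

    describe('and the user is younger than 18', () => {
      it('returns false', () => {
        expect(isAdult(16, country)).toBe(false);
      });
    });
  });

  describe('when the age of majority in the country is 21', () => {
    const country = 'ZZ'; // hypothetical country code used only for illustration

    describe('and the user is younger than 21', () => {
      it('returns false', () => {
        expect(isAdult(19, country)).toBe(false);
      });
    });
  });
});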


 

2. Use beforeEach blocks to set up prerequisites from the describe block

 

In the classic Jest test suite structure, beforeEach blocks are used to set up prerequisites for running tests, such as preparing the database. However, they are not used as much as they could be. Moreover, the developer can leverage the BDD structure described above to combine describe and beforeEach blocks. Let's imagine a scenario:

 

  • The user wants to post a job offer  

  • The job offer has some validations, e.g. it needs to provide a salary range and a proper description

  • Only a specific type of user can post a role – this validation is handled by a service which is mocked in the test

 

Here is how this test would look in the classic Jest structure:
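A sketch of the flat version; the module paths, error messages, and the mocked permissions service are all hypothetical:

// createJobOffer.test.ts
import { createJobOffer } from './createJobOffer';          // hypothetical module
import { permissionsService } from './permissionsService';  // hypothetical mocked service

jest.mock('./permissionsService');
const canPostJobs = jest.mocked(permissionsService.canPostJobs);

test('creates the offer when the user can post job offers and the data is valid', async () => {
  canPostJobs.mockResolvedValue(true);
  const jobOffer = { description: 'Senior TypeScript developer needed', salaryRange: '10k-15k' };

  await expect(createJobOffer(jobOffer, { id: 1 })).resolves.toBeDefined();
});

test('throws a validation error when the user can post job offers but the salary range is missing', async () => {
  canPostJobs.mockResolvedValue(true);
  const jobOffer = { description: 'Senior TypeScript developer needed', salaryRange: undefined };

  await expect(createJobOffer(jobOffer, { id: 1 })).rejects.toThrow('Salary range is required');
});

test('throws an authorization error when the user cannot post job offers', async () => {
  canPostJobs.mockResolvedValue(false);
  const jobOffer = { description: 'Senior TypeScript developer needed', salaryRange: '10k-15k' };

  await expect(createJobOffer(jobOffer, { id: 1 })).rejects.toThrow('User cannot post job offers');
});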

As we can see, this code has a lot of duplication, and with more attributes to validate during job offer creation, it would become difficult to read. Let's refactor it using the patterns described above:
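The same suite restructured with nested describe and beforeEach blocks, reusing the hypothetical names from the sketch above:

describe('createJobOffer', () => {
  const user = { id: 1 };

  describe('when the user can post job offers', () => {
    beforeEach(() => {
      canPostJobs.mockResolvedValue(true);
    });

    describe('and the job offer data is valid', () => {
      const jobOffer = { description: 'Senior TypeScript developer needed', salaryRange: '10k-15k' };

      it('creates the offer', async () => {
        await expect(createJobOffer(jobOffer, user)).resolves.toBeDefined();
      });
    });

    describe('and the salary range is missing', () => {
      const jobOffer = { description: 'Senior TypeScript developer needed', salaryRange: undefined };

      it('throws a validation error', async () => {
        await expect(createJobOffer(jobOffer, user)).rejects.toThrow('Salary range is required');
      });
    });
  });

  describe('when the user cannot post job offers', () => {
    beforeEach(() => {
      canPostJobs.mockResolvedValue(false);
    });

    it('throws an authorization error', async () => {
      const jobOffer = { description: 'Senior TypeScript developer needed', salaryRange: '10k-15k' };
      await expect(createJobOffer(jobOffer, user)).rejects.toThrow('User cannot post job offers');
    });
  });
});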

As we can see, the beforeEach blocks and constants are linked to what is described in the describe block. This increases test readability and makes the suite easier to maintain.

 
 

3. Use default params and override only the changing attributes


In the test example above we still have one problem – for each test we define a new jobOffer object. With just 2 attributes this is not an issue, but we can easily imagine many more fields, which would result in an interface like this:
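For example, an interface along these lines (fields invented for illustration):

interface JobOffer {
  title: string;
  description: string;
  salaryRange: string;
  location: string;
  remote: boolean;
  seniority: 'junior' | 'mid' | 'senior';
  benefits: string[];
  expiresAt: Date;
}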

With such a structure, copying the object and changing one attribute for each test would be highly unreadable, as it is difficult to spot which attribute differs, and the tests get longer. The solution is to have a default constant fulfilling the interface and override only the attribute the test refers to. Let's see an example:
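A sketch of the default-plus-override pattern, reusing the hypothetical JobOffer interface and createJobOffer function from the sketches above:

const defaultJobOffer: JobOffer = {
  title: 'TypeScript Developer',
  description: 'Senior TypeScript developer needed',
  salaryRange: '10k-15k',
  location: 'Warsaw',
  remote: true,
  seniority: 'senior',
  benefits: ['private healthcare'],
  expiresAt: new Date('2030-01-01'),
};

describe('and the salary range is blank', () => {
  // only the attribute under test differs from the default
  const jobOffer = { ...defaultJobOffer, salaryRange: '' };

  it('throws a validation error', async () => {
    await expect(createJobOffer(jobOffer, { id: 1 })).rejects.toThrow('Salary range is required');
  });
});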

4. Use factories for creating objects or records in DB


One of the things I noticed when I switched from Ruby on Rails (RoR) to Node.js is the different way database records are populated in tests. In RoR the default is to use factories with the FactoryBot gem, while Node.js projects favour manual SQL inserts.

Using factories offers many benefits over manual SQL inserts:

  • Requires less code and is more DRY

  • Developer doesn’t need to care about all required fields in the SQL table (e.g. created_at will be filled automatically), only those that are tested

  • No need to rewrite tests after adding new fields

  • Offers high readability with traits and default values

  • Type checks (when using TypeScript)

  • Automatic creation of related records in other tables

Luckily, ThoughtBot offers an implementation of FactoryBot in JavaScript – Fishery. In my current project we use it with great success, saving thousands of lines of code. Using factories provides a similar effect to using default params: in tests you only specify the attributes you care about in a given scenario.
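For illustration, a minimal Fishery factory might look like this (the User type and its attributes are made up):

import { Factory } from 'fishery';

interface User {
  id: number;
  email: string;
  isAdmin: boolean;
}

const userFactory = Factory.define<User>(({ sequence }) => ({
  id: sequence,
  email: `user-${sequence}@example.com`,
  isAdmin: false,
}));

// In a test, only the attribute relevant to the scenario is overridden
const admin = userFactory.build({ isAdmin: true });
const regularUsers = userFactory.buildList(3);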

Conclusions


As I stated above, the most important part of writing good tests (apart from actually testing the code) is readability. Other developers in your team need to understand what scenarios are covered by your tests and what may be missing. This helps them easily add new tests when requirements change, or spot missing edge cases. In this article I've shown some of the tricks that we use at Exlabs to write cleaner and more maintainable tests with the Jest testing framework.

Why are fast-paced companies moving back to static web pages

Krzysztof Szromek CEO

Was the internet more fun in the '00s? I remember it as a place full of information-dense forums, flash animations, and blinking text. Everyone was hacking web pages together in "tables" with HTML. WordPress was not even a thing until 2003. We used CGI scripts to bring some life to the web, but they were not widely adopted and were tricky to maintain. Creating dynamic services was definitely "not for mortals". Because of this, the internet was mostly a static place, granting the same experience to everyone.

Static pages are back in the game now! Seeing the IT market going back to solutions from over two decades ago made me feel nostalgic. We have finally gone full circle, and soon Flash will be a thing once again 🤓. Jokes aside – forward-thinking companies are relying on static content even more than a few years ago. They are not recreating the internet from the past though. Those websites are not even close to ones from over 20 years ago in user experience, development, and content management. They are part of complex content delivery pipelines and use cutting edge technologies for best results.


What is a modern content pipeline?

 

Web portals from the early '00s are similar to automobiles before the Ford Model T: not mass-produced, complex to put together, and expensive to maintain. Modern content pipelines are more like the assembly lines we know today. They are divided into isolated, repetitive steps that ensure a high-quality, cost-effective, and decoupled product delivery flow.

For a modern content pipeline, you need page templates written in HTML and CSS with placeholders for all the content – titles, copy, images, names. For effective collaboration, store them in a version control system like Git.

The second ingredient is a shared content repository. Use a headless CMS like Contentful, a more traditional WordPress or Drupal, or even JSON/markdown files stored on disk.

Don’t forget about a static site generator. It gets data from a content repository and puts it into page templates. It is also responsible for content optimization (image processing, social tags, and more) and feature enrichment (search tools, page analytics, e-commerce, payments). In the end, it exports final HTML files to a designated location.
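As a rough illustration, wiring a Contentful space into a GatsbyJS site can be as small as a plugin entry in the site's config (a minimal sketch; the environment variable names are your choice):

// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-contentful',
      options: {
        spaceId: process.env.CONTENTFUL_SPACE_ID,
        accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
      },
    },
  ],
};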

As soon as static pages are generated, you can upload them to your hosting server and propagate them across the globe with a Content Delivery Network (CDN). This last step ends the life cycle of a content change in the modern content delivery pipeline.

What is great about it is the fact that you can automate it. Re-deploy your site on every single content change or a page template update automatically in a matter of seconds. Many modern hosting providers like Netlify provide built-in continuous delivery solutions with an easy few-click setup.
Diagram of static web pages process.

How does it help?

Transitioning to a static page delivery model does more than reduce hosting costs. Every static HTML/CSS/JS file can be propagated globally through a Content Delivery Network (CDN), further reducing loading times and therefore increasing conversion rates. Improving loading time by 1s can improve the conversion rate by a few percent (2% for Walmart, 7% for COOK). Switching from a dynamic page to a static one can yield load time improvements of 2s or more.

Removing the content processing step simplifies web server requirements. The server no longer needs a database instance, additional software, or complex configuration. This decreases the attack surface significantly, rendering the server a low-risk asset that is comfortable for DevSecOps to maintain.

A modern stack for static page delivery embraces a strong separation of CMS features and HTML deliverables. Adopting headless CMS software like Contentful enables companies to have a unified marketing strategy: all company content materials live in a single place. System integrators use those materials to build static pages, augment mobile apps, or enrich dynamic web applications. Frameworks like GatsbyJS build HTML from predefined templates on every data change and enable teams to reuse components from existing ReactJS codebases when crafting user experiences.

When to use it?

 

Whenever a portal you want to present is the same for all visitors and has strict performance requirements, you should consider deploying it through a modern content delivery pipeline. Static pages work well as marketing websites, blogs, knowledge hubs, and product documentation. Translating portals targeted at multilingual audiences is also a lot easier with an optimized static page delivery pipeline.

Modern content delivery also accommodates cases featuring a limited set of dynamic functions. Such pages deliver a rich user experience by connecting to external APIs with the help of JavaScript code. There are many service providers – payment gateways, page commenting solutions, search engines, you name it – all ready to integrate.

How much does it cost?

Deploying a modern content delivery pipeline is usually a low-to-medium size effort and generally does not require a significant budget. To fully understand costs associated with such technology, you need to account for:

  • Development cost – This includes getting the development team familiar with the technology and with continuous delivery concepts. You can address it by using a well-documented and popular static site builder that your team is most familiar with – GatsbyJS being the default choice for most companies.
  • Content management cost – You will need to onboard the content delivery team to a CMS that integrates with the static site builder. This usually generates recurring subscription costs. The most popular solution, Contentful.com, is free for small teams and costs around $500 for medium-sized ones.
  • Hosting cost – Hosting solutions already include continuous delivery features, cloud distribution networks, and more, so you don't have to pay for them separately. The most popular option, Netlify.com, will cost you around $50-$150 a month.

These costs are most likely going to be offset by a decreased server fleet size and increased conversion rates, resulting in a net positive.

Wrapping up

Reimagining content delivery pipelines can be a first step towards building well-architected and interconnected platforms for all players. With its relatively low entry point and upfront cost, you can democratize access to the company's information. It drives further innovation and, with a bit of luck, will become a catalyst for digital transformation.

If you would like to learn more about building connected IT systems, I was involved in creating the "Connected Platform" whitepaper, coming soon on our website (be the first to know when it's published by following our Company Page on LinkedIn).

Which front-end framework should I choose for my project?

Krzysztof Szromek CEO

Developers are always bragging about the superiority of their technology over everyone else's. The front-end framework market is no different, and before making a choice for your project, you should consider a few factors first.

Environment

Whether you're planning the structure for a new project or upgrading an existing behemoth, you might be swayed by the great press around a given framework, which would lead you to the decision: "let's use framework X, as it's such a great framework".

That's not always the best choice – consider Vue and Angular as an example. The former you can just throw in, plug-and-play style, while the latter is quite opinionated and requires restructuring the app for it to work well.

Performance

It's common to hear that framework X is waaay outperformed by framework Y. You can even find tons of benchmarks proving that point. The truth is, unless you are building a data- or graphics-heavy app, most frameworks will fulfil your needs performance-wise.

Community and popularity

Framework popularity and community activity play a great role in how fast you can move with your work, thanks to the support you get. For example, Vue is a great framework, but since it's not as popular as React or Angular, there's much less reusable code around (libraries, components, …) that you could just plug in and move ahead with – instead, you must write it from scratch.

The team

This may be the most important factor of all. If your team has zero experience in framework X, which you think is best for your needs, then it might do the job, but the probability of your team making lots of smaller or bigger mistakes along the way is quite high, and this, at the very least, will cost you time; at worst, you might end up rewriting your code pretty soon.

On the other hand, if you choose a framework known to at least one developer on the team, their experience will result in good architecture, and their code reviews will catch most of the rookie mistakes the rest of the team might make.

However, if you're still set on this one perfect framework that no one on your team has experience with, maybe the best choice is to get some on-demand support from an experienced dev team that will help you kickstart the project. Also keep in mind that learning curves can differ considerably between frameworks.

Conclusion

You may not pick the perfect framework for your project, but this doesn't mean you won't do a great job. Consider all the pros and cons, and don't despair over 10% worse performance or a 20% bigger library size if that's not really such a great concern for the project.
That being said, we at Exlabs have been growing fonder of React lately, but we also work with Angular 4 quite often.

Further reading
https://medium.com/@ZombieCodeKill/choosing-a-javascript-framework-535745d0ab90

When should you modernise front-end?

The web world is moving fast, and solutions built a few years ago are quickly becoming outdated. Does this mean your company should throw away all of the work and investment made in those products and rebuild them from scratch? Not really, but you can't neglect the importance of having a modern and fresh stack. So when should you consider modernising?

Slow & Heavy

Poor web application performance might have a major impact on your users' satisfaction, which will result in low conversion and fewer sales. Usually, this is the result of a combination of many factors, including poor code quality, an obsolete technology stack, or serving too much data on your pages.

Complex and cumbersome code

You've realised that maintaining and further developing the app is becoming more time-consuming and expensive. You find it hard to find talent to work with your code, or even worse, your talent is leaving! As a matter of fact, developers prefer to experiment with new technologies rather than work with legacy code. This requires a solid plan for a front-end architecture that will allow you to move swiftly. Starting with a single-page application might be the first step.

Outdated usability

Do your current services offer visitors a modern and clean design? Users now expect intuitive, easy-to-navigate online experiences. Having outdated UX in your consumer-facing products means you are probably leaving money on the table. Also, don't forget about your internal systems – outdated systems affect your employees' performance and make training very expensive.

No RWD or Mobile-first Approach

You'd be surprised how many web apps still neglect the mobile experience, while data says mobile usage is starting to exceed desktop in almost every market! Oh, and keep in mind the various OS versions – you should test on them regularly to make sure your app is usable there!

Summary

The reality is, building an application is only the first step. The serious problems emerge over time, and you constantly need to keep an eye on your product's state. Being proactive will help you avoid most of the problems, but often the realisation comes too late. In that case, don't panic. All you need is a proper roadmap and strategy – it will be painful at first, but eventually you will end up with a healthy stack and satisfied team and end users.

Virtual CISO: How to sleep well at night without experienced Chief Information Security Officer (featuring DevSecOps, SSDLC, vCISO)

Peter Kolka
Let’s face it – application security (AppSec) and compliance have never been sexy, but there’s no need to convince anyone they are not optional. What needs emphasising, though, is that apart from the risk factor that has been played over and over again, there’s also a significant positive driver – customers. Enterprises and regulated (but often smaller) businesses highly value partners who meet governance criteria and can easily demonstrate it. As a result, running a tight ship tends to massively simplify the procurement process and even increase sales altogether.
Source: https://www.dynatrace.com/news/blog/what-is-shift-left-and-what-is-shift-right/
Over the past two decades, IT has shifted left, with DevOps practices delivering a development infrastructure that's fully automated and operating on a self-service basis, which has improved development productivity and velocity beyond recognition. Nowadays, you are expected to deliver software releases even several times a day to keep up with the market. In this fast-paced environment, there is simply little time for post-development security reviews of new software versions or analysis of cloud infrastructure configurations before the next development sprint begins. Now, DevSecOps is an expansion of the same concept based on a further culture shift that aims to bake security into those rapid release cycles (as opposed to it being a stage at the end of the Software Development Life Cycle [SDLC]). Such a change does not happen merely by a committed decision – instead, it needs to be continuously reinforced – a natural thing for a CISO in the business. But how do you get one?

Life without a CISO

Over 45% of businesses do not have a CISO position filled, although, at the most basic level, GDPR nominations are required. It's not that companies don't value or want cybersecurity leadership – it is just getting harder and harder to find and retain these individuals. As things stand, it is not difficult, however, to make the argument that every software and test engineer (or even an admin) is a security engineer, as in practice even the smallest decisions can (and do) translate to the security of their business as a whole. The problem, though, is that the traditional approach to security within the SDLC too often ignores the context and focuses on vulnerabilities on a micro scale (for example, at the individual ticket level) – not the whole user journey that's being created or changed. Security testing tends to come up too late in the production process (sometimes even outsourced via pentests). As a result, it ends up tremendously expensive and disliked (that's assuming the problem is picked up and fixed in the first place – if not, think incident response, risk assessment, the fix itself, more testing, plus a lot of the time a reputation cost or even a ransom or penalty).
Source: https://hackernoon.com/what-is-purple-teaming-in-cybersecurity
At this stage, it is clear that a single dedicated CISO who can be effectively outsourced to be responsible for the complete security of the application is not viable in the vast majority of cases. While the big enterprises do have budgets for luxuries (essentials, really) such as red and blue teams, and possibly a whole team of security specialists to deal with obvious bottlenecks in an imperfect process (e.g., multiple sprints closing within a short time frame causing demand spikes), the same cannot be said for the challengers, whose risks are just as high but for whom traditional 'model' solutions are simply out of reach.

Levelling the AppSec field

Even though you try your best to simplify them, your technical estates become increasingly sophisticated, and the competitive landscape makes the number of initiatives your business needs to juggle grow exponentially. With that in mind, no CISO or security architect can be allowed to become a single point of failure, as they are physically unable to aggregate all functional requirements and augment them with non-functional, security ones.
Source: https://www.zentao.pm/blog/SDLC-software-engineering-software-development-lifecycle-811.html
That does not mean, however, that the battle is lost – far from it. Just like DevOps commoditised the development infrastructure ecosystem, an analogous approach is starting to take place with AppSec, where the owner can, and should, shift left in the SDLC, regardless of the budget. Instead of passively addressing issues flagged by pentests (i.e., well after the work is done), the SDLC can be adjusted to accommodate a security-first culture from inception. More often than not, it is at the very least a surprise to many technical leaders that the majority of pentesting is not a manual specialist task but a largely automated process. And if that's the case, a natural realisation should be that the same automation should (and can) be moved up the development lifecycle, where it is much more likely to reduce the cost of change. Don't get your hopes too high, though – automation is an essential but seemingly passive aspect (even if you implement Dynamic Application Security Testing [DAST] and Static Application Security Testing [SAST] as part of your testing protocol, you still have to monitor and police the console!), while making the whole SDLC security-driven (i.e., an SSDLC) is a considerably more refined effort.
DAST vs SAST
Source: https://www.invicti.com/statics/img/blogposts/iast-vs-dast-vs-sast.png
My ambition here is not to scare you but at least inspire you to consider taking the first steps, and there is undoubtedly more low-hanging fruit.

First steps to focus on

In my experience, the following three pillars tend to make the biggest difference overall and provide the highest ROI:
  • Quality of testing (automation)
  • Quality of coding (training and onboarding newcomers, checklists, plugins)
  • Quality of design
While we have already talked a little bit about testing, the most difficult change to make concerns design and architectural flaws – a problem area that has recently been recognised by OWASP (the Open Web Application Security Project). It is hard because that's where a policy change most visibly becomes the culture change I alluded to earlier, rather than a tool or add-on that you can simply bolt onto the software.
Source: https://www.techtarget.com/searchsoftwarequality/definition/OWASP

Starting with secure application design

The element that is typically missing in solution design is threat modelling, i.e., a setup where security considerations become a prerequisite for a requirement being finalised. While there are no magic pills or simple pieces of advice that you can generically apply to every setup, the foundations of threat modelling can be distilled into 3 aspects:
  • Understanding the threat actors – i.e., who may be interested in compromising your system
  • Defining the threat itself – i.e., what are you afraid of, what is the risk
  • Identifying the possible attack vectors – i.e., how they may try to get it done
Source: https://www.praetorian.com/blog/what-is-threat-modeling-and-why-its-important/
Once you have an idea of the base threat models (OWASP's list of 40 original sins is a great place to start), you're able to incorporate the abuser stories (security requirements) into the actual user stories being developed, and those can then be easily augmented with appropriate test cases and security metrics. By this stage, you're likely thinking that what I am proposing sounds incredibly disruptive and expensive. In reality, however, such a mindset change is considerably cheaper in the long run, but it's just as important to remain pragmatic in how you implement the shift. You quickly realise that not all requirements introduce equal risks, and hence it's only natural and sensible for the user stories to be filtered against threats to the model, which is a form of prioritisation.

Demystifying application security

The message I am trying to get across is certainly not that you don’t need a CISO – just don’t assume they ride to work on a white horse. The transition from SDLC to SSDLC is a convoluted one, and while there is low-hanging fruit to be picked, there are also no universal truths or one-size-fits-all. And that means a CISO needs to continuously monitor that transition and have the authority to pivot should the process break or become ineffective at any stage. 
Source: https://nix-united.com/blog/secure-software-development-process-nixs-approach-to-secure-sdlc/
Let's be realistic – things like abuser stories will not magically solve all your problems but are a great starting point. As the process matures, there will still be increasing pressure from the business to drive the costs of the secure development process down – and that's where things like checklists for particular system types and guides (secure design patterns, relevant reference architectures) will start to come in handy. It sure will take some time to distil general solutions to security problems that you can apply in different situations, but eventually, you'll start to automate and centralise the knowledge and ensure all engineers get to know the enemy – it will be in their blood.

The future is brighter than you think

The further left you manage to shift the security mindset within the SDLC (testing -> coding -> design), the bigger the impact you are going to experience. If you don't have (or can't have) a dedicated CISO to orchestrate and own this transition, let it not stop you – there are virtual CISO (vCISO) options available too.
Source: https://underdefense.com/wp-content/uploads/2021/04/vCISO-for-Technology-companies-001.png
These typically will not only hit the ground running quickly but also tend to feel less obliged to play nice with office politics. So while you should not underestimate the task at hand, there is also no reason not to start questioning the status quo now and begin making small steps with a huge impact.

Facebook and Clickjacking Attack - Check If Your Website is Vulnerable

Damian Pawłowski Head of Frontend

Could a popular website, developed by a large team of developers and used by millions of users worldwide, be vulnerable to a clickjacking attack? 
– Unfortunately, yes.

This was the case with Facebook, which for a long time was vulnerable to this attack – it even had its own name, "likejacking". In this article, I will explain what a clickjacking attack is and what methods you can use to protect your app against it.

If you prefer to see a short video about this topic, check out our content on YouTube about clickjacking in a React application:

Clickjacking is a type of web attack where an attacker tricks a user into clicking on something that they did not intend to click on. One example of a clickjacking attack on Facebook was the “Likejacking” attack. In this attack, the attacker would create a webpage that would appear to contain a legitimate Facebook “Like” button. However, when the user clicked on the button, they would actually be activating a hidden action, such as sharing a malicious link on their Facebook profile or even “liking” a page that they did not intend to.

In the video posted above, I show what the process of layering one application on top of another looks like.

Clickjacking example

The attacker would typically use social engineering techniques, such as enticing the user with a sensational headline or offer, to encourage them to click on the fake button. Because the fake button was layered over the real button, the user would inadvertently activate the hidden action.

This type of attack is often designed to allow the attacker to trigger user actions on the target website, even if anti-CSRF tokens are used. Testing should be performed to determine whether websites are vulnerable to clickjacking attacks.

Check if your website is vulnerable to clickjacking attack

If you want to check if your app is vulnerable, create any local application and try to insert your app via an iframe. This way you can check whether the page becomes visible or whether you get an error in the console. The simplest possible way is to create an .html file and insert this code inside:

<html>
    <head>
        <title>Clickjacking attack test</title>
    </head>
    <body>
        <iframe src="LINK_TO_YOUR_WEBSITE" width="600" height="600"></iframe>
    </body>
</html>
Remember to replace LINK_TO_YOUR_WEBSITE with your actual link to the website.

Clickjacking defense

There are three main possible mechanisms that can be used to defend against Clickjacking:

  • Recommended way: Preventing the browser from loading the website in a frame using the X-Frame-Options or Content-Security-Policy HTTP headers
  • Preventing cookies from being included when the app is loaded in a frame, using the SameSite cookie attribute (the use of this attribute should be considered as part of a defence-in-depth approach)
  • Using JavaScript code called a "frame-buster" to prevent the website from being loaded in an iframe


Example of using Content-Security-Policy:


Content-Security-Policy: frame-ancestors 'none'; 
This prevents any domain from framing the content of your app. With this setting, no website can display your app in an iframe.

Content-Security-Policy: frame-ancestors 'self'; 
This setting will allow your app to be displayed only inside your own domain.
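For illustration, here is one way such headers might be set in a Node/Express application (a minimal sketch; middleware such as helmet offers equivalent options):

import express from 'express';

const app = express();

// Refuse to be rendered inside a frame: CSP for modern browsers, X-Frame-Options as a fallback
app.use((req, res, next) => {
  res.setHeader('Content-Security-Policy', "frame-ancestors 'none'");
  res.setHeader('X-Frame-Options', 'DENY');
  next();
});

app.get('/', (req, res) => {
  res.send('This page should not be frameable.');
});

app.listen(3000);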

If you want to support even outdated browsers, you should use several methods to reduce the risk as much as possible.

Facebook has implemented various countermeasures over the years to prevent clickjacking attacks, such as using frame-busting scripts and X-Frame-Options headers to prevent third-party sites from embedding app content in an iframe. 

Check if your application is safe. Every additional layer of security, even a small one, makes a big difference.

Use cases of HashiCorp Vault

Mateusz Wilczyński CTO

Hashicorp Vault is a popular tool for securely storing and accessing sensitive information, such as passwords, API keys, and certificates. It does this through "secrets engines," which are plugins that extend Vault's functionality and enable it to store and manage various types of secrets.

Secrets engines offer different functionalities: from simply storing and reading data, through generating dynamic credentials on demand, to providing services like encryption, TOTP generation, certificates, and much more.

In this article, we’ll take a look at some of the most valuable use cases of Hashicorp Vault and the secret engines that are used to deliver that functionality. 

If you’re new to Hashicorp Vault, check out our previous article via https://exlabs.com/insights/hashicorp-vault_is-it-worth-it 

Storing Static Secrets

This functionality is delivered by the KV secrets engine, which is Vault’s most essential and widely used secrets engine. It allows you to store and manage simple key-value pairs of secrets, such as passwords and API keys. The KV secrets engine comes in two versions: KV v1 and KV v2.

KV v1 is the original version of the secrets engine and has a simple key-value storage model. It offers a reduced storage size for each key and slightly better performance than v2 because it does not store additional metadata.

KV v2, on the other hand, offers more advanced features, such as data versioning. Versioning supports soft-deleting, undeleting, or entirely removing data, and each operation can have a different permission set. Additionally, you can use Check-and-Set operations to avoid overwriting data unintentionally.
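As a quick illustration, working with KV v2 from the Vault CLI looks roughly like this (paths and values are placeholders):

# Enable a KV v2 secrets engine under the path "secret/"
vault secrets enable -path=secret -version=2 kv

# Write and read a key-value secret
vault kv put secret/myapp/db password=s3cr3t
vault kv get secret/myapp/db

# Inspect version metadata and read an older version
vault kv metadata get secret/myapp/db
vault kv get -version=1 secret/myapp/db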

Short-lived Cloud Identity-Based Access

Let's take a look at this challenge from the perspective of AWS Cloud and the corresponding AWS Secrets Engine. This engine allows you to securely store and manage AWS access credentials. It also dynamically generates AWS credentials based on IAM policies. Such credentials are time-limited, and Vault automatically revokes them on the AWS side when the Vault lease expires.

The engine supports generation of the following credential types: IAM User, STS AssumeRole, and STS Federation Tokens.

Similar secrets engines exist for Azure, Google Cloud, and even AliCloud. Using them is beneficial in multi-cloud and hybrid-cloud applications. In that scenario, users authenticate to Vault, and Vault generates the particular cloud credentials with proper access, without you having to create and manage each user's account in each cloud separately.

Generating dynamic secrets for databases

Hashicorp Vault offers a dedicated secret engine for each major database: PostgreSQL, MySQL, MSSQL, Elasticsearch, MongoDB, and Snowflake. 

The database secrets engine generates dynamic credentials for your applications. For example, it can help grant application access to specific databases or tables and revoke access when it is no longer needed. The database secrets engine also supports rotating credentials on a scheduled basis for improved security.

The database engine generates a unique username on the database side for each service accessing it. It gives you much better auditing possibilities. 

Enhanced SSH access control

This functionality is delivered by the SSH secrets engine, which offers secure authentication and authorisation to access hosts via the SSH protocol.  

The engine supports the following modes:

  • Signed SSH Certificates – the most straightforward, powerful, and platform-agnostic solution. Works based on Vault's CA (Certificate Authority) capabilities.
  • One-time SSH Passwords – Vault generates a one-time password each time a client wants to SSH into a remote host.
  • Dynamic SSH Keys (deprecated) – Vault generates a new SSH key pair for each client and saves the newly generated public key on the host. This method is deprecated and not recommended because of security drawbacks.

Owning Public Key Infrastructure

The PKI secrets engine enables you to manage your own public key infrastructure (PKI) within Vault. The engine generates dynamic X.509 certificates.

It can be useful for generating SSL/TLS certificates for your applications and servers. Additional features like auditing, telemetry, and fine-grained role-based access are also available. 

Conclusion

Hashicorp Vault offers a wide range of use cases for various types of organisations and circumstances, delivered by dedicated secrets engines. The KV, cloud, database, SSH, and PKI secrets engines are Vault's most valuable and widely used ones. Using them, you can securely store and manage sensitive information and grant your applications the necessary access to resources.

Unlock the Benefits of FinOps: Streamline Operations, Increase Efficiency and Drive Business Growth

Paweł Żurawka Delivery Manager

Imagine a scenario where the IT department is a well-oiled machine, where resources are allocated with precision, and costs are transparent and aligned with business goals. That’s the power of FinOps. But for many businesses, this might seem like a pipe dream. However, with the right approach, it’s entirely possible to transform the way your IT organization operates and start seeing real results.

At its core, FinOps is all about bringing financial discipline and accountability to the management of IT resources. It’s a mindset and set of practices that help organizations to better understand the true costs of their IT resources and make more informed decisions about how to allocate them. But it’s not just about cutting costs – FinOps is also about maximizing the value of those resources to the business.

What is FinOps?

The key is to start small and build momentum. But, where to begin? Many organizations are just starting to explore FinOps and are at the “Crawl” stage of the FinOps Maturity Model. To help these organizations take the next step, we’ll take you on a journey from Crawl to Run with FinOps.

First, assess your current state. Take a deep dive into your operations to understand your current level of cost optimization and financial control. Identify the key processes, systems, and technologies that are used and the people and teams responsible for managing them. This information will help you set specific, measurable goals for what you want to achieve with FinOps.

Next, it’s time to start walking. As you implement FinOps practices and see improvements, you’ll be on your way to the “Walk” stage of the FinOps Maturity Model. Key practices at this stage include:

  •  Cross-functional collaboration: Collaboration between finance, operations, and technology teams is key to the success of FinOps. Make sure everyone is working together towards the same goals.

 

  • Technology adoption: Leverage the right technologies to support your FinOps journey. Consider tools such as cost optimization platforms, financial management systems, and cloud management tools to help you achieve your goals.

Finally, it’s time to run. As you continue to make progress, you’ll reach the “Run” stage of the FinOps Maturity Model. At this stage, you’ll be able to fully optimize costs and improve financial control, thanks to:

  • Advanced processes: You’ll have fully integrated FinOps into your operations, streamlining processes and maximizing efficiency.
  • Advanced technologies: You’ll be leveraging the latest technologies to support your FinOps practices and drive continuous improvement.
  • Continuous improvement: You’ll be continuously reassessing your operations and making improvements to keep your FinOps journey on track.

As you start to implement FinOps, data and analytics will play a vital role. By collecting and analyzing data on the usage and costs of your IT resources, you can understand how resources are being used, identify areas for improvement, and make decisions on resource allocation. As you see results, the importance of data will become more apparent and you’ll begin to see the true value of FinOps.

As your business starts to embrace FinOps, you’ll begin to notice a shift in the culture of the organization. All stakeholders will begin to take ownership of the financial performance of IT resources and work together to optimize their use. This culture of collaboration and accountability is crucial for achieving strategic goals.

When implemented correctly, FinOps is not just a cost-saving measure, it’s a way to drive business growth and create a culture of continuous improvement. With FinOps, businesses can finally break free from the status quo and start achieving their true potential. With data-driven decision-making and efficient resource allocation, the sky’s the limit for what your IT organization can achieve.

Time tracking exercise and what problems it solved

Over the years I have migrated from the role of a Product Owner to that of a Scrum Master. While those may seem like heaven and earth (depending on where you're seated at the time), one problem was common and equally frustrating: sprint planning vs actual delivery. I've worked with countless teams using anything from time-based to points-based estimation, but regardless of the technique, the problem of not delivering the agreed scope was always there.

There was a project where we decided to start using story points: the team had been working on it for over 6 months at that point and they were happy and open to making a change. After 5 sprints I had a solid idea of what the team's velocity was.

All of a sudden some strange things started to happen. Velocity was shrinking — each sprint we delivered fewer and fewer points, and even if I took fewer points into a sprint, tickets were still not getting completed. I quickly realised that there were some external factors I wasn't aware of that were impacting my team and slowing the work down. Not to mention the client, who was not exactly happy 🙁

So I decided to introduce Toggl in 2 teams in parallel to test:

  • What time is spent on — time not covered by sprint tickets,
  • The accuracy of the estimations,
  • How estimation is affected by a senior developer. Even though we used planning poker software where everyone had to vote before the scores were revealed, it was clear that the less senior members were estimating under the influence of the senior member. They tended not to see all possible problems or were over-optimistic in their estimations.

I of course explained the reasons for the time tracking exercise to the team: that I was going to treat it as an exercise, that I would track my time too, etc. I explained that we needed to quickly pinpoint where the biggest challenges with the project were, why we were falling behind the scope, and that I wanted to understand why the recently introduced code review rules seemed to hugely increase the time being spent on tickets.

At first, the team wasn’t happy about time tracking. The reasons can be easily guessed:

  • It’s time-consuming,
  • I'm forgetting to switch the tracker when I start a new task,
  • When I'm doing code review I'm quickly switching between tasks, so it's a pain to also change the tracker every time.

There are also reasons that were not being expressed out loud but are common to human nature:

  • Someone wants to control me,
  • I’m being micro-managed,
  • The management want to check on me,
  • They don’t believe I’m working my hours (especially remote workers),
  • My tasks take me longer than I admit, and now I will have to show it — I'm afraid of the results/consequences,
  • Maybe I'm not as good a programmer as John, he does his tasks faster…

One of the great benefits of time tracking is the understanding of where and how your time is spent. Once you start tracking your time you can see the big picture and how productive you are. By productive I don’t mean how quickly you work of course.

Speed of delivery doesn't equal the quality of the code produced. I much prefer to work with a developer whose tasks are done properly, with time spent on understanding the ticket scope, whose work doesn't require a lot of changes during code review and passes QA, rather than a developer who does their task quickly, then needs to fix it several times during code review, and in the end QA rejects the ticket as it's not fully completed.

I went slightly off topic here, so let me get back to my teams…

Very quickly I realised that in team A the problem was work done outside tickets. The biggest offenders were:

  • The client would ask for calls to discuss technical requirements for new features or solutions. And those were not 5-minute calls. As we started to measure those calls, I was able to add placeholders to the next sprint to account for time spent on new feature discussions. I was also able to show the client how this impacts the speed of work.
  • There was maintenance work required for deploys, problems with the server, changes to the code review/deployment process, etc. These were important but not communicated clearly, so they were done outside the sprint scope. If you measure this work, you can be better prepared in the future and allocate the time needed to complete it.

At the same time, team B had just started to work on a new project and was using time estimation. After the first time-tracked sprint, I realised tickets were underestimated by an average of 64% (!). When the time for the new sprint planning came, I pulled up my report and explained the results to the team. They were surprised but really interested. You can probably guess that the next sprint was much better estimated 🙂

How time tracking helped me to manage my team better:

Having collected the data, I could see the bigger picture and introduce small, iterative adjustments. I could see the breakdown of weekly time blocks like calls, tasks, code review, and other non-sprint-related tasks. Here are some of the main points:

  • Time on calls is significantly longer for remote teams
  • Time breakdown will be different for each role in the team (developer / tester / SM / PO)
  • Newly introduced rules for code review were too strict and slowing down work
  • The team tried to write the "best code" and cater for all possible future scenarios instead of focusing on the ticket scope
  • Some of the team members had their tickets stuck in code review for a longer time as the code was not properly written. Also, some team members took longer to do code reviews. Knowing that, we could help them improve the areas that needed it most.
  • During the daily planning meeting, it was so much easier to talk about yesterday's work when you had the list of tasks from Toggl in front of you.

How time tracking helped me to manage my time better:

I never had a problem finding things to do at work; I wouldn't sit during work hours staring at the computer screen wondering what to do – there was always something that urgently needed to be done. The problem was that when working on multiple projects where I was billed to the client on a percentage basis, I was sometimes unsure how much time I spent on which project and whether that was fair to the customer. There was also the question of whether I was spending too much time on a project. When you overdeliver, the customer assumes it's still the standard 25% of week time and would not understand the reason for permanently increasing my role to fulfil all the tasks.

Time tracking software

There are several time tracking tools on the market. I personally have used Toggl and Harvest, but you should have a play around and investigate before you decide.

Here’s a list of other time tracking tools you should consider:

And what about you? Have you tried to implement time tracking within your Agile workflow? Please share in the comments below or feel free to drop me a line 🙂

🔒 The Essential Areas of AWS Security 🔒
What You Need to Know

Krzysztof Szromek CEO

Navigating the AWS security landscape is crucial for IT leaders. Let’s explore vital areas you need to focus on to protect your organization’s data and infrastructure in the cloud.

TL;DR 📝

  • 🚪 IAM: Control access and permissions
  • 🛡️ Network Security: Secure your cloud environment
  • 🗃️ Database Security: Protect sensitive data
  • 🔑 Secrets Rotation: Minimize unauthorized access risks
  • 💻 Code Quality: Deploy vulnerability-free applications
  • 🎯 CIS Benchmark: Adhere to industry-standard best practices


🚪 IAM: Your Gateway to Secure Access


IAM is central to any AWS security strategy. It manages access to resources in the AWS environment, letting users securely administer their accounts and permissions. IAM enables organizations to establish distinct identities for each user, offering granular control over access and actions.

IAM helps maintain compliance by assigning role-based permissions, ensuring only authorized users can access sensitive data or carry out tasks within the AWS environment. To reinforce security, implement best practices like creating unique user identities, rotating passwords, enabling multi-factor authentication (MFA), and using role-based access control (RBAC). Regularly reviewing IAM policies and audit logs is also vital.

🛡️ Network Security: Building a Fortified Cloud Environment


To ensure network security on AWS, you should use Amazon Virtual Private Cloud (VPC), security groups, network access control lists (ACLs), encryption, and monitor network activity. VPC enables you to create a private network in the AWS cloud – something that is essential for database security – while security groups and ACLs provide virtual firewalls to control traffic flow and restrict access to resources. Encryption helps protect data in transit and at rest. Monitoring network activity with AWS tools, such as Amazon CloudWatch and Amazon GuardDuty, can help detect potential security threats. By utilizing these networking tools, you can help protect your AWS network from potential threats and keep your data and applications secure.

🗃️ Database Security: Safeguarding Your Data


Database security is critical for AWS. Familiarize yourself with Amazon Relational Database Service (RDS) security features, including authentication and encryption. Use encryption at rest and in transit to protect sensitive data. Enhance database security by monitoring and logging activity, setting up alerts for suspicious behavior, and having a backup and disaster recovery plan in place.


🔑 Secrets Rotation: Keep ‘Em Moving, Keep ‘Em Safe


Rotating secrets like access keys, passwords, and certificates is crucial to minimize the risk of unauthorized access. Implement a regular rotation system, either manually or using tools like AWS Secrets Manager. Incorporate change monitoring for alerts on unexpected secret changes or unauthorized usage. Learn more on how to automate the secrets rotation process.

💻 Code Quality: Crafting Ironclad Applications


Deploying high-quality code free of vulnerabilities is essential. Adopt a “shift-left” approach to code review and testing, and consider automating risk analysis with static application security testing (SAST) tools. These actions help identify and resolve vulnerabilities before they become threats.

🎯 CIS Benchmark: Hitting the Security Bullseye


The Center for Internet Security (CIS) Benchmarks provide best practices for securely configuring cloud infrastructure in line with industry standards. Regularly use CIS Benchmarks to keep your environment secure and compliant with evolving guidelines. Leveraging CIS Benchmarks helps protect data and reduce security breach risks.

Need a helping hand? Do not hesitate to Contact Us