⏩ Writing cleaner Jest tests 🚀

Wojciech Mikusek Tech Lead

Unit and integration tests are an essential part of a software developer's daily job. Sometimes it takes more time to write a proper test suite than to actually implement the functionality. That's why writing well-organised and readable tests is one of the most important skills every developer should acquire. In this article, I will share some tips that will help you write clearer tests.

Jest is the dominant JavaScript testing framework, with over 22 million weekly downloads from npm in late 2023. It provides a wide variety of tools for writing good test cases; however, it is not focused on the structure of larger test suites. There are alternative libraries, but they are far less popular, as shown in the chart below.

Source: https://npmtrends.com/jasmine-vs-jest-vs-mocha-vs-vitest

I have a Ruby on Rails background, where RSpec is the go-to test framework. It describes itself as “Behaviour Driven Development for Ruby” that makes “TDD productive and fun” (source: https://rspec.info/). Let's see how we can introduce some of this BDD flavour and other good practices into tests written in Jest.

 

1. Use nested describe blocks instead of specifying context within the test message


In many projects that use Jest, test suites tend to look like this:

As a real-world example, take a look at the TensorFlow test suite.
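The original screenshots are not reproduced here, but a hypothetical suite written in this flat style could look like the sketch below (parseAmount and all other names are illustrative, not taken from any real project):

describe('parseAmount', () => {
  it('returns a number when the input contains a currency symbol', () => {
    expect(parseAmount('$12.50')).toBe(12.5);
  });

  it('throws an error when the input has no currency symbol and strict mode is enabled', () => {
    expect(() => parseAmount('12.50', { strict: true })).toThrow();
  });

  it('returns null when the input has no currency symbol and strict mode is disabled', () => {
    expect(parseAmount('12.50', { strict: false })).toBeNull();
  });
});

Notice how every test message has to repeat the whole context ("when the input has no currency symbol and...") just to distinguish it from its neighbours.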

With simple unit tests this approach can be fine, but as the number of variants grows, the test suite becomes less readable. During code review, or when implementing new requirements, a developer may struggle to check whether all the edge cases are thoroughly covered.

A simple solution is to move the context outside of the test message. This approach originates from the Behavior-Driven Development (BDD) process, which organises tests into specifications based on behaviour rather than on simple inputs. It is used by many test frameworks, such as RSpec.

RSpec provides an additional keyword, context, which is used instead of describe to group tests that exercise one piece of functionality under the same state. In Jest we can simply stay with describe alone, but optionally we can add a context global using the Jest Plugin Context package.

 

Assume we have a method that checks if the user's age is above 18 (isAdult). A typical Jest test suite would look like this:
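A minimal sketch of such a suite (the isAdult implementation shown here is assumed for illustration):

// assumed implementation, for illustration only
const isAdult = (age: number): boolean => age >= 18;

describe('isAdult', () => {
  it('returns true when age is 18 or above', () => {
    expect(isAdult(18)).toBe(true);
  });

  it('returns false when age is below 18', () => {
    expect(isAdult(17)).toBe(false);
  });
});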

OK, but what happens if the age of majority depends on where you are? We need to add new test cases and adjust the test messages:
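A sketch of how the flat suite tends to grow (the country parameter is hypothetical, and 'XX' stands for an assumed jurisdiction where the age of majority is 21):

describe('isAdult', () => {
  it('returns true when age is 18 or above and the country is Poland', () => {
    expect(isAdult(18, 'PL')).toBe(true);
  });

  it('returns false when age is below 18 and the country is Poland', () => {
    expect(isAdult(17, 'PL')).toBe(false);
  });

  it('returns true when age is 21 or above and the country sets the age of majority at 21', () => {
    expect(isAdult(21, 'XX')).toBe(true);
  });

  it('returns false when age is below 21 and the country sets the age of majority at 21', () => {
    expect(isAdult(20, 'XX')).toBe(false);
  });
});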

 

Now our test messages are getting quite long and not so easy to read. In business reality, such a method may need to handle several more cases, like expiry, uniqueness checks, or other business validations. Such a test suite structure is not maintainable in the long run, or it will end up with messages that do not explain what is really happening in the test. Luckily, we can refactor this:
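A sketch of the refactored suite, with the context moved into nested describe blocks (same assumed isAdult and placeholder country as above):

describe('isAdult', () => {
  describe('when the country is Poland', () => {
    describe('when age is at or above the age of majority (18)', () => {
      it('returns true', () => {
        expect(isAdult(18, 'PL')).toBe(true);
      });
    });

    describe('when age is below the age of majority', () => {
      it('returns false', () => {
        expect(isAdult(17, 'PL')).toBe(false);
      });
    });
  });

  describe('when the country sets the age of majority at 21', () => {
    describe('when age is at or above the age of majority', () => {
      it('returns true', () => {
        expect(isAdult(21, 'XX')).toBe(true);
      });
    });

    describe('when age is below the age of majority', () => {
      it('returns false', () => {
        expect(isAdult(20, 'XX')).toBe(false);
      });
    });
  });
});

Each test message now reads as a sentence built from its surrounding describe blocks, so the context no longer has to be repeated in every message.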


 

2. Use beforeEach blocks to set up prerequisites from describe blocks

 

In the classic Jest test suite structure, beforeEach blocks are used to set up test prerequisites, such as preparing the database. However, they are not used as much as they could be. Moreover, the developer can leverage the BDD structure described above to combine describe and beforeEach blocks. Let's imagine a scenario:

 

  • The user wants to post a job offer

  • The job offer has some validations, e.g. it needs a salary range and a proper description

  • Only a specific type of user can post a role – this validation is handled by a service, which is mocked in the test

 

Here is how this test would look in the classic Jest structure:
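A sketch of the classic, flat version (createJobOffer, permissionsService, the user object and the error messages are hypothetical names used only for illustration):

// hypothetical modules
import { createJobOffer } from './createJobOffer';
import { permissionsService } from './permissionsService';

describe('createJobOffer', () => {
  const user = { id: 1, name: 'John' };

  it('creates a job offer when the user can post and the data is valid', async () => {
    jest.spyOn(permissionsService, 'canPostJobOffer').mockResolvedValue(true);
    const jobOffer = { description: 'Senior TypeScript Developer', salaryRange: { min: 1000, max: 2000 } };

    await expect(createJobOffer(user, jobOffer)).resolves.toBeDefined();
  });

  it('throws a validation error when the user can post but the salary range is missing', async () => {
    jest.spyOn(permissionsService, 'canPostJobOffer').mockResolvedValue(true);
    const jobOffer = { description: 'Senior TypeScript Developer', salaryRange: undefined };

    await expect(createJobOffer(user, jobOffer)).rejects.toThrow('salary range is required');
  });

  it('throws an error when the user is not allowed to post job offers', async () => {
    jest.spyOn(permissionsService, 'canPostJobOffer').mockResolvedValue(false);
    const jobOffer = { description: 'Senior TypeScript Developer', salaryRange: { min: 1000, max: 2000 } };

    await expect(createJobOffer(user, jobOffer)).rejects.toThrow('user cannot post job offers');
  });
});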

As we can see, this code contains a lot of duplication, and with more attributes to validate during job offer creation it would become difficult to read. Let's refactor it using the patterns described above:
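A sketch of the refactored suite, under the same hypothetical names as in the previous example:

describe('createJobOffer', () => {
  const user = { id: 1, name: 'John' };
  let jobOffer: { description: string; salaryRange?: { min: number; max: number } };

  beforeEach(() => {
    // valid default, overridden in nested contexts
    jobOffer = { description: 'Senior TypeScript Developer', salaryRange: { min: 1000, max: 2000 } };
  });

  describe('when the user is allowed to post job offers', () => {
    beforeEach(() => {
      jest.spyOn(permissionsService, 'canPostJobOffer').mockResolvedValue(true);
    });

    describe('when the job offer data is valid', () => {
      it('creates the job offer', async () => {
        await expect(createJobOffer(user, jobOffer)).resolves.toBeDefined();
      });
    });

    describe('when the salary range is missing', () => {
      beforeEach(() => {
        jobOffer.salaryRange = undefined;
      });

      it('throws a validation error', async () => {
        await expect(createJobOffer(user, jobOffer)).rejects.toThrow('salary range is required');
      });
    });
  });

  describe('when the user is not allowed to post job offers', () => {
    beforeEach(() => {
      jest.spyOn(permissionsService, 'canPostJobOffer').mockResolvedValue(false);
    });

    it('throws an authorization error', async () => {
      await expect(createJobOffer(user, jobOffer)).rejects.toThrow('user cannot post job offers');
    });
  });
});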

As we can see, the beforeEach blocks and constants are linked to what is described in the corresponding describe blocks. This increases test readability and makes the suite easier to maintain.

 
 

3. Use default params and override only the changing attributes


In the test example above we still have one problem – for each test we define a new jobOffer object. With just 2 attributes this is not a problem, but we can easily imagine many more fields, resulting in an interface like this:
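A hypothetical interface of that kind (fields are illustrative):

interface JobOffer {
  title: string;
  description: string;
  // allowed to be undefined so that tests can omit it
  salaryRange: { min: number; max: number } | undefined;
  currency: string;
  location: string;
  remote: boolean;
  seniority: 'junior' | 'mid' | 'senior';
  skills: string[];
  benefits: string[];
  expiresAt: Date;
}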

With such a structure, copying the object and changing one attribute per test would be highly unreadable: it is difficult to spot which attribute actually differs, and the tests get longer. The solution is to have a default constant fulfilling the interface and override only the attribute the test refers to. Let's see an example:
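A sketch under the same assumptions as the earlier examples – a defaultJobOffer constant fulfils the interface, and each test overrides only the attribute it is about:

const defaultJobOffer: JobOffer = {
  title: 'Software Engineer',
  description: 'Senior TypeScript Developer',
  salaryRange: { min: 1000, max: 2000 },
  currency: 'EUR',
  location: 'Warsaw',
  remote: true,
  seniority: 'senior',
  skills: ['TypeScript', 'Jest'],
  benefits: ['private healthcare'],
  expiresAt: new Date('2030-01-01'),
};

describe('when the salary range is missing', () => {
  beforeEach(() => {
    // only the attribute under test differs from the default
    jobOffer = { ...defaultJobOffer, salaryRange: undefined };
  });

  it('throws a validation error', async () => {
    await expect(createJobOffer(user, jobOffer)).rejects.toThrow('salary range is required');
  });
});

The attribute that differs from the default is now immediately visible at the top of each context block.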

4. Use factories for creating objects or records in DB


One of the things I noticed when I switched from Ruby on Rails (RoR) to NodeJS is the different way database records are populated in tests. In RoR the default is to use factories via the FactoryBot gem, while NodeJS favours manual SQL inserts.

Using factories offers many benefits over manual SQL inserts:

  • Requires less code and is more DRY

  • Developer doesn’t need to care about all required fields in the SQL table (e.g. created_at will be filled automatically), only those that are tested

  • No need to rewrite tests after adding new fields

  • Offers high readability with traits and default values

  • Type checks (when using TypeScript)

  • Automatic creation of related records in other tables

Luckily, Thoughtbot offers a FactoryBot-style implementation for JavaScript – Fishery. In my current project we use it with great success, saving thousands of lines of code. Using factories provides a similar effect to using default params: in tests you only specify the attributes you care about in a given scenario.
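A minimal sketch of a Fishery factory (the User type and its fields are assumptions made for the example):

import { Factory } from 'fishery';

// hypothetical domain type – adjust to your model
interface User {
  id: number;
  email: string;
  role: 'candidate' | 'employer';
}

const userFactory = Factory.define<User>(({ sequence }) => ({
  id: sequence,
  email: `user-${sequence}@example.com`,
  role: 'candidate',
}));

// in a test, override only what matters for the scenario
const employer = userFactory.build({ role: 'employer' });
const candidates = userFactory.buildList(3);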

Conclusions


As I stated above, the most important part of writing good tests (apart from actually testing the code) is readability. Other developers on your team need to understand which scenarios are covered by your tests and what may be missing. This helps them add new tests when requirements change and spot missing edge cases. In this article, I've shown some tricks that we use at Exlabs to write cleaner and more maintainable tests with the Jest testing framework.

Mastering Cloud Cost Analysis with Metabase:
Leveraging Open-Source Business Intelligence for Effective Cloud Expense Management

Krzysztof Szromek CEO

Every cloud provider supplies distinct services to manage cloud expenses, such as AWS Cost Management, Azure Cost Management, or GCP Cloud Billing. At the moment, these tools offer a range of robust features like comprehensive cost breakdowns, forecasting, budgeting, anomaly identification, and even cost-cutting suggestions. However, can they fulfill all your requirements?

 

We think not. The following are circumstances where tools provided by cloud service providers may fall short:

  • Monitoring the expenses of multi-cloud or hybrid-cloud applications from a central location.
  • Accessing historical data – most tools from cloud service providers only provide a one-year history.
  • Circumventing tool limitations – for example, in Azure, costs can only be displayed for each subscription separately. If you’re operating under a pay-as-you-go billing model, you cannot consolidate and display costs for all your subscriptions. This can be burdensome for customers who use subscriptions for resource grouping.
  • Carrying out more complex cost analyses like displaying trend lines or comparing the current month to previous ones.

With the increasing attention to the FinOps trend and focus on cloud cost optimisation, it’s becoming evident that cloud service providers’ cost analysis tools are sometimes lacking.

The Fundamental Solution: Excel

For organisations beginning their journey into cloud cost analysis, Excel often seems like the obvious solution. However, while it may be a suitable solution for some scenarios, it comes with its own set of challenges:

🚧 Limited scalability: Excel is not built to handle large datasets or intricate calculations. As cloud computing environments expand, the data volume and complexity of calculations required for cost analysis may quickly outstrip Excel’s capabilities, resulting in errors and data inconsistencies.

🛠️ Manual data entry: Excel necessitates manual data input, which can be both time-consuming and error-prone. Guaranteeing accuracy and consistency across multiple sheets becomes progressively difficult when dealing with large quantities of data.

👥 Lack of automation: Excel does not support automated tasks, such as retrieving data from cloud cost management tools, scheduling data updates, or generating reports. This leads to a higher likelihood of errors and delays in data analysis.

🧍‍♂️ Limited collaboration: Only one user can edit an Excel file at a time, which poses challenges for collaborative work on cloud cost analysis projects. Version control can also become problematic, leading to confusion and errors when multiple users make changes to the same file.

🔐Security risks: When dealing with sensitive data, it’s vital to exercise caution when sharing or copying Excel files, as this can lead to data breaches. Unauthorised access or accidental data leaks can have severe consequences.

Introducing Metabase

Metabase is an open-source business intelligence and data analytics tool that enables users to conveniently visualise, analyse, and share data. It offers an intuitive interface that assists non-technical users in quickly creating and running queries, as well as crafting custom dashboards and reports.

With Metabase, you can establish a dedicated dashboard for different stakeholders at varying abstraction levels. For instance, a high-level view can present monthly data and cost trends for the CFO, while a low-level view can display specific resources managed by an engineering team, thereby promoting cost optimisation.

Using this BI tool, cost data export can be automated, reducing error risk and the need for manual data export and processing. Fetching data from various sources is no longer a problem, and report generation can also be automated. Issues such as scalability and version control are addressed as well. Lastly, security is significantly more robust than in Excel.

Even users with limited technical skills can use Metabase. It doesn’t take an expert to create a dashboard like the one below.

Despite all these benefits, Metabase has one significant limitation – it does not directly integrate with the cost services of cloud providers.

Accessing Cloud Cost Data

To harness the capabilities of Metabase, you need to export the cost data to an SQL database or another database type supported by Metabase.

The diagram below provides an overview of the basic data pipeline we use at Exlabs for handling Azure cost data. For other clouds, you would need to use equivalent services.

Here’s a brief rundown of the main components:

  1. Azure Cost Management 🧮 – This service exports cost data to a CSV file. Each subscription’s data must be shipped separately. You can automate this process by selecting daily, weekly, or monthly export types. Historical data can also be exported, but only for the last three months! The generated CSV files are delivered to an Azure Storage Account. 
  2. The next step involves Azure Data Factory💾 – a service that processes CSV files and places the data in the database. This processing includes renaming a few columns, converting variables, and ensuring the cost data won’t be duplicated. We strongly recommend using Triggers to automatically execute data processing after a new CSV file is uploaded to the Storage Account.
  3. A database📂 is the final step of the pipeline. Any engine supported by Metabase can be used. To minimize costs, you may want to use a Serverless deployment.
  4. The last piece of the puzzle is Metabase👁️‍🗨️ itself. The simplest option is to use Metabase’s cloud offering, which currently starts at $85. However, as Metabase is open-source, you’re free to host it on your own VMs. Once Metabase is connected to the database, you’re ready to start posing FinOps questions and creating dashboards and reports.

Your Move!

Metabase is a robust tool that can assist you in understanding your cloud costs. Now it’s your turn to harness its power!

This article outlines the approach we employ at Exlabs but glosses over the technical details that may be challenging. Check out our webinar “Serverless🚀Right fit for Your Next Project?💰 A Cost Evaluation” for more details.

Here’s what we’ll explore together:

  • 💰 The ins and outs of serverless and how it differs from just AWS Lambda
  • 🧮 Analyzing whether serverless is always a cost-effective choice
  • 📊 Understanding the components of serverless infrastructure costs
  • 🔍 Comparing serverless maintenance costs to VMs
  • 🏆 Sharing serverless success stories, and lessons from its failures
  • 🚧 Guiding you to evaluate if serverless is a cost-effective choice for your use-case

If you find yourself stuck, don’t hesitate to get in touch. We’re eager to assist you with the implementation. Conversations around FinOps are also always welcome.

🧭🌱 Scaling the IT Ladder:
Tactics for Growth and Waste Minimization

Krzysztof Szromek CEO

Are you on a journey to scale your IT team? Whether you’re a blossoming startup, a seasoned corporation, or somewhere in between, our insights are here to assist. We’ll navigate you through the intricate maze of team expansion, waste reduction, common stumbling blocks, and challenges intrinsic to IT team scaling.

The scaling journey is a mixed bag of challenges and opportunities. Companies must tackle typical hurdles such as insufficient planning, resource misallocation, and poor infrastructure design. Concurrently, businesses can exploit strategic initiatives and emerging technologies to maximize growth during the scaling process. Identifying potential weaknesses and harnessing strengths is a must for successful IT scaling.

Clarity on the challenges and opportunities associated with scaling is vital to ensure that operations continue to be efficient and cost-effective. With meticulous planning, competent personnel, and robust infrastructure, businesses can make the most out of the scaling process to attain their goals.

Challenges

Scaling operations present a host of challenges for IT teams. Poor planning can result in resource misallocation and spiraling costs. Without thorough research and planning, companies may struggle to keep up with demand while wasting valuable resources. Additionally, inadequate infrastructure design can cause service delivery delays and disruptions as systems become overwhelmed or incompatible.

Insufficient planning significantly contributes to resource wastage during the scaling process. Companies may lack an accurate understanding of their needs, leading to investments in superfluous technology or personnel that are not required to achieve their objectives. Moreover, without proper planning, companies lose sight of potential opportunities and new technologies that could enhance operations and efficiency.

The consequences of inadequate infrastructure and resource allocation can be severe. Companies may struggle to meet demand due to a lack of capacity or resources. Moreover, misallocation of personnel can cause service delivery delays as teams are stretched thin or uncertain about their responsibilities.

Strategies for Effective Scaling

Navigating pitfalls and reducing waste during scaling operations can be complex. To ensure success, companies must devise a comprehensive plan that tackles the challenges associated with scaling. This includes researching potential technologies, outlining resource needs and requirements, and strategizing personnel allocation. Additionally, businesses should regularly assess their operations to identify any potential areas for improvement and growth.

Real-world examples can offer valuable insights into the successful application of these strategies. For instance, one company developed an all-encompassing strategy to enhance their infrastructure scalability, aiming to cater to heightened customer demand without compromising on quality or performance. They accomplished this by adopting technologies like cloud computing and virtualization, and by introducing more efficient processes and procedures.

Moreover, businesses should scout for opportunities to get the most out of their scaling investments. This involves identifying potential growth areas, such as new markets or services, and allocating resources to them judiciously. Companies must also concentrate on propelling business growth by improving customer satisfaction and generating new revenue streams. With careful planning and strategic investments, businesses can fully harness their scaling initiatives to achieve their objectives.

Final Thoughts

Scaling in the IT sector is a crucial process for fueling team growth and unlocking new opportunities. For IT leaders, being cognizant of potential pitfalls, challenges, and opportunities intrinsic to scaling operations is paramount to maximize efficiency and minimize waste. Through strategic planning and investments in cutting-edge technologies, businesses can navigate the scaling process successfully.

Don’t Miss Our Upcoming Webinar on Efficient Teams Scaling!

Looking for a way to scale your IT team without wasting resources? Join us for an upcoming webinar Mastering the Scale 🚀Growing IT Teams and learn how to grow your team without sacrificing efficiency or success!

We’ll arm you with strategies to navigate scaling challenges, prevent productivity losses, and foster a dynamic culture of collaboration. Empower your team with clear roles, and achievable goals for improved performance.

Here’s what we’ll explore together:

  • 🌱 The intricate dynamics of scaling IT teams
  • ⚙️ Tactics to circumvent common productivity pitfalls
  • 🎯 Expert methods for setting clear and achievable goals
  • 🌍 Real-life examples of cultivating a culture of collaboration and knowledge sharing
  • 🤝 Best practices for assembling cohesive, high-performing IT teams
  • 👥 Team roles and responsibilities that promote efficiency

Don’t let this opportunity slip by to learn how to eliminate waste, boost productivity, and drive success with servant leadership. Register for our upcoming webinar today!

Establishing Clear Roles and Responsibilities:
A Key to Optimizing IT Team Performance

Krzysztof Szromek CEO

Understanding roles and responsibilities in IT teams is non-negotiable for achieving organizational success. Without this clarity, team members risk facing task overload and confusion, causing inefficiency, project hold-ups, wasted resources, and a dip in morale. However, a sharp focus on clearly defined roles propels IT teams to operate at their peak. Well-articulated roles spark team growth, curb waste, instil accountability, and sharpen decision-making prowess.

 

TL;DR 📝

 
  • 🚀 Clear roles and responsibilities are fundamental for IT team efficiency
  • 🎯 Uncertainty in roles can result in resource wastage and lower morale
  • 🌱 Defined roles promote team growth and seamless collaboration
  • 🗑️ Waste reduction is achievable through role clarity and task delineation
  • 📊 Accountability and decision-making are by-products of clear roles
  • 🔧 Use role clarification tools, such as job analyses and task lists
  • 👥 Robust leadership is key to setting expectations and responsibilities

 

Understanding the Ripple Effect of Ambiguity on IT Teams

 

A well-defined structure of roles and responsibilities within an IT team not only promotes growth opportunities but also minimizes resource wastage, encourages accountability, and strengthens decision-making processes. It’s a secret sauce to enhance productivity and streamline operations within an organization.

By making each team member’s responsibilities clear, you ensure everyone is equipped with the right tools to achieve their goals. This improves resource allocation, mitigates stress, and enhances job satisfaction. Moreover, when every team member knows what they are accountable for, they can bring their unique skills to the table, fostering effective collaboration and decision-making. In a nutshell, clarity in roles and responsibilities fuels team productivity and efficiency.

Conversely, lack of clarity can lead to a pile-up of tasks beyond a team member’s scope, resulting in confusion, inefficiency, and project delays. It can also spike wastage of resources and erode team morale. The presence of ambiguity can cripple productivity as employees grapple with understanding their tasks and objectives. This breeds frustration and dissatisfaction, and may even undercut accountability, hampering decision-making and overall job satisfaction. Therefore, to ensure tasks are efficiently allocated and success is collectively achieved, clarity in roles and responsibilities is vital.

Advantages of Well-Defined Roles and Responsibilities

 

Understanding the advantages of well-defined roles and responsibilities in an IT team is crucial for fostering team growth, eliminating wastage, and promoting accountability and decision-making.

🌱 Fostering Team Growth

Clear roles and responsibilities pave the way for efficient and productive team growth. By drawing distinct boundaries, team members can easily identify their tasks, ensuring they have the necessary tools to deliver. This fosters better team coordination, as each member understands their specific duties and can work in harmony with others to complete tasks efficiently. Role clarity can be maintained with ongoing training and development activities, keeping everyone up-to-date with any changes or updates.

🗑️ Eliminating Wastage

Ambiguity in roles can lead to resource wastage as team members may grapple with tasks beyond their scope. By drawing clear lines of responsibilities, you ensure everyone knows their job description and can fulfill their tasks competently. Open communication is key to reducing ambiguity and waste, as it facilitates a clear understanding of expectations, streamlining workflow, and promoting efficient resource use.

📊 Promoting Accountability and Decision-Making

Clearly outlined roles and responsibilities are crucial to instilling accountability within a team. When each member knows their tasks, progress tracking becomes easier and areas needing additional support can be quickly identified. This fosters a sense of ownership and responsibility, driving success. Additionally, role clarity can enhance decision-making by providing insights into the available resources and how they can be used most efficiently.

Strategies to Establish Clear Roles and Responsibilities

 

To ensure success, it is crucial to establish clear roles and responsibilities within an IT team. Some strategies include setting expectations early on, maintaining open communication between team members and leaders, and conducting regular feedback and performance assessments to identify areas needing improvement.

Role clarification tools such as job analyses, job descriptions, and task listings can also be employed to provide clarity on each team member’s responsibilities. Techniques such as brainstorming or problem-solving can be used to understand how roles can be better aligned to accomplish team goals.

Leaders play a pivotal role in setting clear expectations and responsibilities. They must ensure that every team member understands their job scope, tasks, and provide regular feedback and support to hold them accountable. Communication is the key to role clarity, allowing leaders to guide the team on optimal resource use and workflow coordination.

In Conclusion

 

Clear roles and responsibilities within IT teams are instrumental to team growth and organizational success. By minimizing ambiguity through effective communication of expectations, organizations can eradicate waste, encourage accountability, and enhance decision-making processes, leading to improved performance. A clear definition of roles and responsibilities for each team member paves the way for meaningful collaboration and heightened efficiency. This leads to the development of a more successful team and organization as a whole.

Strong leadership is paramount to success in IT teams. Ensuring that each team member comprehends their responsibilities, the essence of teamwork, and their role within the grand scheme of things can significantly contribute towards achieving organizational goals. With thoughtful collaboration and a clear understanding of roles and responsibilities, teams can unlock their potential and foster success.

Don’t Miss Our Upcoming Webinar on Efficient Teams Scaling!

 

Looking for a way to scale your IT team without wasting resources? Join us for an upcoming webinar Mastering the Scale 🚀Growing IT Teams and learn how to grow your team without sacrificing efficiency or success!

We’ll arm you with strategies to navigate scaling challenges, prevent productivity losses, and foster a dynamic culture of collaboration. Empower your team with clear roles, and achievable goals for improved performance.

Here’s what we’ll explore together:

  • 🌱 The intricate dynamics of scaling IT teams
  • ⚙️ Tactics to circumvent common productivity pitfalls
  • 🎯 Expert methods for setting clear and achievable goals
  • 🌍 Real-life examples of cultivating a culture of collaboration and knowledge sharing
  • 🤝 Best practices for assembling cohesive, high-performing IT teams
  • 👥 Team roles and responsibilities that promote efficiency

Don’t let this opportunity slip by to learn how to eliminate waste, boost productivity, and drive success with servant leadership. Register for our upcoming webinar today!

👨‍💼🔑 Empowering IT Teams through Servant Leadership: A Strategy to Boost Growth and Reduce Waste

Krzysztof Szromek CEO

As an IT leader, your task is to steer your teams to success by employing efficient and effective strategies. Servant leadership is a potent approach that can assist you in achieving this objective. This leadership style, which prioritizes employees and seeks to unlock their potential, fosters a culture of collaboration and innovation. Consequently, it fuels team growth and curtails wasteful practices. This article will delve into the principles of servant leadership and their application in IT management, focusing on how this approach can stimulate team growth and minimize waste.

 

TL;DR 📝

  • 🎯 Servant leadership, characterized by prioritizing employee needs and fostering a collaborative environment, significantly enhances team growth and waste reduction in the IT sector.
  • 🌱 Encourages open communication, individual growth, and a collective approach to achieving goals.
  • 💡 Results in increased productivity and higher levels of organizational success.
  • 👥 Empowers employees, nurtures talent, and fosters a supportive and collaborative IT environment.
  • 🔄 Creates a cycle of continuous learning and improvement, eliminating wasteful practices and boosting efficiency.

Decoding Servant Leadership

Servant leadership is a managerial approach that prioritizes the needs of employees, empowering them to realize their full potential. It fosters collaboration, creativity, and support, and entrusts employees with the freedom to take risks and make decisions. By cultivating an environment of support where individuals can flourish and feel valued, servant leadership encourages engagement and open communication, which translates into team growth and waste reduction.

Servant leadership challenges the conventional top-down hierarchies common in many organizations. It stands in stark contrast to authoritarian leadership, which demands absolute control and decision-making power from leaders. Servant leadership aligns with ethical leadership, emphasizing respect for others’ needs and values. It also shares facets with participative leadership, involving subordinates in goal-setting, team building, and problem-solving while retaining the final authority in decision-making.

Key Traits of Servant Leaders

There are several key traits of servant leaders, commonly listed as: listening, empathy, healing, awareness, persuasion, conceptualization, foresight, stewardship, commitment to the growth of people, and building community.

Servant Leadership: A Catalyst for Team Growth?


Servant leadership can be instrumental in creating a supportive and collaborative IT environment. Instead of issuing top-down directives, servant leaders foster a culture of collaboration and support that promotes creativity, risk-taking, and growth. By listening, empathizing, healing, and persuading, they instill trust among team members, recognizing each person’s worth and respecting their autonomy.

Servant leaders empower and engage employees by providing the necessary support to take ownership of tasks and make decisions. This sense of accountability can improve performance and open avenues for learning and development. They also facilitate open communication within teams, promoting the free exchange of ideas and open discussions on mistakes.

Furthermore, servant leaders foster learning opportunities and nurture talent. Believing in the inherent value of each individual, they offer coaching and mentoring to help employees develop their skills and explore new career paths. By providing development opportunities, servant leaders create a motivating and enriching environment for the team.

Servant leadership is fundamentally about prioritizing people and bringing out their best. By implementing these principles in the IT industry, managers and leaders can stimulate team growth and curb wasteful practices, thereby ensuring overall organizational success.

Servant Leadership as a Waste Reduction Strategy


Servant leadership also plays a pivotal role in waste reduction by enhancing performance measurements, boosting productivity, increasing retention, and fostering greater cohesion. By focusing on individual strengths and collaborative opportunities rather than a top-down approach, servant leaders encourage employees to strive for excellence. This results in improved performance and a more efficient allocation of resources.

Servant leaders promote a culture where the team works collectively towards common goals and shares ideas openly and honestly. This enables the team to identify and rectify wasteful practices that may be obstructing progress, leading to increased productivity and an improved quality of work. Moreover, by cultivating an atmosphere of trust and respect, servant leaders can retain their top talent and minimize staff turnover.

Ultimately, the goal of servant leadership is to foster an environment where everyone can work collaboratively to achieve the desired outcomes.

Conclusion


To sum up, servant leadership is an exceptionally effective approach to IT management that encourages team growth and reduces wasteful practices. By fostering collaboration, creativity, and empowering employees to realize their full potential, servant leaders create an environment of trust and respect that enhances productivity, increases retention, and fosters learning opportunities. Thus, IT managers and leaders can greatly benefit from implementing the principles of servant leadership to stimulate team growth.

By doing so, they can create a workplace where everyone collaboratively works towards achieving their desired outcomes, leading to greater organizational success.

Don’t Miss Our Upcoming Webinar on Efficient Teams Scaling!


Looking for a way to scale your IT team without wasting resources? Join us for an upcoming webinar Mastering the Scale 🚀Growing IT Teams and learn how to grow your team without sacrificing efficiency or success!

We’ll arm you with strategies to navigate scaling challenges, prevent productivity losses, and foster a dynamic culture of collaboration. Empower your team with clear roles, and achievable goals for improved performance.

Here’s what we’ll explore together:

  • 🌱 The intricate dynamics of scaling IT teams
  • ⚙️ Tactics to circumvent common productivity pitfalls
  • 🎯 Expert methods for setting clear and achievable goals
  • 🌍 Real-life examples of cultivating a culture of collaboration and knowledge sharing
  • 🤝 Best practices for assembling cohesive, high-performing IT teams
  • 👥 Team roles and responsibilities that promote efficiency

Don’t let this opportunity slip by to learn how to eliminate waste, boost productivity, and drive success with servant leadership. Register for our upcoming webinar today!

⏩ How to Speed Up Existing Azure Infrastructure Migration to Terraform? Discover our Time-Efficient Solution - Bid Farewell to Manual Configuration! 🚀

Ewa Kowalska Backend Developer @ Exlabs

Terraform supports importing infrastructure into its state out of the box, but it’s up to the user to provide the proper configuration code for each resource that should be managed. Translating an existing Azure infrastructure into Terraform configuration can be challenging and laborious, not only for a beginner in Azure Cloud Services – sometimes it’s over a hundred resources that need to be imported!

Luckily, Azure provides a tool facilitating that effort – aztfexport – which significantly accelerated the import process in our case. Though there were some limitations of the tool to overcome, eventually the migration was completed with success 🎉

Legacy Infrastructure

The following diagram pictures the infrastructure we dealt with. It consists of several high-level Azure resources: 4 Function Apps, an App Service, a Service Bus, a Cosmos DB, an Application Insights instance and a Key Vault.

As it turned out later, that infrastructure is represented by 102 Terraform resources – a significant number to process even with the help of the aztfexport tool, not to mention approaching it manually.

Aztfexport limitations

 

The aztfexport tool generates configuration code along with a Terraform state file that reflects the current state of the infrastructure, so, in theory, the infrastructure can be managed by Terraform right away. At the same time, the tool does not aim at reproducibility. Achieving reproducibility required additional adjustments to the generated code.

The snippet below shows the code generated for the Application Insights along with its alert rule, configured to track anomalies. It illustrates some of the issues we encountered that needed to be resolved. For the sake of the example, some sensitive values were replaced with dummy ones.

resource "azurerm_resource_group" "res-0" {
  location = "northeurope"
  name     = "resource-group-name"
}
resource "azurerm_monitor_smart_detector_alert_rule" "res-219" {
  detector_type       = "FailureAnomaliesDetector"
  frequency           = "PT1M"
  name                = "alert-rule"
  resource_group_name = "resource-group-name"
  scope_resource_ids  = ["id-of-res-220-application-insights-in-plain-text"]
  severity            = "Sev3"
  action_group {
    ids = ["action-group-resource-id"]
  }
  depends_on = [
    azurerm_resource_group.res-0,
  ]
}
resource "azurerm_application_insights" "res-220" {
  application_type    = "web"
  location            = "northeurope"
  name                = "application-insights"
  resource_group_name = "resource-group-name"
  sampling_percentage = 0
  workspace_id        = "some-resource-id"
  depends_on = [
    azurerm_resource_group.res-0,
  ]
}
As you may notice, the dependencies between resources are insufficient. Some are present – both resources refer to the res-0 resource group via the depends_on clause. However, the code of the alert rule (res-219) precedes the code of the Application Insights it should refer to (res-220), and no reference between them is present – neither a depends_on clause pointing to the Application Insights, nor a use of one of its attributes. In the scope_resource_ids attribute, the alert rule references the Application Insights by its current id in plain text. When attempting to reproduce resources from this configuration, the id of the newly created Application Insights would change, the plain-text value would no longer be valid, and as a result the process would fail.
 
Another inconvenience was that all the code was flattened into a single file and made no use of modules. Resource naming was also hard to maintain – all of the resources were named following the convention res-0, res-1, and so on. In such a form, the configuration does not support scaling the infrastructure and is hard to understand. When renaming resources or introducing modules, the generated state file becomes unusable – it cannot be modified manually and instead has to be altered with terraform state commands that move each resource one by one, which is a problem when dealing with a significant number of resources.
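For reference, moving a single resource to a module address looks roughly like this (both addresses are placeholders, not taken from the real state file):

terraform state mv 'azurerm_application_insights.res-220' 'module.insights.azurerm_application_insights.this'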
 

Adopted approach

  1. Generate a configuration for the desired resource group. It should be output into a separate directory and not pushed to a remote repository right away, as the output contains sensitive data and secrets.
  2. Recreate necessary dependencies and remove sensitive data. Pick the configuration of a high-level resource you want to track (like a Function App), and then include the configuration of all resources it depends on (e.g. Storage Account, Service Plan). In the meantime, hide exposed sensitive data behind Terraform variables. This step often involves checking the Azure Portal to determine which resource property is being referenced. For example, given a connection string in plain text, decide whether it’s the database’s primary or secondary connection string.
  3. Organise connected resources into modules. For example, group the resources of a single Function App into a module, and rename the resources. Within a separate module, resource naming can be more straightforward and shorter than in a single file gathering everything, where you need to differentiate the resources of several Function Apps.
  4. Manually import each resource into Terraform state with the terraform import command. This was the most laborious step. Luckily, aztfexport outputs a mapping of generated resource names to their ids, which speeds up the import process – while you need to figure out the new name, the id is already provided (see the sketch below).
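A single import, under the naming used in the refactored example below, might look roughly like this (the module address and the Azure resource id are placeholders):

terraform import 'module.insights.azurerm_application_insights.this' \
  '/subscriptions/<subscription-id>/resourceGroups/resource-group-name/providers/Microsoft.Insights/components/application-insights'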
This approach was repeated for all resource groups containing resources that needed to be managed. After applying it to the example shown earlier, the configuration takes the following form:

resource "azurerm_resource_group" "this" {
  location = "northeurope"
  name     = "resource-group-name"
}
resource "azurerm_application_insights" "this" {
  name                = "application-insights"
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location
  application_type    = "web"
  sampling_percentage = 0
  workspace_id        = "some-resource-id"
}
resource "azurerm_monitor_smart_detector_alert_rule" "this" {
  name                = "alert-rule"
  resource_group_name = azurerm_resource_group.this.name
  severity            = "Sev3"
  scope_resource_ids  = [azurerm_application_insights.this.id]
  frequency           = "PT1M"
  detector_type       = "FailureAnomaliesDetector"
  action_group {
    ids = ["action-group-resource-id"]
  }
}
 
Now the resources are kept in the correct hierarchy, and the depends_on clauses are replaced by references to resource attributes (compare the scope_resource_ids property of the alert rule in both examples).
 

Verification method

 
To verify that the adjusted configuration exactly matches the infrastructure, we agreed that the plan output by Terraform must indicate no changes to apply.
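In practice this check can be scripted with Terraform’s detailed exit codes:

terraform plan -detailed-exitcode
# exit code 0 – no changes, the configuration matches the real infrastructure
# exit code 2 – a diff exists, the configuration needs further adjustments
# (exit code 1 indicates an error)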
 

Minor failures

 
In general, the verification method was a reasonable approach.
 
There is one exception where it failed – it turned out that aztfexport produces configuration for default resources that are created automatically by the Azure provider as part of creating higher-level resources. We stumbled upon such a case with a custom hostname binding resource of a Function App that represented the default hostname. The issue surfaced only when reproducing the infrastructure from the configuration: terraform apply failed due to an attempt to duplicate an already created resource, while terraform plan did not indicate the problem.
To avoid such failures, the final configuration should be tested by recreating the infrastructure from scratch, for example using a separate subscription.
 

🎉 Success

 
In the end, the import process was successful! The Terraform plan was applied with no changes and further modifications of the infrastructure are no longer performed manually.

🛡️ 5 Strategies to Improve
Database Security on AWS

Krzysztof Szromek CEO

Safeguarding company data and maintaining a reliable infrastructure are key responsibilities for IT managers. This article presents five strategies to enhance AWS database security, from secrets rotation to adopting serverless computing. Let’s dive into fortifying your AWS database security!

TL;DR 📝

  • Fundamentals first: encryption, access control, backup, logging & monitoring
  • Secure credentials with Secrets Manager
  • Adopt serverless computing for reduced vulnerabilities
  • Detect leaks using honeypots
  • Attend AWS RDS security webinar for more insights


📚 Start with the Fundamentals – Encryption, Access Control, Backup, Logging & Monitoring

Though they may sound obvious, the essentials – encryption, access control, backups, logging, and monitoring – are often overlooked. Use robust encryption algorithms like AES-256 for data at rest and in transit. Implement proper access controls to ensure only authorized users can access the data. Conduct regular backups to protect your data in case of system failures or breaches. Lastly, employ logging and monitoring to detect any suspicious activity within your environment and take swift action when needed.

🗝️Secure RDS Credentials with Secrets Manager


Securing your database credentials using a secrets manager, such as AWS Secrets Manager or similar alternatives, is highly effective. Such a tool securely stores and manages your application credentials, removing the need to hard-code them. Instead, store the credentials as secrets and access them securely through AWS. This way, if a resource is compromised, the rest of your data remains safe. You may also want to read our CTO’s article, “An Introduction to Secrets Rotation”.

☁️ Embrace Serverless Computing for Simplified Vulnerability Management


One of the key benefits of serverless computing is that it eliminates the need for infrastructure maintenance. By shifting the responsibility of managing servers and their associated vulnerabilities to the cloud provider, you can focus on writing code and implementing application-level security measures. This approach reduces the attack surface and minimizes potential vulnerabilities that may arise from server misconfigurations or outdated software. As a result, serverless computing enables you to maintain a secure and robust environment while enjoying the benefits of simplified infrastructure management.

👀 Monitor for Credential Leaks Using Honeypots


One effective method to detect leaks is through the use of honeypots. Honeypots are decoy systems or resources designed to attract attackers and gather information about their methods and techniques. By setting up honeypots with fake credentials, you can monitor for unauthorized access attempts and gain valuable insights into potential vulnerabilities in your system. This approach enables you to identify and mitigate security threats proactively, ensuring your actual credentials and sensitive data remain well-protected.

📢 Join Our Webinar to Learn More


Don’t miss our upcoming webinar on AWS RDS database security best practices! Gain insights from our expert speakers on critical topics, including encryption, access controls, credential rotation, backups, and monitoring. Discover practical tips for enhancing your database security posture. This informative and engaging session is perfect for AWS RDS newcomers and seasoned pros alike. Register now to secure your spot!

Conclusion


Enhancing your AWS database security necessitates a multifaceted approach that combines Amazon’s built-in security features with your own robust policies. Establishing a strong foundation for security through encryption, access control, backups, logging, and monitoring is vital. Moreover, leveraging tools like secrets managers and serverless computing can further improve database security. Lastly, consistently monitoring potential credential leaks and unauthorized access enables swift mitigation of security threats.

🔒 The Essential Areas of AWS Security 🔒
What You Need to Know

Krzysztof Szromek CEO

Navigating the AWS security landscape is crucial for IT leaders. Let’s explore vital areas you need to focus on to protect your organization’s data and infrastructure in the cloud.

TL;DR 📝

  • 🚪 IAM: Control access and permissions
  • 🛡️ Network Security: Secure your cloud environment
  • 🗃️ Database Security: Protect sensitive data
  • 🔑 Secrets Rotation: Minimize unauthorized access risks
  • 💻 Code Quality: Deploy vulnerability-free applications
  • 🎯 CIS Benchmark: Adhere to industry-standard best practices


🚪 IAM: Your Gateway to Secure Access


IAM is central to any AWS security strategy. It manages access to resources in the AWS environment, letting users securely administer their accounts and permissions. IAM enables organizations to establish distinct identities for each user, offering granular control over access and actions.

IAM helps maintain compliance by assigning role-based permissions, ensuring only authorized users can access sensitive data or carry out tasks within the AWS environment. To reinforce security, implement best practices like creating unique user identities, rotating passwords, enabling multi-factor authentication (MFA), and using role-based access control (RBAC). Regularly reviewing IAM policies and audit logs is also vital.

🛡️ Network Security: Building a Fortified Cloud Environment


To ensure network security on AWS, you should use Amazon Virtual Private Cloud (VPC), security groups, network access control lists (ACLs), encryption, and monitor network activity. VPC enables you to create a private network in the AWS cloud – something that is essential for database security – while security groups and ACLs provide virtual firewalls to control traffic flow and restrict access to resources. Encryption helps protect data in transit and at rest. Monitoring network activity with AWS tools, such as Amazon CloudWatch and Amazon GuardDuty, can help detect potential security threats. By utilizing these networking tools, you can help protect your AWS network from potential threats and keep your data and applications secure.

🗃️ Database Security: Safeguarding Your Data


Database security is critical for AWS. Familiarize yourself with Amazon Relational Database Service (RDS) security features, including authentication and encryption. Use encryption at rest and in transit to protect sensitive data. Enhance database security by monitoring and logging activity, setting up alerts for suspicious behavior, and having a backup and disaster recovery plan in place.


🔑 Secrets Rotation: Keep ‘Em Moving, Keep ‘Em Safe


Rotating secrets like access keys, passwords, and certificates is crucial to minimize the risk of unauthorized access. Implement a regular rotation system, either manually or using tools like AWS Secrets Manager. Incorporate change monitoring for alerts on unexpected secret changes or unauthorized usage. Learn more on how to automate the secrets rotation process.

💻 Code Quality: Crafting Ironclad Applications


Deploying high-quality code free of vulnerabilities is essential. Adopt a “shift-left” approach to code review and testing, and consider automating risk analysis with static application security testing (SAST) tools. These actions help identify and resolve vulnerabilities before they become threats.

🎯 CIS Benchmark: Hitting the Security Bullseye


The Center for Internet Security (CIS) Benchmarks provide best practices for securely configuring cloud infrastructure in line with industry standards. Regularly use CIS Benchmarks to keep your environment secure and compliant with evolving guidelines. Leveraging CIS Benchmarks helps protect data and reduce security breach risks.

Need a helping hand? Do not hesitate to Contact Us

🌊 Riding the Wave: What Does a 23.45% CAGR for Serverless Mean for Business? 🌊

Krzysztof Szromek CEO

As businesses strive to stay ahead of the competition, they are increasingly looking for technology solutions that can save them time and money while offering increased agility. Serverless technology is an attractive option, as it enables users to focus on core activities and minimize the costs associated with running traditional IT infrastructure. Serverless, with a compound annual growth rate (CAGR) of 23.45% between 2022 and 2030, is rapidly becoming the go-to solution for businesses looking for more efficient ways to manage their IT resources.

  • Serverless technology is forecast to grow at a CAGR of 23.45% between 2022 and 2030.
  • Advantages of serverless technology include reduced costs, scalability, and faster deployment times.
  • Companies successfully implementing serverless solutions include Adobe Creative Cloud, Netflix, iRobot, Capital One, Coca-Cola, and The New York Times.
  • Alternatives include Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS), but they have higher upfront costs and require dedicated teams of engineers and designers to manage.

This article aims to provide business IT decision-makers with a comprehensive understanding of serverless technology, its advantages over other solutions, and real-life examples of businesses successfully implementing serverless solutions.

What is serverless? 💻

Serverless is a cloud computing architecture that allows businesses to access and manage IT resources without having to maintain or provision any physical or virtual servers. It is an increasingly popular solution for many companies seeking more efficient ways to manage their IT resources. The impressive CAGR of 23.45% between 2022 and 2030 further proves that this technology is here to stay.

Why do companies choose serverless solutions? 🏆

There are numerous advantages to using serverless computing that make it an attractive solution for businesses.

The first is cost efficiency. With its “pay-per-use” billing model, companies only pay for the resources they actually use, making it a more economical option than other cloud solutions. It also has lower upfront costs as there is no need to purchase hardware or software licenses. Additionally, since serverless applications can be quickly deployed in most cloud environments without needing to adjust infrastructure or configure servers, companies save time and money on setup and maintenance costs.

Another advantage of serverless is scalability. The cloud automatically provisions and scales resources according to demand, so businesses don’t need to worry about their applications crashing due to high traffic. Plus, they don’t need dedicated teams of engineers to monitor server performance and availability.

Finally, companies benefit from faster deployment times, as they can deploy serverless applications in minutes instead of hours or days. This enables rapid testing and iteration without having to provision hardware or configure servers each time.

Source: Serverless Architecture Market Size, Growth | Forecast 2030 (reportsanddata.com)

How are businesses leveraging serverless technology? 💼

Numerous companies have adopted serverless technology to power their operations, demonstrating its potential for cost efficiency, scalability, and faster deployment times.

  • Adobe Creative Cloud uses serverless computing to deliver its products and services to customers quickly and efficiently. They are able to quickly deploy new features and applications in minutes instead of days thanks to the scalability of serverless technology. See more: Adobe + AWS and the Digital Experience Journey | by Ben Tepfer | Adobe Tech Blog
  • Netflix also relies on serverless architecture for its streaming services. By leveraging the cost efficiency and scalability of serverless computing, Netflix is able to rapidly scale their resources according to demand without having to provision any physical or virtual servers.
  • iRobot is another company that is taking advantage of serverless technology. The company uses it for a range of applications from analytics platforms and data pipelines, to image recognition services. By using serverless, iRobot is able to quickly deploy applications and scale resources according to demand.
  • Capital One also relies on serverless technology for its operations. They use it to power their customer-facing applications that require low latency, high throughput, and rapid scalability. More: The serverless-first strategy experience | Capital One
  • Coca-Cola has also adopted serverless computing to create a new “smart vending” system powered by artificial intelligence (AI) and machine learning (ML). This enabled them to rapidly develop the platform without having to purchase or maintain any hardware or software licenses. More: Serverless Computing – A Coca-Cola Company Case-Study | LinkedIn
  • The New York Times uses serverless computing for its content delivery network (CDN). Serverless enables The New York Times to quickly deploy content and optimize performance, providing a seamless user experience for millions of readers around the world. See details: New York Times CTO Looks Beyond Cloud to Serverless Computing – WSJ

What are the alternatives to serverless computing? 🔄

While serverless computing offers clear advantages, there are also alternatives available. Traditional IT solutions such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) are among them.

However, these traditional solutions require more upfront setup costs and a longer time to deploy applications than serverless computing. Plus, they require businesses to maintain dedicated teams of engineers and designers to manage them. This can be expensive in the long run for companies that don’t have large budgets to invest in IT infrastructure.

In comparison, serverless technologies offer a much lower-cost solution that can be deployed quickly and easily. This makes it a great choice for businesses that need to move fast and keep costs low.

Why should businesses invest in serverless technology? 💡

In conclusion, serverless computing is a powerful technology that offers businesses numerous advantages. It provides cost efficiency, scalability and fast deployment times. Companies adopt serverless technology to power their operations – its impressive CAGR of 23.45% between 2022 and 2030 shows clearly that serverless computing is here to stay. Businesses looking to stay competitive in the market should strongly consider investing in this technology.

How to start with Serverless?🚀

Need help with your serverless journey? Our team of experts has extensive experience in serverless architecture and can provide consulting, architecture design, and implementation services to help your business achieve its goals. Do not hesitate to Contact Us.

🎯 5 things every Junior Java Developer
should learn to find the dream job

📚 Core Java concepts, 🔄 Version Control Systems
🌉 Java Frameworks, ✅ Testing, 🛠️ IDEs & Tools 

Damian Pawłowski Head of Frontend

Starting a career as a junior developer is not easy. You have to face a huge amount of knowledge and choose what is crucial to start with. Of course, as a beginner, you won’t know everything, but general knowledge is necessary to find your way around a project and to know where to look for further information. Here are five things you should try to familiarise yourself with as a junior Java developer!

 

1. 📚 Core Java concepts

Before you dive into the more complex aspects of Java programming, it’s important to have a solid understanding of the core concepts and fundamentals. This includes things like variables, data types, operators, control statements, and loops. Java is a strongly-typed language, which means that variables must be declared with a specific data type. Almost everything in Java is an object, so you should also understand the basics of object-oriented programming, including classes, objects, methods and encapsulation. As if that wasn’t enough – don’t forget about inheritance and polymorphism.

2. 🔄 Version Control Systems

Version control systems like Git are essential tools for software development, and it’s important for every developer to be proficient in using them. Version control systems allow you to keep track of changes to your code and collaborate with other developers. You should learn how to use Git to create and manage branches, commit changes, merge code, and resolve conflicts. This skill is important regardless of the programming language.

3. 🌉 Java Frameworks

To build applications more quickly and efficiently, you have to use frameworks. There are many Java frameworks available, including Spring, Spring Boot, Hibernate, and Struts. At the beginning of your career, you should pick one and start learning it. The most popular are Spring Boot and Hibernate, so they are a good choice in terms of finding a first job. Many developers say that Spring Boot is a good starting point because of its clear documentation and mechanisms that make work easier.

4. ✅ Testing

Testing is a critical part of software development. As a Java developer, you should be familiar with testing frameworks like JUnit 5 and Mockito. These frameworks allow you to write automated tests that can help detect errors early and ensure that your code works as intended. By learning to write effective tests, you can improve the quality of your code and reduce the risk of introducing new bugs – this will help you become a better developer and give you the opportunity to find a good job.

5. 🛠️ IDEs & Tools

Integrated Development Environments (IDEs) are essential tools for developers. IDEs such as Eclipse, IntelliJ IDEA, and NetBeans provide a range of features such as code completion, debugging, and project management. Learning to use an IDE effectively will save you time and help you write better code. You can also take a look at build tools like Maven or Gradle – these tools will help you automate the process of building and managing Java projects to simplify the whole development process.

The amount of knowledge at the beginning may be overwhelming, but with each new tool or functionality learned, everything will become clearer. In addition to hard skills as a junior, attitude is also important – if you want to develop and are ambitious – finding your dream job is only a matter of time.

Looking for an internship may also be a good idea – take a look at our Careers Page, maybe we are looking for you!