
Top 10 Design Tools For UX And UI (2025 GUIDE)

13/12/2022


Selecting software for UX and UI design is never easy. You want something that lets you flex the full extent of your creative muscle, but you also need a tool that opens your mind to ideas and approaches you might otherwise have missed.

And then there’s the question of how well a tool meshes with a team’s administrative procedures, what integrations it supports, and what return on investment each pricing plan delivers, among other factors. Don’t worry: below, we list the top ten UX and UI design tools to consider in 2025 and highlight their standout attributes.

Sketch

Sketch is impressive because custom grids let you easily adapt your UI designs to different target devices and screen dimensions. It also lets you reuse components across designs to maintain consistency, which is essential for branding.

Besides presets and artboards, Sketch offers pixel-level accuracy through snapping and smart guides, so there are no blemishes in your work. You’ll also benefit from its editable Boolean operations when introducing changes at different stages. Unfortunately, Sketch is only available on macOS, which can complicate collaboration in mixed-platform teams.

Sketch

Source: Sketch

Adobe XD

One standout feature of Adobe XD is 3D Transforms, which lets you present elements from specific perspectives (angles) and at varying depths. This makes it ideal for designs intended for augmented and virtual reality systems.

Additionally, Adobe XD offers expansive prototyping capabilities, enabling designers to publish and share interactive designs. With multiple animation options for the smallest components and voice prototyping, you can quickly realize a lively design.

You’ll have a prototype you can speak to, one that speaks back and makes every interaction feel like an event of its own while still belonging to a coherent whole. Thanks to Adobe XD’s assortment of UI kits, this extends to Google Material Design, Apple Design, Amazon Alexa, and many other ecosystems.

Adobe XD

Source: Toptal

Figma

Figma’s browser-based wireframing capabilities make it a go-to tool for designers who want to quickly sketch out the skeleton of a design and share it with colleagues. It also enhances collaboration by letting you leave comments on your wireframes and gather real-time feedback.

While Figma may come off as a tool best suited for presentations and brainstorming thanks to companion tools like FigJam and its drag-and-drop approach, it also lets you convert wireframes into clickable prototypes to get a taste of the intended experience.

Figma

Source: Digidop

Balsamiq

This tool offers a much leaner take on wireframing, going easy on add-ons and keeping users focused on a whiteboard- or notepad-style workflow. Still, it has numerous built-in components that you can drag and drop into your project’s workspace with minimal learning time. Balsamiq also works on both PC and Mac.

Balsamiq

Source: Balsamiq


Overflow

Overflow helps you combine designs made in various tools like Adobe XD, Sketch, and Figma to create coherent user flows when envisioning the journey through your app. You’ll also be able to add device skins.

As you draw your user flow diagrams, you can use different shapes and colors to lay out a process’s logic. Anyone viewing the diagram can easily follow it and see what happens when a particular condition is or isn’t met, and what the corresponding screen looks like. Overflow can also convert your prototype links into connectors in the diagram, so you don’t have to redo that work.

Overflow

Source: Overflow
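
To make the idea of conditional connectors concrete, here is a rough sketch, in TypeScript, of a user flow expressed as data. The types, field names, and the outcomes helper are assumptions made purely for illustration; they are not Overflow’s format or API.

// Illustrative only: a tiny data model for a conditional user flow,
// in the spirit of what an Overflow-style diagram expresses.
type Screen = { id: string; title: string };

type Connector = {
  from: string;       // screen the user is on
  to: string;         // screen the user lands on
  condition?: string; // e.g. "payment approved"; omitted for unconditional links
};

const screens: Screen[] = [
  { id: "cart", title: "Shopping Cart" },
  { id: "checkout", title: "Checkout" },
  { id: "confirmation", title: "Order Confirmed" },
  { id: "payment-error", title: "Payment Failed" },
];

const flow: Connector[] = [
  { from: "cart", to: "checkout" },
  { from: "checkout", to: "confirmation", condition: "payment approved" },
  { from: "checkout", to: "payment-error", condition: "payment declined" },
];

// List every outcome reachable from a given screen, with its condition.
function outcomes(fromId: string): string[] {
  return flow
    .filter((c) => c.from === fromId)
    .map((c) => {
      const target = screens.find((s) => s.id === c.to)?.title ?? c.to;
      return c.condition ? `${target} (when ${c.condition})` : target;
    });
}

console.log(outcomes("checkout"));
// ["Order Confirmed (when payment approved)", "Payment Failed (when payment declined)"]

Following the connectors out of “checkout” here is essentially what a stakeholder does visually when reading the colored branches of the diagram.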

FlowMapp

FlowMapp offers a more stripped-down approach to creating user flow diagrams, which makes it perfect for designers who are still in the strategizing phase and don’t yet have many complete screens to put into a diagram.

While it may seem like a rudimentary tool, FlowMapp can help you make important discoveries. For instance, you might find that some screens need to be split, with one reached via a button on the other, while others should be condensed into one because their functionality is closely related.

FlowMapp also gives a more comprehensive view, so other stakeholders like copywriters and sales executives can contribute to the UX plan with a clearer understanding of the opportunities and constraints in the journey. It’s great for deciding where to insert CTAs and supporting messages, such as anti-fraud warnings at checkout or prompts for user feedback.

FlowMapp

Framer

Framer’s code-based origins and React compatibility suit designers focused on the latest web technologies, yet it also offers user-friendly UI design tools and usability testing features.

More importantly, Framer has several plugins designers can use to embed media players, grids, and other elements that pull in content from services like Twitter, Snapchat, Spotify, SoundCloud, and Vimeo. It also offers a variety of template categories, from landing pages and splash pages to startup, photography, and agency sites.

Framer

Source: Goodgrad
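
Because Framer’s code components are built on React, designers who are comfortable with TypeScript can extend the canvas with their own prop-driven UI. The snippet below is only a minimal sketch of that idea; the component name, props, and inline styles are illustrative assumptions rather than a specific Framer API.

import * as React from "react";

// A minimal, illustrative React component of the kind a code-friendly design
// tool such as Framer can host. Names, props, and styles are assumptions
// made for this sketch, not a documented Framer interface.
export default function PromoCard({
  title = "Launch faster",
  accent = "#0099ff",
}: {
  title?: string;
  accent?: string;
}) {
  return (
    <div
      style={{
        padding: 24,
        borderRadius: 12,
        border: `2px solid ${accent}`,
        fontFamily: "sans-serif",
      }}
    >
      <h2 style={{ margin: 0, color: accent }}>{title}</h2>
      <p style={{ marginTop: 8 }}>Prop-driven components keep design and code in sync.</p>
    </div>
  );
}

Exposing props such as title and accent is what typically allows a component like this to be tweaked from a visual property panel instead of in code.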

Proto.io

Thousands of templates and digital assets, plus hundreds of UI components: that’s the starting point Proto.io gives you for bringing designs to life right in your web browser. You can also kick off prototyping by importing files from Adobe XD, Figma, Photoshop, and Sketch.

You’ll also be able to explore different results for touch events, play with many screen transitions, and utilize gestures, sound, video, and dynamic icons. Proto.io comes with mobile, web, and offline modes.

Proto.io

Source: Proto.io

Axure

Axure helps you make prototypes easier to follow by letting you insert conditional logic. The tool also encourages documenting as you work on detail-rich, high-fidelity prototypes. Coupled with the ability to test functions and generate code for handoff to developers, this lets team members review work quickly with minimal oversight and get releases ready much faster.

Axure

Source: Axure

InVision

InVision incorporates digital whiteboarding into the journey to a working prototype, which makes it great for projects where a team wants to keep ideation running concurrently with actual design work for as long as possible.

It comes with a decent list of integration capabilities, ranging from project management tools like Jira and Trello to communication tools like Zoom and Slack. You can even hook up Spotify to provide a soundtrack for members doing freehand brainstorming.

InVision

Source: Invisionapp

Wrapping Up

Every tool has pros and cons, so always consider which phase of the project a specific tool fits into, how well it brings everyone together, and how much creativity it supports. While we’ve focused on these top ten picks, many other tools could shape UI design trends in 2025, including Marvel, Origami Studio, and Webflow. For professional help in selecting the right UX and UI design tools, contact us for a free consultation.

Related Blog

Differences In UX Demands Of A Desktop And Mobile App For A SaaS Product (25/11/2022)
Top Emerging Trends In App UI Design (2025 OUTLOOK) (08/11/2022)
Best Practices for Building Reliable AWS Lambda Functions (13/01/2025)
Triggers and Events: How AWS Lambda Connects with the World (10/01/2025)