
Interesting things about Blockchain from a non-tech perspective

29/03/2023


Introduction

Blockchain became a popular keyword during the Covid period, usually mentioned alongside cryptocurrency. That association alone, however, is not enough to explain the purpose of Blockchain. So what is it, what does it do, how will it impact our business value in the future, and what should we prepare for now? These are the questions we should clarify.

Let’s explore Blockchain together from a non-tech perspective.

Blockchain definition & benefits

Source: https://www.freepik.com

The definition of Blockchain is not long, but it is not easy to understand, because much of it is written in technical terms that are hard to simplify. That is why, here, we will think about Blockchain without any technical content.

Let’s look back at some traditional businesses.

Finance industry

Source: https://www.freepik.com

Present

Imagine a national financial system. How much money is printed, how it is printed, and whether the reported numbers are really accurate can only be stored and reported in government databases.

Since this database is developed and operated by the government itself, is the information really reliable? Could the data be edited, or could cheating occur inside the system?

That is the problem: trust in published data is limited because the data is stored entirely in a single database centralized at a national agency. What we are looking for is a technology that makes this information transparent, so that money-printing and publishing data cannot be edited after it has been announced. That technology is Blockchain.

Solution

With Blockchain technology, we are not limited to a traditional database: data that should be transparent to citizens can be published into a blockchain database, where anyone can inspect the data and trust it.

By trust, we mean that no one needs to separately verify or audit this data, because Blockchain technology is designed for truth and transparency.

Once the data is published on a Blockchain, it is not stored in only one place; the same data is shared across all nodes in the Blockchain network and cannot be edited.

We call it a decentralized database.

If you need to correct something you have already published on the chain, you can only publish new data. The old data remains on the chain, and anyone who wants to see the change can do so.

The main keys in this story are trust, transparency, and a decentralized database.
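To make the idea of an append-only, hash-linked record more concrete, here is a minimal Python sketch. It is purely illustrative and assumes nothing about any real blockchain implementation; the `Ledger` class and `add_record` method are hypothetical names chosen for this example.

```python
import hashlib
import json
import time


class Ledger:
    """A toy append-only ledger: each record is linked to the previous one by its hash."""

    def __init__(self):
        self.records = []

    def add_record(self, data):
        previous_hash = self.records[-1]["hash"] if self.records else "0" * 64
        record = {
            "index": len(self.records),
            "timestamp": time.time(),
            "data": data,
            "previous_hash": previous_hash,
        }
        # The hash covers the record's content plus the previous hash,
        # so editing any old record would break every hash that follows it.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)
        return record


ledger = Ledger()
ledger.add_record({"money_printed": 1_000_000})
# A "correction" is just a new record; the old record stays visible in the history.
ledger.add_record({"money_printed": 1_200_000, "corrects_record": 0})
```

Because each record's hash includes the previous record's hash, changing old data silently is impossible; the only honest way to fix a mistake is to append new data, exactly as described above.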

Retailing business

Source: https://www.freepik.com

Present

In the retailing business, we still have many problems, two of which are counterfeit items and cross-country warranties.

The root cause of counterfeit items and warranty problems is that the customer cannot verify the trusted source of an item. The item’s database may be stored at headquarters, and once the item has been distributed to another country, there is no way to verify it, because the headquarters database cannot be opened to connections from the branch. That is a weakness of a centralized system, and Blockchain technology saves us a lot of trouble here.

Solution

We think of the database as a decentralized system and want to make it accessible from more places in the world. However, we also know that some security requirements or policies do not allow this. So how can we resolve the issue?

By design, Blockchain technology takes security seriously. Although the database is public and can be accessed from many places in the world, each node is managed by a physical machine in one location.

If one node is attacked by a hacker, the data is still alive on the other nodes, and the hacker cannot edit anything on the chain, because the same data is shared between nodes and kept in sync at all times.
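Continuing the toy `Ledger` example above (again an assumption for illustration, not a real consensus or synchronization protocol), here is a sketch of how a node’s copy of the chain can be verified, so that a tampered copy is simply rejected:

```python
import hashlib
import json


def is_valid_chain(records):
    """Recompute every hash and check the links; any edit to old data breaks the chain."""
    for i, record in enumerate(records):
        body = {key: value for key, value in record.items() if key != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["hash"] != expected:
            return False  # the record's content no longer matches its hash
        if i > 0 and record["previous_hash"] != records[i - 1]["hash"]:
            return False  # the link to the previous record is broken
    return True


def honest_copies(node_copies):
    """Keep only the node copies whose chains still verify; tampered copies are ignored."""
    return [copy for copy in node_copies if is_valid_chain(copy)]
```

A real network also needs a consensus protocol so that nodes agree on which valid copy is canonical; this sketch only shows the integrity check that makes a hacked node’s edits detectable.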

In this story, we can see that the strengths of Blockchain are being decentralized and open. However, being open does not mean being insecure.

That is why we need to understand Blockchain technology as much as possible; adapting to it early prepares us for the new challenges ahead.

Your business

We don’t know your business issues as well as you do. However, learning about and adopting new technology is highly recommended for any business. If you need help, don’t forget to contact us.

Blockchain technology is the new technology that may drive the migration from the Web2 to the Web3 ecosystem, and it is growing day by day.

What are the types of blockchain networks and how do businesses pick one?

Source: https://www.freepik.com

Blockchain technology is designed for decentralized networks; however, depending on the business model, we can pick one of these types to start with (a simple decision sketch follows the list below).

We have four main types of Blockchain:

  • Public blockchain networks are permissionless and allow everyone to join them. All members of the blockchain have equal rights to read, edit, and validate the blockchain. People primarily use public blockchains to exchange and mine cryptocurrencies like Bitcoin, Ethereum, and Litecoin.
  • Private blockchain networks: A single organization controls private blockchains, also called managed blockchains. The authority determines who can be a member and what rights they have in the network. Private blockchains are only partially decentralized because they have access restrictions. Ripple, a digital currency exchange network for businesses, is an example of a private blockchain. If your business just wants to start with a private, internal network, this is the type to look at.
  • Hybrid blockchain networks: combine elements from both private and public networks. Companies can set up private, permission-based systems alongside a public system. In this way, they control access to specific data stored in the blockchain while keeping the rest of the data public. They use smart contracts to allow public members to check if private transactions have been completed. For example, hybrid blockchains can grant public access to digital currency while keeping bank-owned currency private.
  • Consortium blockchain networks: A group of organizations governs consortium blockchain networks. Preselected organizations share the responsibility of maintaining the blockchain and determining data access rights. Industries in which many organizations have common goals and benefit from shared responsibility often prefer consortium blockchain networks. For example, the Global Shipping Business Network Consortium is a not-for-profit blockchain consortium that aims to digitize the shipping industry and increase collaboration between maritime industry operators.
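As a rough way to picture how these criteria translate into a choice, here is a hedged sketch of a decision rule; the `suggest_network_type` function and its rules are simplified assumptions, not an official selection guide.

```python
def suggest_network_type(single_organization, group_of_organizations, needs_public_access):
    """A simplified rule of thumb for picking a blockchain network type."""
    if single_organization and needs_public_access:
        return "hybrid"      # private control, but some data kept publicly visible
    if single_organization:
        return "private"     # one authority manages membership and rights
    if group_of_organizations:
        return "consortium"  # preselected organizations share governance
    return "public"          # permissionless: anyone can join and validate


# Example: several maritime operators sharing one ledger.
print(suggest_network_type(single_organization=False,
                           group_of_organizations=True,
                           needs_public_access=False))  # -> consortium
```

In practice the choice also depends on regulation, the platform you target, and how much of the data genuinely needs to be public.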

How to determine the cost?

Source: https://www.freepik.com

Migrating from a traditional system to the blockchain is very welcome if we identify the right problem. However, the other main thing we must consider is the cost of implementation and whether the system operates effectively after migration.

The cost is calculated based on many factors, including:

  • The purpose and the features needed.
  • Blockchain type and which platform we are targeting.
  • Technology stacks.
  • Especially the transaction count in the system.
  • … and more.

Although we cannot state the exact cost up front, we have defined and broken it down into an estimation report and can consult with you if you are interested.

The following estimation includes the cost for:

  • Analysis.
  • Consulting.
  • Development (Implementation and Testing).
  • Delivery and maintenance.
  • Service charges.

Based on the conditions above, we can provide detailed proposals tailored to your interest and help bring you closer to your expectations.

Conclusion

Source: https://www.freepik.com

From a business perspective, Blockchain can really help us scale and solve the long-standing problems in the business model.

Although it is not always advisable to apply technology to the business, being open in business, specifically sharing problems, can help us find the best adaptive solutions and effective measurements instead of staying subjective within the old model. Everything happens very quickly, and if we do not adapt in time we can be left behind.

The main keys of Blockchain in this article are Decentralization, Immutability, Consensus, and Transparency. So don’t miss the chance to adapt to blockchain, as the next big thing may come soon with Web3.
