
The Core Concepts for Mastering Any Front-End Framework

18/04/2022


Today, front-end web development is no longer limited to HTML, CSS, and JavaScript. Users' expectations for web experiences keep rising, which pushes front-end developers to adopt more libraries and frameworks to keep up with new technology and improve UI/UX. As a result, many frameworks have been born, and the list will keep growing for many years to come. Along with that come endless debates about which framework is the best and which one developers choose the most, and of course there has never been a satisfying answer, because each framework has its own strengths and weaknesses. The battle for market share among the big three, React, Angular, and Vue, gets hotter every day, while Ember.js, Foundation, jQuery, Svelte, and others are still on their way to recognition.

As history has proven, web development moves at a relentless pace; you will drown in the web world if you are not prepared to switch from one framework to another. Picking up a new framework becomes genuinely easy once you master the handful of core concepts in this article, and you will be a sought-after front-end developer for years to come, regardless of which framework you choose.

1. Components – the basic building blocks

Components are the most fundamental concept a front-end developer must grasp. They are the foundation of user interface development and also the key idea driving the evolution of today's modern front-end frameworks. Without a real understanding of components, your web application can bloat very quickly.

So what is a component?

A component is a group of HTML elements together with the markup (styling) and the scripts that revolve around those elements. The smallest pieces of the web are grouped by their corresponding features and organized into components.

For example, to show a Todo list with 5 items, we would build 2 components:


  • Todo item: contains the UI of a single item and its necessary behavior, such as showing its content and editing and deleting itself.
  • Todo list: contains 5 Todo item components and handles showing the items, paginating when there are too many items, and so on.

Three things make up a component: the template, the styling, and the script, so to write components a front-end developer must have a solid grasp of HTML, CSS, and JavaScript.

Why do we need components?

If there were no Todo item component and the client changed the specs to drop the edit feature, the developer would have to delete the edit-related code 5 times. That is a real waste of time compared with extracting a Todo item component and reusing it: change the Todo item component once and the change applies to every item.

That is why splitting the UI into components is a must when building for the web in any framework. Components let us reuse, modify, and extend code easily. Code becomes much cleaner and clearer when components are divided sensibly.
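Framework syntax aside, the idea can be sketched in plain JavaScript: a component bundles its template and behavior behind one reusable function. The names below (`TodoItem`, `TodoList`) are illustrative, not tied to any framework.

```javascript
// A minimal, framework-agnostic sketch of the component idea.
// TodoItem defines the markup for one item exactly once.
function TodoItem(todo) {
  return `<li class="todo-item">${todo.content}</li>`;
}

// TodoList composes TodoItem instead of repeating its markup 5 times.
function TodoList(todos) {
  return `<ul class="todo-list">${todos.map(TodoItem).join('')}</ul>`;
}

const html = TodoList([{ content: 'Learn components' }, { content: 'Ship app' }]);
console.log(html);
```

Changing `TodoItem` once now changes how every item in every list renders, which is exactly the reuse argument above.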

2. Lifecycle Hooks – the events in a component's life

Each component instance goes through a series of events as it is created. Along the way it runs functions called lifecycle hooks, giving developers the chance to add their own code at specific stages.

Like any other lifecycle, a component's lifecycle hooks cover the important events in its lifetime: for example, the component is born (init), sets up its data, compiles its template, mounts its instance into the DOM (mounted), updates the DOM when data changes, and finally is destroyed (unmounted).

Each framework approaches lifecycle hooks differently, so a front-end developer needs to know the lifecycle diagram of each framework they use.

To make this concrete, consider a specific problem: call the API that fetches the news list only when the News page has just been opened. The News page is designed as a component, and the API must be called once it has finished initializing. Based on each framework's lifecycle diagram, we would place the API call in ngOnInit (Angular), componentDidMount (React), mounted (Vue), and so on.
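The hook names differ per framework, but the mechanism is the same everywhere: the framework walks through fixed stages and invokes whatever callbacks you registered. A minimal framework-agnostic sketch (hook names here are illustrative):

```javascript
// Sketch of the lifecycle-hook mechanism: the "framework" runs fixed
// stages and calls the user's registered hook at each one.
function createComponent(hooks = {}) {
  const run = (name) => hooks[name] && hooks[name]();
  run('init');     // instance created, data set up
  run('mounted');  // inserted into the DOM: the right place to fetch data
  return {
    update() { run('updated'); },     // reactive data changed
    destroy() { run('unmounted'); },  // removed from the DOM
  };
}

const log = [];
const newsPage = createComponent({
  init: () => log.push('init'),
  mounted: () => log.push('mounted'), // e.g. call the news API here
  unmounted: () => log.push('unmounted'),
});
newsPage.destroy();
console.log(log); // hooks ran in lifecycle order
```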

3. Data Binding

Each framework has its own data-binding mechanism, but in general you only need to learn the syntax for the items below.

  • Text interpolation – showing the value of variables in the UI

Angular, Vue: the “Mustache” syntax (double curly braces)

<span>Message: {{ msg }}</span>

React: the special JSX syntax (curly braces)

<span>Message: { msg }</span>
  • Attribute bindings – binding values to element attributes

Each framework has its own way to feed a value into an attribute; see the examples below.

Angular – use [] around the attribute

<a [href]="url">...</a>

Vue – prefix the attribute with :

<a :href="url">...</a>

React – wrap the variable in curly braces {}

<a href={url}>...</a>

  • Class & style bindings – manipulating element classes and inline styles

class and style are special element attributes; some frameworks support binding them to an object or an array, which makes for much clearer code.
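Conceptually, object-based class binding boils down to turning an object of boolean flags into a class string. A sketch of what such a binding does under the hood (the `classNames` helper is illustrative, not a framework API):

```javascript
// Object-based class binding in a nutshell: keys whose values are
// truthy become class names, falsy keys are dropped.
function classNames(obj) {
  return Object.keys(obj).filter((key) => obj[key]).join(' ');
}

// e.g. Vue: <div :class="{ active: isActive, 'text-danger': hasError }">
console.log(classNames({ active: true, 'text-danger': false })); // "active"
```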

4. Component Interaction – communication between components

Interaction between related components is essential in modern frameworks.


Nested components form a component tree, where every node is a component. A node can be the parent of one node and at the same time the child of another; the main relationship is parent–child. Interacting between two nodes that are far apart in the tree can lead to the props drilling problem.


The ways frameworks typically let a parent and child interact:

  • Passing data from parent to child: the parent passes values down for the child to use.
  • The parent listening for events from the child: the parent creates a function and passes it to the child; the child executes that function, sending data out to the parent, when a given event occurs.
  • The parent accessing the child directly: the parent can reach every variable and function inside the child, even changing a variable's value or executing one of the child's functions.
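The first two patterns can be sketched in plain JavaScript: data flows down as arguments, and "events" flow up by the child executing a callback the parent handed it. All names here are illustrative, not a framework API.

```javascript
// Framework-agnostic sketch of parent-child interaction.
// The parent passes data down plus a callback the child can invoke.
function Child({ label, onDeleted }) {
  return {
    render: () => `<button>${label}</button>`,
    // The child "emits" an event by executing the parent's callback.
    delete: () => onDeleted(label),
  };
}

function Parent() {
  const deleted = [];
  const child = Child({
    label: 'Todo #1',                    // data passed from parent to child
    onDeleted: (id) => deleted.push(id), // parent listening to a child event
  });
  return { child, deleted };
}

const { child, deleted } = Parent();
child.delete();
console.log(deleted); // the parent received 'Todo #1' from the child
```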

5. Routing

Client-side routing is the key to the SPA frameworks we see today. It is easy to implement, but you need a firm grasp of the routing hierarchy.

First, let's look at dynamic routing. Back in the todos example from section 1, the page listing all todos would have the route /todos. When users click a todo, they are taken to a page dedicated to that todo (Todo Detail), whose route must have the form /todos/:id. Here id is a dynamic variable: whether id=1 or id=2, the Todo Detail page is shown.
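Under the hood, matching a dynamic route is just segment-by-segment comparison, where `:`-prefixed segments capture whatever value appears in that position. A toy matcher (not any framework's actual implementation):

```javascript
// Toy dynamic-route matcher: ':id' matches any value in that segment
// and is returned as a named parameter; static segments must match exactly.
function matchRoute(pattern, path) {
  const patternParts = pattern.split('/');
  const pathParts = path.split('/');
  if (patternParts.length !== pathParts.length) return null;

  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      params[patternParts[i].slice(1)] = pathParts[i]; // dynamic segment
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // static segment mismatch
    }
  }
  return params;
}

console.log(matchRoute('/todos/:id', '/todos/1')); // { id: '1' }
console.log(matchRoute('/todos/:id', '/users/1')); // null
```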

Next is nested routing: a route contains several dependent child routes, letting you change specific fragments of the view based on the current route.

  • For example, a user profile page may show several tabs (say, Profile and Account) for navigating through the user info. Clicking these tabs changes the URL in the browser, but instead of replacing the whole page, only the tab's content is replaced.


Finally, always consider lazy loading your routes. It keeps the web app lighter by loading only the code relevant to the currently open route, instead of loading all the code on the very first visit.
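In Vue Router, for instance, lazy loading a route is just a dynamic `import()` in the route record, so the bundler splits that page into its own chunk (the file paths below are illustrative):

```javascript
// Vue Router: each page's code is fetched only when its route is first visited.
const routes = [
  { path: '/todos', component: () => import('./pages/TodoList.vue') },
  { path: '/todos/:id', component: () => import('./pages/TodoDetail.vue') },
];
```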


As the diagram shows, StockTable renders faster with lazy loading, at 1546ms, compared with 2470ms for eager loading.

6. State Management – managing the app's state

As an application grows, the complexity of managing its state grows exponentially with it. In a large application with a great many components, managing their state is a serious problem. The core job of every library or framework is to take the application state, process it, and turn it into DOM nodes, which is why organizing and managing state with a sensible model raises the quality of the whole application.

Picture state management as a cloud layer sitting above the component tree. Every component in the tree can reach up to the cloud to fetch whatever state it needs. This makes data communication far more convenient, especially when two components have no particular relationship in the tree.

Most state management libraries are built on the Flux architecture.


How it works

  • State is passed into the view
  • Actions are triggered by the user on the view
  • Actions update the state in the store
  • The state changes value and is passed back into the view

The Flux architecture consists of three parts: Action, Store, and Dispatcher.


This way of working, combined with the structure of actions, dispatchers, and reducers, created a standard that later libraries would apply across a great many applications. From it we got Redux, probably the most famous of them all. These days, though, depending on the kind of project and the framework in use, we pick whichever state management library fits best.

  • Angular: NgRx, an implementation of Redux aimed at the Angular framework.
  • React: Redux, the most famous state management library in the React world.
  • Vue: Vuex, a Redux-like library for Vue code.

So once you have a firm grasp of the ideas behind the Flux architecture, approaching any of these dedicated state management libraries becomes easy.
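The unidirectional flow described above fits in a few lines of plain JavaScript. A minimal Redux-style sketch (not the real Redux library, just the idea): actions are dispatched to a store, a reducer computes the next state, and subscribed views are notified.

```javascript
// Minimal sketch of the Flux/Redux idea: dispatch(action) -> reducer
// computes new state -> subscribed views are re-rendered with it.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (fn) => listeners.push(fn),   // views subscribe to the store
    dispatch: (action) => {                  // actions update state in the store
      state = reducer(state, action);
      listeners.forEach((fn) => fn(state));  // state flows back into the views
    },
  };
}

const store = createStore(
  (state, action) =>
    action.type === 'ADD_TODO'
      ? { todos: [...state.todos, action.payload] }
      : state,
  { todos: [] }
);

store.subscribe((state) => console.log('view re-renders with', state.todos));
store.dispatch({ type: 'ADD_TODO', payload: 'Learn Flux' });
```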

7. Conclusion

Choosing a framework remains a topic surrounded by debate. There is no right or wrong; we need to pick what fits the project and the people on the team best. In practice there are major differences between frameworks, but broadly speaking, grasping the concepts above will let us master the technology. That, after all, is the goal of this article: to make learning multiple front-end frameworks no longer a hard story!

Author: Quan Do

Related Blog

prd-thumb-draft-product

Software Development

+0

    TypeScript And “Any” Type

    TypeScript is a strongly typed programming language that builds on JavaScript, giving you a better ability to detect errors and describe your code. But sometimes you don't know the exact type of value that you're using because it comes from user input or a third-party API. In this case, you want to skip the type checking and allow the value to pass through the compile check. The TypeScript any type is the perfect solution for you because if you use it, the TypeScript compiler will not complain about the type issue. This blog will help you understand the any type in TypeScript, but before doing that, let's begin with some basic concepts! What is TypeScript? TypeScript checks a program for errors before execution and does so based on the kinds of values; it’s a static type checker. Superset of JavaScript TypeScript is a language that is a superset of JavaScript: JS syntax is, therefore, legal TS. However, TypeScript is a typed superset that adds rules about how different kinds of values can be used. Runtime Behavior TypeScript is also a programming language that preserves JavaScript's runtime behavior. This means that if you move code from JavaScript to TypeScript, it is guaranteed to run the same way, even if TypeScript thinks the code has type errors. Erased Types Roughly speaking, once TypeScript’s compiler is done with checking your code, it erases the types to produce the resulting compiled code. This means that once your code is compiled, the resulting plain JS code has no type information. An easy way of understanding TypeScript A languageA superset of JavaScriptPreserver the runtime behavior of JavaScriptType checker layer JavaScript + Types = TypeScript Basic typing Type annotations TypeScript uses type annotations to explicitly specify types for identifiers such as variables, functions, objects, etc. // Syntax : type Once an identifier is annotated with a type, it can be used as that type only. 
If the identifier is used as a different type, the TypeScript compiler will issue an error. let counter: number; counter = 1; counter = 'Hello'; // Error: Type '"Hello"' is not assignable to type 'number'. The following shows other examples of type annotations: let name: string = 'John'; let age: number = 25; let active: boolean = true; // Array let names: string[] = ['John', 'Jane', 'Peter', 'David', 'Mary']; // Object let person: { name: string; age: number }; person = { name: 'John', age: 25 }; // Valid // Function let sayHello : (name: string) => string; sayHello = (name: string) => { return `Hello ${name}`; }; Type inference Type inference describes where and how TypeScript infers types when you don’t explicitly annotate them. For example: // Annotations let counter: number; // Inference: TypeScript will infer the type the `counter` to be `number` let counter = 1; Likewise, when you assign a function parameter a value, TypeScript infers the type of the parameter to the type of the default value. For example: // TypeScript infers type of the `max` parameter to be `number` const setCounter = (max = 100) => { // ... } Similarly, TypeScript infers the return type to the type of the return value: const increment = (counter: number) => { return counter++; } // It is the same as: const increment = (counter: number) : number => { return counter++; } The following shows other examples of type inference: const items = [0, 1, null, 'Hi']; // (number | string)[] const mixArr = [new Date(), new RegExp('\d+')]; // (RegExp | Date)[] const increase = (counter: number, max = 100) => { return counter++; }; // (counter: number, max?: number) => number Contextual typing TypeScript uses the locations of variables to infer their types. This mechanism is known as contextual typing. 
For example: document.addEventListener('click', (event) => { console.log(event.button); // Valid }); In this example, TypeScript knows that the event the parameter is an instance of MouseEvent because of the click event. However, when you change the click event to the scroll the event, TypeScript will issue an error: document.addEventListener('scroll', (event) => { console.log(event.button); // Compile error }); // Property 'button' does not exist on type 'Event'. TypeScript knows that the event in this case, is an instance of UIEvent, not a MouseEvent. And UIEvent does not have the button property, therefore, TypeScript throws an error. Other examples of contextual typing // Array members const names = ['John', 'Jane', 'Peter', 'David', 'Mary']; // string[] names.map(name => name.toUpperCase()); // (name: string) => string // Type assertions const myCanvas = document.getElementById('main-canvas') as HTMLCanvasElement; Type inference vs Type annotations Type inferenceType annotationsTypeScript guesses the typeYou explicitly tell TypeScript the type What exactly is TypeScript any? When you don’t explicitly annotate and TypeScript can't infer exactly the type, that means you declare a variable without specifying a type, TypeScript assumes that you use the any type. This practice is called implicit typing. For example: let result; // Variable 'result' implicitly has an 'any' type. So, what exactly is any? TypeScript any is a particular type that you can use whenever you don't want a particular value to cause type-checking errors. That means the TypeScript compiler doesn't complain or issue any errors. When a value is of type any, you can access any properties of it, call it like a function, assign it to (or from) a value of any type, or pretty much anything else that’s syntactically legal: let obj: any = { x: 0 }; // None of the following lines of code will throw compiler errors. 
// Using `any` disables all further type checking, and it is assumed // you know the environment better than TypeScript. obj.foo(); obj(); obj.bar = 100; obj = 'hello'; const n: number = obj; Looking back at an easier-to-understand any: A special type.Skip/Disable type-checking.TypeScript doesn't complain or issue any errors.Default implicit typing is any. Note that to disable implicit typing to the any type, you change the noImplicitAny option in the tsconfig.json file to true. Why does TypeScript provide any type? As described above, while TypeScript is a type checker, any type tells TypeScript to skip/disable type-checking. Whether TypeScript has made a mistake here and why it provides any type? In fact, sometimes the developer can't determine the type of value or can't determine the return value from the 3rd party. In most cases they use any type or implicit typing as any. So they seem to think that TypeScript provides any to do those things. So, is that the root reason that TypeScript provides any? Actually, I think there is a more compelling reason for TypeScript providing any that the any type provides you with a way to work with the existing JavaScript codebase. It allows you to gradually opt-in and opt out of type checking during compilation. Therefore, you can use the any type for migrating a JavaScript project over to TypeScript. Conclusion TypeScript is a Type checker layer. The TypeScript any type allows you to store a value of any type. It instructs the compiler to skip type-checking. Use the any type to store a value when you migrate a JavaScript project over to a TypeScript project. In the next blog, I will show you more about the harmful effects of any and how to avoid them. Hope you like it! See you in the next blog! Reference TypeScript handbookTypeScript tutorial Author: Anh Nguyen

    07/09/2022

    980

    Software Development

    +0

      TypeScript And “Any” Type

      07/09/2022

      980

      React application rs

      How-to

      Software Development

      +0

        A Better Way To Use Services In React Application

        Sometimes you have/want to create and use a service in your application for the DRY principle. But do you know exactly what is a service? And what is the best way to use services in React applications? Hi everyone! It's nice to meet you again in React related series. In the previous article, we discussed about effective state management in React, we learned about Context: what is it? when to use it? One of the use cases is using a context to share a collection of services through the components tree, but what kind of things can be called a service? And how is it integrated into React application? Let's discover the following sections: What is a Service? Service is just a group of helper functions that handle something particularly and can be reused across application, for example: Authentication Service that includes authentication status checking, signing in, signing out, getting user info …Http Service that handles request header, request method, request body … How to use? There are different kinds of implementation and usage, I just provide my favorite one that follows Angular module/services provider concepts. Create a service A service is a JavaScript regular class, nothing related to React. export default class ApiService { constructor() {} _setInterceptors() {} _handleResponse_() {} _handleError_() {} get() {} post() {} put() {} delete() {} } export default class AuthService { constructor() {} isAuthenticated() {} getUserInfo() {} signIn() {} signOut() {} } Create a Context Create a context with provider and pass services as props. 
import { createContext, useState } from 'react'; import { ApiService, AuthService } from '@services'; const services = { apiService: new ApiService(), authService: new AuthService() }; const AppContext = createContext(); const { Provider } = AppContext; const AppProvider = ({ children }) => { return <Provider value={{ services }}>{children}</Provider>; }; export { AppContext, AppProvider } Use a context Place a context provider at an appropriate scope, all consumers should be wrapped in it. import { AppProvider } from './context/AppContext'; import ComponentA from './components/ComponentA' const App = () => { return ( <AppProvider> <div className="app"> <ComponentA /> </div> </AppProvider> ); }; export default App; Inside a child component, use React hooks useContext to access services as a context value. import { useContext, useEffect } from 'react'; import { AppContext } from '../context/AppContext'; const ChildComponent = () => { const { services: { apiService, authService } } = useContext(AppContext); useEffect(() => { if (authService.isAuthenticated()) { apiService.get(); } }, []); return <></>; } export default ComponentA; Above are simple steps to integrate services with context. But it would be complicated in the actual work. Consider organizing context with approriate level/scope. My recommendation is to have two kinds of service contexts in your application: App Services Context to put all singleton services. Bootstrap it once at root, so only one instance for each is created while having them shared over the component tree.For others, create separate contexts and use them anywhere you want and whenever you need. Alternatives Service is not the only one way to deal with reusable code, it depends on what you intend to do, highly recommend to: Use a custom hook, if you want to do reusable logic and hook into a state.Use a Higher-order component, if you want to do the same logic and then render UI differently. 
Reference Does React Context replace React Redux?React RetiX - another approach to state management Author: Vi Nguyen

        31/08/2022

        4.98k

        Vix Nguyen

        How-to

        +1

        • Software Development

        A Better Way To Use Services In React Application

        31/08/2022

        4.98k

        Vix Nguyen

        context-or-redux

        Software Development

        +0

          Does React Context Replace React Redux?

          Table of contents What is Context?How and when to use a Context?When to use Redux?Does Context replace Redux? context-or-redux-1024x768 As you all know, nowadays, React has become one of the most famous JavaScript libraries in the world. It's easy to learn and use. It comes with a good supply of documentation, tutorials, and training resources. Any developer who comes from a JavaScript background can easily understand and start creating web apps using React in a few days with the main concepts: JSXRendering ElementsComponents and PropsState and LifecycleHandling Events But to become an expert in React, it would take more time. You have to deep dive into advanced things: Higher-Order componentsContextCode-SplittingAvoiding unexpected rerenderEspecially, Effective state management When it comes to state management in React applications, we instantly think about React Redux. This is a favorite and widespread tool across developer's communities and applications. Recently, a lot of questions have come to us about using React Context as an alternative for React Redux. To answer those questions, have a brief of React Context first. What is the context? The context was originally introduced for passing props through the component tree, but since newer versions, React provided the powerful feature that allows updating context from a nested component. Meanwhile, Context can now be a state management tool. Let's see how easy it works? Context with Hooks There are many ways to integrate Context, in this article, I just demo the simplest one Create a context import { createContext } from 'react'; export const ThemeContext = createContext(defaultTheme); So simple, createContext is the built-in function that allows you to create a context with default value. Create a provider A provider allow consuming components to subscribe to context changes const { Provider } = ThemeContext; const ThemeProvider = ( { children } ) => { const theme = { ... 
}; return <Provider value={{ theme }}>{children}</Provider>; }; Create a consumer The simplest way to subscribe to the context value is to use the useContext hook. export const ThemeInfo = () => { const { theme } = useContext(ThemeContext); return ( <div>backgroundColor: {theme?.background}</div> ); } A consumer must be wrapped inside a Provider if not it would not be noticed and then re-render when context data changes. <App> <ThemeProvider> <ThemeInfo></ThemeInfo> </ThemeProvider> </App> Update the context from a consumer It is sometimes necessary to update the context from consumers, context itself doesn't provide any method to do that, but together with hooks, it can be easy: Use useState to manage context valuesetTime is passed as a callback const { Provider } = ThemeContext; const ThemeProvider = ( { children } ) => { const [theme, setTheme] = useState(defaultTheme); return <Provider value={{ theme, setTheme }}>{children}</Provider>; }; From a consuming component, we can update the context value by executing the callback provided by context export const ButtonChangeTheme = () => { const { setTheme } = useContext(ThemeContext); return ( <button onClick={() => setTheme(newTheme)}>Change</button> ); } Again, don't forget to wrap ButtonChangeTheme consumer inside ThemeProvider <App> <ThemeProvider> <ButtonChangeTheme></ButtonChangeTheme> </ThemeProvider> </App> When to use context? As official guideline Context is designed to share data that can be considered global for a tree of React components. Below are common examples of data that Context can cover: Application configurationsUser settingsAuthenticated userA collection of services How to use context in a better way? As mentioned before, when the context value changes, all consumers that use the context will be immediately re-rendered. So putting it all into one context is not a good decision. Instead, create multi-contexts for different kinds of data. 
EX: one context for Application configurations, one for Authenticated user and others.Besides separating unrelated contexts, using each in its proper place in the component tree is equally significant.Integrate context adds complexity to our app by creating, providing, and consuming the context, so drilling props through 2-3 levels in a tree component is not a problem(Recommended to use instead of context)Restrictively apply context for high-frequency data updated, because context is not designed for that.​ What about React Redux? Unlike Context, React Redux is mainly designed for state management purposes, and covers it well. However, take a look at some highlights from Redux team: Not all apps need Redux. It's important to understand the kind of application you're building, the kinds of problems that you need to solve, and what tools can best solve the problems you're facing. You'll know when you need Flux. If you aren't sure if you need it, you don't need it. Don't use Redux until you have problems with vanilla React. Those recommendations clearly answer the question: When to use Redux? Conclusion So, Does React Context replace React Redux? The short answer is no, it doesn’t. React Redux and React Context can be used together instead of as alternatives. But how to use them together? It depends. If you understand the main concepts for each: React Context is designed for props drilling, it also has ways to update context data, but is recommended to deal with the simple follow of data changes.Redux is built for state management, especially for the frequently updated states used at many places in the application. Based on that you can easily decide what should be implemented in your application. In the end, try to use Context first, if something is not covered by Context, it's time to use Redux. Enjoy coding! 
Reference Context official docsRedux FAQ: generalIf you are not interested in using Redux, you can check out React RetiX for a different approach to state management.I can not remember all the sources I have read, but it helps a lot in having comparison between Context and Redux. Thank you and sorry for not listing them all in this article!​ Author: Vi Nguyen

          17/06/2022

          1.5k

          Software Development

          +0

            Does React Context Replace React Redux?

            17/06/2022

            1.5k

            Knowledge

            +0

              Best Practices for Building Reliable AWS Lambda Functions

              Welcome back to the "Mastering AWS Lambda with Bao" series! The previous episode explored how AWS Lambda connects to the world through AWS Lambda triggers and events. Using S3 and DynamoDB Streams triggers, we demonstrated how Lambda automates workflows by processing events from multiple sources. This example provided a foundation for understanding Lambda’s event-driven architecture. However, building reliable Lambda functions requires more than understanding how triggers work. To create AWS lambda functions that can handle real-world production workloads, you need to focus on optimizing performance, implementing robust error handling, and enforcing strong security practices. These steps optimize your Lambda functions to be scalable, efficient, and secure. In this episode, SupremeTech will explore the best practices for building reliable AWS Lambda functions, covering two essential areas: Optimizing Performance: Reducing latency, managing resources, and improving runtime efficiency.Error Handling and Logging: Capturing meaningful errors, logging effectively with CloudWatch, and setting up retries. Adopting these best practices, you’ll be well-equipped to optimize Lambda functions that thrive in production environments. Let’s dive in! Optimizing Performance Optimize the Lambda function's performance to run efficiently with minimal latency and cost. Let's focus first on Cold Starts, a critical area of concern for most developers. Understanding Cold Starts What Are Cold Starts? A Cold Start occurs when AWS Lambda initializes a new execution environment to handle an incoming request. This happens under the following circumstances: When the Lambda function is invoked for the first time.After a period of inactivity (execution environments are garbage collected after a few minutes of no activity – meaning it will be shut down automatically).When scaling up to handle additional concurrent requests. 
Cold starts introduce latency because AWS needs to set up a new execution environment from scratch. Steps Involved in a Cold Start: Resource Allocation:AWS provisions a secure and isolated container for the Lambda function.Resources like memory and CPU are allocated based on the function's configuration.Execution Environment Initialization:AWS sets up the sandbox environment, including:The /tmp directory is for temporary storage.Networking configurations, such as Elastic Network Interfaces (ENI), for VPC-based Lambdas.Runtime Initialization:The specified runtime (e.g., Node.js, Python, Java) is initialized.For Node.js, this involves loading the JavaScript engine (V8) and runtime APIs.Dependency Initialization:AWS loads the deployment package (your Lambda code and dependencies).Any initialization code in your function (e.g., database connections, library imports) is executed.Handler Invocation:Once the environment is fully set up, AWS invokes your Lambda function's handler with the input event. Cold Start Latency Cold start latency varies depending on the runtime, deployment package size, and whether the function runs inside a VPC: Node.js and Python: ~200ms–500ms for non-VPC functions.Java or .NET: ~500ms–2s due to heavier runtime initialization.VPC-Based Functions: Add ~500ms–1s due to ENI initialization. Warm Starts In contrast to cold starts, Warm Starts reuse an already-initialized execution environment. AWS keeps environments "warm" for a short time after a function is invoked, allowing subsequent requests to bypass initialization steps. Key Differences: Cold Start: New container setup → High latency.Warm Start: Reused container → Minimal latency (~<100ms). Reducing Cold Starts Cold starts can significantly impact the performance of latency-sensitive applications. Below are some actionable strategies to reduce cold starts, each with good and bad practice examples for clarity. 1. 
1. Use Smaller Deployment Packages

Good practice: minimize the size of your deployment package by including only the required dependencies and removing unnecessary files. Use bundlers like Webpack, esbuild, or Parcel to optimize the package size.

```javascript
const DynamoDB = require('aws-sdk/clients/dynamodb'); // Loads only DynamoDB, not the entire SDK
```

Bad practice: bundling the entire AWS SDK or other large libraries without considering modular imports.

```javascript
const AWS = require('aws-sdk'); // Loads the entire SDK, increasing package size
```

Why it matters: smaller deployment packages load faster during the initialization phase, reducing cold start latency.

2. Move Heavy Initialization Outside the Handler

Good practice: place resource-heavy operations, such as database or SDK client initialization, outside the handler function so they are executed only once per container lifecycle (at cold start).

```javascript
const AWS = require('aws-sdk');
const DynamoDB = new AWS.DynamoDB.DocumentClient(); // Initialized once per environment

exports.handler = async (event) => {
    const data = await DynamoDB.get({ TableName: 'myTable', Key: { id: '123' } }).promise(); // 'myTable' is a placeholder
    return data;
};
```

Bad practice: reinitializing resources inside the handler on every invocation.

```javascript
exports.handler = async (event) => {
    const DynamoDB = new AWS.DynamoDB.DocumentClient(); // Initialized on every call
    const data = await DynamoDB.get({ TableName: 'myTable', Key: { id: '123' } }).promise();
    return data;
};
```

Why it matters: reinitializing resources on every invocation increases latency and wastes compute.
3. Enable Provisioned Concurrency [1]

Good practice: use Provisioned Concurrency to pre-initialize a set number of environments, ensuring they are always ready to handle requests.

Example (AWS CLI):

```shell
aws lambda put-provisioned-concurrency-config \
    --function-name myFunction \
    --provisioned-concurrent-executions 5
```

(The same setting is available in the AWS Management Console, in the function's concurrency configuration.)

Why it matters: provisioned concurrency keeps a constant pool of pre-initialized environments, eliminating cold starts for latency-sensitive applications.

4. Reduce Dependencies

Good practice: evaluate your libraries and replace heavy frameworks with lightweight alternatives or native APIs.

```javascript
console.log(new Date().toISOString()); // Native JavaScript API
```

Bad practice: using heavy libraries for simple tasks without considering alternatives.

```javascript
const moment = require('moment');
console.log(moment().format());
```

Why it matters: large dependencies increase the deployment package size, leading to slower initialization during cold starts.

5. Avoid Unnecessary VPC Configurations

Good practice: place Lambda functions outside a VPC unless necessary. If a VPC is required (e.g., to access private resources like RDS), optimize networking with VPC endpoints. For example, use DynamoDB and S3 directly without placing the Lambda inside a VPC.

Bad practice: deploying Lambda functions inside a VPC unnecessarily, for instance to access services like DynamoDB or S3 that do not require VPC access. Placing Lambda in a VPC introduces additional latency from ENI setup during cold starts.

Why it matters: functions outside a VPC initialize faster because they skip ENI setup.

6. Choose Lightweight Runtimes

Good practice: use lightweight runtimes like Node.js or Python, which require fewer initialization resources and therefore start faster than heavier runtimes like Java or .NET.
Why it matters: heavier runtimes have higher cold start latency due to the complexity of their initialization process.

Summary of Best Practices for Cold Starts

- Deployment package. Good: use small packages with only the required dependencies. Bad: bundle unused libraries, increasing the package size.
- Initialization. Good: perform heavy initialization (e.g., database connections) outside the handler. Bad: initialize resources inside the handler on every request.
- Provisioned concurrency. Good: enable provisioned concurrency for latency-sensitive applications. Bad: ignore provisioned concurrency for high-traffic functions.
- Dependencies. Good: use lightweight libraries or native APIs for simple tasks. Bad: use heavy libraries like moment.js without evaluating lightweight alternatives.
- VPC configuration. Good: avoid unnecessary VPC configurations; use VPC endpoints when required. Bad: place all Lambda functions inside a VPC, even when accessing public AWS services.
- Runtime selection. Good: choose lightweight runtimes like Node.js or Python for faster initialization. Bad: use heavy runtimes like Java or .NET for simple, lightweight workloads.

Error Handling and Logging

Error handling and logging are critical for making your Lambda functions reliable and easy to debug. Effective error handling prevents cascading failures in your architecture, while good logging practices help you monitor and troubleshoot issues efficiently.

Structured Error Responses

Errors in Lambda functions can occur for various reasons: invalid input, AWS service failures, or unhandled exceptions in the code. Properly structured error handling ensures these issues are captured, logged, and surfaced effectively to users or downstream services.
1. Define Consistent Error Structures

Good practice: use a standard error format so all errors are predictable and machine-readable.

```json
{
  "errorType": "ValidationError",
  "message": "Invalid input: 'email' is missing",
  "requestId": "12345-abcd"
}
```

Bad practice: returning vague or unstructured errors that make debugging difficult.

```json
{ "message": "Something went wrong", "error": true }
```

Why it matters: structured errors make debugging easier by providing consistent, machine-readable information. They also improve communication with clients or downstream systems by conveying what went wrong and how it should be handled.

2. Use Custom Error Classes

Good practice: in Node.js, define custom error classes for clarity.

```javascript
class ValidationError extends Error {
    constructor(message) {
        super(message);
        this.name = "ValidationError";
        this.statusCode = 400; // Custom property
    }
}

// Throwing a custom error
if (!event.body.email) {
    throw new ValidationError("Invalid input: 'email' is missing");
}
```

Bad practice: using generic errors for everything, which makes issues hard to identify or categorize.

```javascript
throw new Error("Error occurred");
```

Why it matters: custom error classes make error handling more precise and help separate application errors (e.g., validation issues) from system errors (e.g., database failures).
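The two ideas combine naturally: a custom error class can carry the status code, and a small helper can turn any thrown error into the structured shape shown above. This is a sketch; `toErrorResponse` and the API Gateway-style `statusCode`/`body` response are illustrative assumptions, not part of any AWS API.

```javascript
class ValidationError extends Error {
  constructor(message) {
    super(message);
    this.name = "ValidationError";
    this.statusCode = 400;
  }
}

// Sketch: convert any thrown error into the structured, machine-readable
// format defined above (API Gateway-style proxy response assumed).
function toErrorResponse(error, requestId) {
  return {
    statusCode: error.statusCode || 500, // unknown errors default to 500
    body: JSON.stringify({
      errorType: error.name,
      message: error.message,
      requestId,
    }),
  };
}

const response = toErrorResponse(
  new ValidationError("Invalid input: 'email' is missing"),
  "12345-abcd"
);
console.log(response.statusCode); // 400
```

A handler can then wrap its body in a single try/catch and return `toErrorResponse(err, context.awsRequestId)`, keeping error formatting in one place.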
3. Include Contextual Information in Logs

Good practice: add relevant information such as requestId, a timestamp, and input data (excluding sensitive information) when logging errors.

```javascript
console.error({
    errorType: "ValidationError",
    message: "The 'email' field is missing.",
    requestId: context.awsRequestId,
    input: event.body,
    timestamp: new Date().toISOString(),
});
```

Bad practice: logging errors without any context, which makes debugging difficult.

```javascript
console.error("Error occurred");
```

Why it matters: contextual information in logs makes it easier to identify what triggered the error and where it happened, improving the debugging experience.

Retry Logic Across AWS SDK and Other Services

Retrying failed operations is critical when interacting with external services, because temporary failures (e.g., throttling, timeouts, or transient network issues) can disrupt workflows. Whether you are using the AWS SDK, third-party APIs, or internal services, applying retry logic effectively keeps the system reliable while avoiding unnecessary overhead.

1. Use Exponential Backoff and Jitter

Good practice: apply exponential backoff with jitter to stagger retry attempts.
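A tiny helper can make contextual logging the default rather than something each call site has to remember. This is a sketch: `makeLogger` and its field names are illustrative, not an AWS utility.

```javascript
// Sketch: a minimal structured logger that stamps every entry with the
// request context, so individual log calls cannot forget it.
function makeLogger(requestId) {
  const emit = (level, message, extra) => {
    const entry = {
      level,
      message,
      requestId,
      timestamp: new Date().toISOString(),
      ...extra,
    };
    (level === "error" ? console.error : console.log)(JSON.stringify(entry));
    return entry; // returned to make the helper easy to test
  };
  return {
    info: (message, extra = {}) => emit("info", message, extra),
    error: (message, extra = {}) => emit("error", message, extra),
  };
}

const log = makeLogger("12345-abcd");
const entry = log.error("The 'email' field is missing.", { errorType: "ValidationError" });
```

Inside a handler you would call `makeLogger(context.awsRequestId)` once and reuse it for every log line in that invocation.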
This avoids overwhelming the target service, especially under high load or in rate-limiting scenarios.

```javascript
async function retryWithBackoff(fn, retries = 3, delay = 100) {
    for (let attempt = 1; attempt <= retries; attempt++) {
        try {
            return await fn();
        } catch (error) {
            if (attempt === retries) throw error; // Rethrow after the final attempt
            const backoff = delay * 2 ** (attempt - 1) + Math.random() * delay; // Add jitter
            console.log(`Retrying in ${backoff.toFixed()}ms...`);
            await new Promise((res) => setTimeout(res, backoff));
        }
    }
}

// Usage
const result = await retryWithBackoff(() => callThirdPartyAPI());
```

Bad practice: retrying without delays or jitter, which can lead to cascading failures and amplify the problem.

```javascript
for (let i = 0; i < retries; i++) {
    try {
        return await callThirdPartyAPI();
    } catch (error) {
        console.log("Retrying immediately...");
    }
}
```

Why it matters: exponential backoff reduces pressure on the failing service, while jitter randomizes retry times, preventing synchronized retry storms from multiple clients.

2. Leverage Built-In Retry Mechanisms

Good practice: use the built-in retry logic of libraries, SDKs, or APIs whenever available; these are typically optimized for the specific service.

Example (AWS SDK):

```javascript
const DynamoDB = new AWS.DynamoDB.DocumentClient({
    maxRetries: 3, // Number of retries
    retryDelayOptions: { base: 200 }, // Base delay in ms
});
```

Example (Axios for third-party APIs): use libraries like axios-retry to add retry logic to HTTP requests.
```javascript
const axios = require('axios');
const axiosRetry = require('axios-retry');

axiosRetry(axios, {
    retries: 3, // Retry 3 times
    retryDelay: (retryCount) => retryCount * 200, // Increasing delay between retries
    retryCondition: (error) => error.response && error.response.status >= 500, // Retry only server errors
});

const response = await axios.get("https://example.com/api");
```

Bad practice: writing your own retry logic unnecessarily when built-in mechanisms exist, risking a suboptimal implementation.

Why it matters: built-in retry mechanisms are optimized for the specific service or library, reducing the likelihood of bugs and configuration errors.

3. Configure Service-Specific Retry Limits

Good practice: set retry limits based on the service's characteristics and criticality.

Example (AWS S3 upload):

```javascript
const s3 = new AWS.S3({
    maxRetries: 5, // Allow more retries for critical operations
    retryDelayOptions: { base: 300 }, // Slightly longer base delay
});
```

Example (database queries):

```javascript
async function queryDatabaseWithRetry(queryFn) {
    return retryWithBackoff(queryFn, 5, 100); // Retry with custom backoff logic
}
```

Bad practice: allowing unlimited retries, which can exhaust resources and increase costs.

```javascript
while (true) {
    try {
        return await callService();
    } catch (error) {
        console.log("Retrying...");
    }
}
```

Why it matters: excessive retries can lead to runaway costs or cascading failures across the system. Always define a sensible retry limit.

4. Handle Transient vs. Persistent Failures

Good practice: retry only transient failures (e.g., timeouts, throttling, 5xx errors) and fail fast on persistent failures (e.g., invalid input, 4xx errors).

```javascript
const isTransientError = (error) =>
    error.code === "ThrottlingException" || error.code === "TimeoutError";

async function callServiceWithRetry() {
    try {
        return await callService();
    } catch (error) {
        if (!isTransientError(error)) throw error; // Do not retry persistent errors
        return retryWithBackoff(callService);      // Retry only transient failures
    }
}
```

Bad practice: retrying all errors indiscriminately, including persistent failures like ValidationException or 404 Not Found.

Why it matters: persistent failures are unlikely to succeed on retry and waste resources unnecessarily.

5. Log Retry Attempts

Good practice: log each retry attempt with relevant context, such as the retry count and delay.

```javascript
async function retryWithBackoff(fn, retries = 3, delay = 100) {
    for (let attempt = 1; attempt <= retries; attempt++) {
        try {
            return await fn();
        } catch (error) {
            if (attempt === retries) throw error;
            console.log(`Attempt ${attempt} failed. Retrying in ${delay}ms...`);
            await new Promise((res) => setTimeout(res, delay));
        }
    }
}
```

Bad practice: failing to log retries, which makes it difficult to debug or understand retry behavior.

Why it matters: logs provide valuable insight into system behavior and help diagnose retry-related issues.
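A variant of the backoff helper that takes a shouldRetry predicate makes the transient/persistent distinction part of the retry loop itself, failing fast on persistent errors at any attempt. This is a sketch: `retryWhen`, the error codes, and the flaky demo service are illustrative.

```javascript
// Sketch: retry helper with a shouldRetry predicate, so persistent errors
// are rethrown immediately instead of being retried.
async function retryWhen(fn, shouldRetry, retries = 3, delay = 10) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === retries || !shouldRetry(error)) throw error;
      const backoff = delay * 2 ** (attempt - 1) + Math.random() * delay; // jitter
      await new Promise((res) => setTimeout(res, backoff));
    }
  }
}

const isTransient = (error) => error.code === "ThrottlingException";

// Demo: fails twice with a transient error, then succeeds on the third try.
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) {
    const err = new Error("throttled");
    err.code = "ThrottlingException";
    throw err;
  }
  return "ok";
};

const demo = retryWhen(flaky, isTransient).then((result) => {
  console.log(result, "after", calls, "calls"); // ok after 3 calls
  return result;
});
```

A non-transient error (say, a ValidationException) would be rethrown on the first attempt without consuming any retries.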
Summary of Best Practices for Retry Logic

- Retry logic. Good: use exponential backoff with jitter to stagger retries. Bad: retry immediately without delays, causing retry storms.
- Built-in mechanisms. Good: leverage AWS SDK retry options or third-party libraries like axios-retry. Bad: write custom retry logic unnecessarily when optimized built-in solutions are available.
- Retry limits. Good: define a sensible retry limit (e.g., 3–5 retries). Bad: allow unlimited retries, risking resource exhaustion or runaway costs.
- Transient vs. persistent. Good: retry only transient errors (e.g., timeouts, throttling) and fail fast on persistent errors. Bad: retry all errors indiscriminately, including persistent failures like validation or 404 errors.
- Logging. Good: log retry attempts with context (e.g., attempt number, delay, error) to aid debugging. Bad: fail to log retries, making it hard to trace retry behavior or diagnose problems.

Logging Best Practices

Logs are essential for debugging and monitoring Lambda functions. However, unstructured or excessive logging can make it harder to find the information you need.

1. Mask or Exclude Sensitive Data

Good practice: avoid logging sensitive information such as user credentials; API keys, tokens, or secrets; and personally identifiable information (PII). Use tools like AWS Secrets Manager for sensitive data management, and mask sensitive fields before logging:

```javascript
const sanitizedInput = {
    ...event,
    password: "***",
};
console.log(JSON.stringify({
    level: "info",
    message: "User login attempt logged.",
    input: sanitizedInput,
}));
```

Bad practice: logging sensitive data directly, which can cause security breaches or compliance violations (e.g., GDPR, HIPAA).

```javascript
console.log(`User logged in with password: ${event.password}`);
```

Why it matters: logging sensitive data can expose systems to attackers, breach compliance rules, and compromise user trust.
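Masking can be generalized into a small helper driven by a deny-list of field names. This is a sketch: `SENSITIVE_KEYS` and `redact` are illustrative, not an AWS utility, and the helper is shallow (nested objects would need recursion).

```javascript
// Sketch: shallow redaction of known-sensitive fields before logging.
const SENSITIVE_KEYS = new Set(["password", "token", "apiKey", "secret"]);

function redact(obj) {
  return Object.fromEntries(
    Object.entries(obj).map(([key, value]) =>
      [key, SENSITIVE_KEYS.has(key) ? "***" : value]
    )
  );
}

const event = { email: "user@example.com", password: "hunter2", token: "abc" };
const safe = redact(event);
console.log(JSON.stringify(safe)); // {"email":"user@example.com","password":"***","token":"***"}
```

Calling `redact(event)` once before any log statement keeps the masking policy in one place instead of repeating it at every call site.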
2. Set Log Retention Policies

Good practice: set a retention policy on CloudWatch log groups to prevent excessive log storage costs. AWS lets you configure retention settings (e.g., 7, 14, or 30 days).

Bad practice: leaving the default "Never Expire" retention policy, which stores logs indefinitely.

Why it matters: unmanaged logs increase costs and make it harder to find relevant data. Retaining logs only as long as needed reduces costs and keeps logs manageable.

3. Avoid Excessive Logging

Good practice: log only what is necessary to monitor, troubleshoot, and analyze system behavior, using info, debug, and error levels to prioritize logs appropriately.

```javascript
console.info("Function started processing...");
console.error("Failed to fetch data from DynamoDB: ", error.message);
```

Bad practice: logging every detail (e.g., full input payloads, every execution step), which unnecessarily increases log volume.

```javascript
console.log(`Received event: ${JSON.stringify(event)}`); // Avoid logging full payloads unnecessarily
```

Why it matters: excessive logging clutters log storage, increases costs, and makes it harder to isolate relevant logs.

4. Use Log Levels (Info, Debug, Error)

Good practice: use different log levels to distinguish critical from non-critical information:

- info: general execution logs (e.g., function start, successful completion).
- debug: detailed logs for development or troubleshooting.
- error: failure scenarios requiring immediate attention.

Bad practice: using a single log level (e.g., console.log() everywhere) without prioritization.

Why it matters: log levels make it easier to filter logs by severity and focus on critical issues in production.

Conclusion

In this episode of "Mastering AWS Lambda with Bao", we explored critical best practices for building reliable AWS Lambda functions, focusing on optimizing performance, error handling, and logging.
- Optimizing performance: reducing cold starts with smaller deployment packages, lightweight runtimes, and sensible VPC configuration significantly lowers latency. Strategies like moving initialization outside the handler and leveraging Provisioned Concurrency ensure smoother execution for latency-sensitive applications.
- Error handling: structured error responses and custom error classes make troubleshooting easier and help differentiate transient from persistent issues. Handling errors consistently improves system resilience.
- Retry logic: exponential backoff with jitter, built-in retry mechanisms, and sensible retry limits let Lambda functions handle failures gracefully without overwhelming dependent services.
- Logging: structured formats, contextual information, log levels, and appropriate retention policies enable better visibility, debugging, and cost control. Keeping sensitive data out of logs protects security and compliance.

By following these best practices, you can optimize Lambda function performance, reduce operational costs, and build scalable, reliable, and secure serverless applications with AWS Lambda.

In the next episode, we'll dive deeper into "Handling Failures with Dead Letter Queues (DLQs)", exploring how DLQs act as a safety net for capturing failed events and ensuring no data loss occurs in your workflows. Stay tuned!

Note [1]: Provisioned Concurrency is not a universal solution. While it eliminates cold starts, it also incurs additional costs, since pre-initialized environments are billed regardless of usage.

- When to use: latency-sensitive workloads such as APIs or real-time applications, where even a slight delay is unacceptable.
- When not to use: functions with unpredictable or low invocation rates (e.g., batch jobs, infrequent triggers); for such scenarios, on-demand concurrency may be more cost-effective.

              13/01/2025

              135

              Bao Dang D. Q.

              Knowledge


                Best Practices for Building Reliable AWS Lambda Functions

