
Automate Your Git Workflow with Git Hooks for Efficiency

24/12/2024


Bao Dang D. Q.

Have you ever wondered how you can make your Git workflow smarter and more efficient? What if repetitive tasks like validating commit messages, enforcing branch naming conventions, or preventing sensitive data leaks could happen automatically? Enter Git Hooks—a powerful feature in Git that enables automation at every step of your development process.

If you’ve worked with webhooks, the concept of Git Hooks might already feel familiar. Just as API events trigger webhooks, Git Hooks are scripts triggered by Git actions such as committing, pushing, or merging. These hooks allow developers to automate tasks, enforce standards, and improve the overall quality of their Git workflows.

By integrating Git Hooks into your project, you can gain numerous benefits, including clearer commit histories, fewer human errors, and smoother team collaboration. Developers can also define custom rules tailored to their Git flow, ensuring consistency and boosting productivity.

In this SupremeTech blog, I, Đang Đo Quang Bao, will introduce you to Git Hooks, explain how they work, and guide you through implementing them to transform your Git workflow. Let’s dive in!

What Are Git Hooks?

Git Hooks are customizable scripts that automatically execute when specific events occur in a Git repository. These events might include committing code, pushing changes, or merging branches. By leveraging Git Hooks, you can tailor Git’s behavior to your project’s requirements, automate repetitive tasks, and reduce the likelihood of human errors.

Imagine validating commit messages, running tests before a push, or preventing large file uploads—all without manual intervention. Git Hooks make this possible, enabling developers to integrate useful automation directly into their workflows.

Types of Git Hooks

Git Hooks come in two main categories, each serving distinct purposes:

Client-Side Hooks

These hooks run on the user’s local machine and are triggered by actions like committing or pushing changes. They are perfect for automating tasks like linting, testing, or enforcing commit message standards.

  • Examples:
    • pre-commit: Runs before a commit is finalized.
    • pre-push: Executes before pushing changes to a remote repository.
    • post-merge: Triggers after merging branches.
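
Under the hood, a client-side hook is just an executable script in the repository’s .git/hooks directory. As a minimal sketch (the test command is only a placeholder for whatever your project actually runs), a bare pre-commit hook might look like this:

#!/bin/bash
# .git/hooks/pre-commit (make it executable: chmod +x .git/hooks/pre-commit)
# Run the test suite before every commit; a non-zero exit code aborts the commit.
if ! npm test; then
  echo "❌ Tests failed. Commit aborted."
  exit 1
fi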

Server-Side Hooks

These hooks operate on the server hosting the repository and are used to enforce project-wide policies. They are ideal for ensuring consistent workflows across teams by validating changes before they’re accepted into the central repository.

  • Examples:
    • pre-receive: Runs before changes are accepted by the remote repository.
    • update: Executes when a branch or tag is updated on the server.
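
To make this concrete, here is a minimal sketch of a server-side pre-receive hook that blocks direct pushes to main. This assumes you administer the Git server yourself; hosted platforms such as GitHub or GitLab expose similar policies through branch protection settings instead:

#!/bin/bash
# hooks/pre-receive on the Git server
# Git passes one line per pushed ref on stdin: <old sha> <new sha> <ref name>
while read -r old_sha new_sha ref_name; do
  if [[ "$ref_name" == "refs/heads/main" ]]; then
    echo "❌ Direct pushes to 'main' are not allowed. Please open a pull request instead."
    exit 1
  fi
done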

My Journey to Git Hooks

When I was working on personal projects, Git management was fairly straightforward. There were no complex workflows, and mistakes were easy to spot and fix. However, everything changed when I joined SupremeTech and started collaborating on larger projects. Adhering to established Git flows across a team introduced new challenges. Minor missteps—like inconsistent commit messages, improper branch naming, accidental force pushes, or forgetting to run unit tests—quickly led to inefficiencies and avoidable errors.

That’s when I discovered the power of Git Hooks. By combining client-side Git Hooks with tools like Husky, ESLint, Jest, and commitlint, I could automate and streamline our Git processes. Some of the tasks I automated include:

  • Enforcing consistent commit message formats.
  • Validating branch naming conventions.
  • Automating testing and linting.
  • Preventing accidental force pushes and large file uploads.
  • Monitoring and blocking sensitive data in commits.

This level of automation was a game-changer. It improved productivity, reduced human errors, and allowed developers to focus on their core tasks while Git Hooks quietly enforced the rules in the background. It transformed Git from a version control tool into a seamless system for maintaining best practices.

Getting Started with Git Hooks

Setting up Git Hooks manually can be tedious, especially in team environments where consistency is critical. Tools like Husky simplify the process, allowing you to manage Git Hooks and integrate them into your workflows easily. By leveraging Husky, you can unlock the full potential of Git Hooks with minimal setup effort.

I’ll use Bun as the JavaScript runtime and package manager in this example. If you’re using npm or yarn, replace Bun-specific commands with their equivalents.
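
For reference, the npm equivalents of the Bun commands used below should look roughly like this (yarn users can substitute the corresponding yarn commands):

npm install --save-dev husky   # instead of: bun add -D husky
npx husky init                 # instead of: bunx husky init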

Setup Steps

1. Initialize Git: Start by initializing a Git repository if one doesn’t already exist

git init

2. Install Husky: Use Bun to add Husky as a development dependency

bun add -D husky

3. Enable Husky Hooks: Initialize Husky to set up Git Hooks for your project

bunx husky init

4. Verify the Setup: At this point, a folder named .husky will be created, which already includes a sample pre-commit hook. With this, the setup for Git Hooks is complete. Now, let’s customize it to optimize some simple processes.

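If you open the generated .husky/pre-commit file, it typically contains a single placeholder command that you can replace with your own checks (the exact content depends on the Husky version), for example:

# .husky/pre-commit (sample generated by husky init)
npm test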

Examples of Git Hook Automation

Git Hooks empower you to automate tedious yet essential tasks and enforce team-wide best practices. Below are four practical examples of how you can leverage Git Hooks to improve your workflow:

Commit Message Validation

Ensuring consistent and clear commit messages improves collaboration and makes Git history easier to understand. For example, enforce the following format:

pbi-203 - refactor - [description…]
[task-name] - [scope] - [changes]

Setup:

  1. Install Commitlint:
bun add -D husky @commitlint/{config-conventional,cli}
  2. Configure rules in commitlint.config.cjs:
module.exports = {
    rules: {
        'task-name-format': [2, 'always', /^pbi-\d+ -/],
        'scope-type-format': [2, 'always', /-\s(refactor|fix|feat|docs|test|chore|style)\s-\s\[[^\]]+\]$/]
    },
    plugins: [
        {
            rules: {
                'task-name-format': ({ raw }) => {
                    const regex = /^pbi-\d+ -/;
                    return [regex.test(raw),
                        `❌ Commit message must start with "pbi-<number> -". Example: "pbi-1234 - refactor - [optimize function]"`
                    ];
                },
                'scope-type-format': ({ raw }) => {
                    const regex = /-\s(refactor|fix|feat|docs|test|chore|style)\s-\s\[[^\]]+\]$/;
                    return [regex.test(raw),
                        `❌ Commit message must include a valid scope and description. Example: "pbi-1234 - refactor - [optimize function]".
                        \nValid scopes: refactor, fix, feat, docs, test, chore, style`
                    ];
                }
            }
        }
    ]
}
  3. Add Commitlint to the commit-msg hook:
echo "bunx commitlint --edit \$1" >> .husky/commit-msg
  4. With this, we have completed the commit message validation setup. Now, let’s test it to see how it works.
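As a quick check (the exact commitlint output will vary by version), a message that breaks the rules is now rejected by the commit-msg hook, while a conforming one goes through:

git commit -m "fix typo"                                   # ❌ rejected by the commit-msg hook
git commit -m "pbi-203 - refactor - [optimize function]"   # ✅ accepted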

Now, developers are required to follow this commit rule, which improves the readability of the Git history.

Automate Branch Naming Conventions

Enforce branch names like feature/pbi-199/add-validation.

  1. First, we will create a script in the project directory named scripts/check-branch-name.sh.
#!/bin/bash

# Define allowed branch naming pattern
branch_pattern="^(feature|bugfix|hotfix|release)/pbi-[0-9]+/[a-zA-Z0-9._-]+$"

# Get the current branch name
current_branch=$(git symbolic-ref --short HEAD)

# Check if the branch name matches the pattern
if [[ ! "$current_branch" =~ $branch_pattern ]]; then
  echo "❌ Branch name '$current_branch' is invalid!"
  echo "✅ Branch names must follow this pattern:"
  echo "   - feature/pbi-<number>/<description>"
  echo "   - bugfix/pbi-<number>/<description>"
  echo "   - hotfix/pbi-<number>/<description>"
  echo "   - release/pbi-<number>/<description>"
  exit 1
fi

echo "✅ Branch name '$current_branch' is valid."
  2. Add the above script execution command into the pre-push hook.
echo "bash ./scripts/check-branch-name.sh" >> .husky/pre-push
  3. Grant execute permissions to the check-branch-name.sh file.
chmod +x ./scripts/check-branch-name.sh
  4. Let’s test the result by pushing our code to the server.

Invalid case:

git checkout main
git push

Output:

❌ Branch name 'main' is invalid!
✅ Branch names must follow this pattern:
  - feature/pbi-<number>/<description>
  - bugfix/pbi-<number>/<description>
  - hotfix/pbi-<number>/<description>
  - release/pbi-<number>/<description>
husky - pre-push script failed (code 1)

Valid case:

git checkout -b feature/pbi-100/add-new-feature
git push

Output:

✅ Branch name 'feature/pbi-100/add-new-feature' is valid.

Prevent Accidental Force Pushes

Force pushes can overwrite shared branch history, causing significant problems in collaborative projects. We will extend the pre-push hook from the previous example to prevent accidental force pushes to critical branches like main or develop.

  1. Create a script named scripts/prevent-force-push.sh.
#!/bin/bash

# Define the protected branches
protected_branches=("main" "develop")

# Get the current branch name
current_branch=$(git symbolic-ref --short HEAD)

# Git does not pass flags like --force to the pre-push hook, so we detect a
# force push by checking whether the push would rewrite history (non-fast-forward).
if [[ " ${protected_branches[*]} " =~ " ${current_branch} " ]]; then
  # The hook receives one line per ref on stdin: <local ref> <local sha> <remote ref> <remote sha>
  while read -r local_ref local_sha remote_ref remote_sha; do
    # Skip refs that do not exist on the remote yet (remote sha is all zeros)
    if [[ "$remote_sha" =~ ^0+$ ]]; then
      continue
    fi

    # If the remote commit is not an ancestor of the local commit,
    # the push rewrites history, i.e. it is a force push
    if ! git merge-base --is-ancestor "$remote_sha" "$local_sha" 2>/dev/null; then
      echo "❌ Force pushing to the protected branch '${current_branch}' is not allowed!"
      exit 1
    fi
  done
fi

echo "✅ Push to '${current_branch}' is valid."
  2. Add the above script execution command into the pre-push hook.
echo "bash ./scripts/prevent-force-push.sh" >> .husky/pre-push
  3. Grant execute permissions to the prevent-force-push.sh file.
chmod +x ./scripts/prevent-force-push.sh
  4. Result:

Invalid case:

git checkout main
git push -f

Output:

❌ Force pushing to the protected branch 'main' is not allowed!
husky - pre-push script failed (code 1)

Valid case:

git checkout main
git push

Output:

✅ Push to 'main' is valid.

Monitor for Secrets in Commits

Developers sometimes accidentally include sensitive data in commits. We will set up a pre-commit hook that scans staged files for sensitive patterns (such as API keys, passwords, or private keys) and blocks the commit if any are found.

  1. Create a script named scripts/monitor-secrets-with-values.sh.
#!/bin/bash

# Define sensitive value patterns
patterns=(
  # Base64-encoded strings
  "([A-Za-z0-9+/]{40,})={0,2}"
  # PEM-style private keys
  "-----BEGIN RSA PRIVATE KEY-----"
  "-----BEGIN OPENSSH PRIVATE KEY-----"
  "-----BEGIN PRIVATE KEY-----"
  # AWS Access Key ID
  "AKIA[0-9A-Z]{16}"
  # AWS Secret Key
  "[a-zA-Z0-9/+=]{40}"
  # Email addresses (optional)
  "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}"
  # Others (e.g., passwords, tokens)
)

# Scan staged files for sensitive patterns
echo "🔍 Scanning staged files for sensitive values..."

# Get the list of staged files
staged_files=$(git diff --cached --name-only)

# Initialize a flag to track if any sensitive data is found
found_sensitive_data=false

# Loop through each file and pattern
for file in $staged_files; do
  # Skip files that no longer exist in the working tree (e.g., staged deletions)
  if [[ ! -f "$file" ]]; then
    continue
  fi

  # Skip binary files
  if [[ $(file --mime-type -b "$file") == "application/octet-stream" ]]; then
    continue
  fi

  # Scan each pattern using grep -E (extended regex)
  for pattern in "${patterns[@]}"; do
    if grep -E -- "$pattern" "$file"; then
      echo "❌ Sensitive value detected in file '$file': Pattern '$pattern'"
      found_sensitive_data=true
      break
    fi
  done
done

# If sensitive data is found, prevent the commit
if $found_sensitive_data; then
  echo "❌ Commit aborted. Please remove sensitive values before committing."
  exit 1
fi

echo "✅ No sensitive values detected. Proceeding with commit."
  2. Add the above script execution command into the pre-commit hook.
echo "bash ./scripts/monitor-secrets-with-values.sh" >> .husky/pre-commit
  3. Grant execute permissions to the monitor-secrets-with-values.sh file.
chmod +x ./scripts/monitor-secrets-with-values.sh
  4. Result:

Invalid case:

git add private
git commit -m "pbi-002 - chore - add unexpected private file"

Result:

🔍 Scanning staged files for sensitive values...
-----BEGIN OPENSSH PRIVATE KEY-----
❌ Sensitive value detected in file 'private': Pattern '-----BEGIN OPENSSH PRIVATE KEY-----'
❌ Commit aborted. Please remove sensitive values before committing.
husky - pre-commit script failed (code 1)

Valid case:

git reset private
git commit -m "pbi-002 - chore - remove unexpected private file"

Result:

🔍 Scanning staged files for sensitive values...
✅ No sensitive values detected. Proceeding with commit.
[main c575028] pbi-002 - chore - remove unexpected private file
4 files changed, 5 insertions(+)
create mode 100644 .env.example
create mode 100644 .husky/commit-msg
create mode 100644 .husky/pre-commit
create mode 100644 .husky/pre-push

Conclusion

“Humans make mistakes”, and in software development even minor errors can disrupt workflows or create inefficiencies. That’s where Git Hooks come in. By automating essential checks and enforcing best practices, Git Hooks reduce the chances of errors slipping through and ensure a smoother, more consistent workflow.

Tools like Husky make it easier to set up Git Hooks, allowing developers to focus on writing code instead of worrying about process compliance. Whether it’s validating commit messages, enforcing branch naming conventions, or preventing sensitive data from being committed, Git Hooks act as a safety net that ensures quality at every step.

If you want to optimize your Git workflow, now is the time to start integrating Git Hooks. With the proper setup, you can make your development process not only reliable but also effortless and efficient. Let automation handle the rules so your team can focus on building great software.
