
How to execute the Minimum Viable Product (MVP) approach in software development

12/02/2024


The concept of a Minimum Viable Product (MVP) has been applied across various industries for years, producing massive success for many adopters. However, even when a software product has a seemingly well-defined purpose, there are usually ways to improve how it solves a specific problem.

This is part of what makes the MVP approach a bit tricky when developing software. With that in mind, let’s define the minimum viable product and discuss how to apply it successfully when developing apps:

What is an MVP?

In a software development context, a minimum viable product (MVP) is the simplest version of a software product you can offer to the public to perform a particular function, with the prospect of improvements further down the road.

This product usually has a few essential features, and it’s released primarily to gauge the viability of a core idea, gain some market share early, and learn more about what users may want later. An MVP can also save you from sinking many resources into a product or feature that users aren’t interested in.

How to Build a Minimum Viable Product

Every app comprises several facets on the front and back end, so while the product idea may be clear, the question of which parts must be released first is more challenging. That said, here are some tips on how to build a minimum viable product and continuously apply this technique:

List the desired app capabilities

Say you want to build an app that enables people to store files in the cloud. Such an app can come with numerous capabilities. For example, you could allow users to organize uploaded files under folders, synchronize the storage across different devices, and share access to the files using public links.

But what if they want larger storage space and a search bar they can use to quickly find a file by typing in its name? Listing the possible functionality helps you identify the fundamental function, the complementary functions closest to it, and the mere nice-to-haves. Creating this list also lets you know which questions you want answered before the next iteration.
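To make this concrete, here is a minimal sketch of such a list for the hypothetical cloud storage app, written in TypeScript. The feature names, priorities, and open questions are purely illustrative assumptions, not a prescribed scope.

```typescript
// Illustrative only: a hypothetical feature list for the cloud storage example above.
type Priority = "core" | "complementary" | "nice-to-have";

interface Feature {
  name: string;
  priority: Priority;
  openQuestion?: string; // what user feedback should answer before the next iteration
}

const features: Feature[] = [
  { name: "Upload and download files", priority: "core" },
  { name: "Organize files into folders", priority: "complementary" },
  { name: "Sync storage across devices", priority: "complementary", openQuestion: "How many devices do users actually connect?" },
  { name: "Share files via public links", priority: "complementary" },
  { name: "Search files by name", priority: "nice-to-have", openQuestion: "Do users browse folders or search first?" },
  { name: "Larger storage tiers", priority: "nice-to-have" },
];

// Candidate scope for the first release: the core function plus its closest complements.
const mvpScope = features.filter((f) => f.priority !== "nice-to-have");
console.log(mvpScope.map((f) => f.name));
```

Even a list this small makes the trade-offs explicit: everything tagged as a nice-to-have becomes a question for a later iteration rather than a launch blocker.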

Ascertain the resources available

Another helpful step in building an MVP is to measure your resources, especially the funds. When you know the exact amount of money you have, you can easily strike certain features off the MVP's list. A few quick inquiries with developers, designers, marketers, and other necessary personnel will produce quotations that let you know what you can or can't afford.

This step can help in your fundraising efforts since you have an estimate of what each feature costs. So when your MVP is released, you’ll know how much you need to raise as you receive feedback. You’ll also have an easier time deciding which of the requested improvements you’ll fit into the next release.
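As a rough illustration, a calculation like the one below, with entirely made-up quotations and budget figures, shows how per-feature estimates translate into a scope you can actually afford:

```typescript
// Illustrative only: hypothetical quotations gathered from developers, designers, and marketers.
interface Quotation {
  feature: string;
  estimatedCost: number; // in your working currency
}

// Listed in priority order, most essential first.
const quotations: Quotation[] = [
  { feature: "Upload and download files", estimatedCost: 20000 },
  { feature: "Organize files into folders", estimatedCost: 8000 },
  { feature: "Sync storage across devices", estimatedCost: 15000 },
  { feature: "Share files via public links", estimatedCost: 6000 },
];

const budget = 40000; // made-up figure

// Keep features, in priority order, until the budget runs out.
let remaining = budget;
const affordable = quotations.filter((q) => {
  if (q.estimatedCost <= remaining) {
    remaining -= q.estimatedCost;
    return true;
  }
  return false;
});

console.log(affordable.map((q) => q.feature), `budget left: ${remaining}`);
```

The same numbers double as a fundraising target: whatever falls outside the affordable set is, roughly, what you still need to raise for the next release.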

Examine the competitor’s product

While some MVP champions emphasize a super lean product, there’s a limit to how minimalist you can be if there’s already someone doing what you want to do. If the competition has already offered certain complementary features, you may also have to provide those features at the very least.

If you roll out a much leaner MVP, many users will wonder why you even bothered releasing something that offers only a fraction of what the available alternatives do. However, this rule isn't absolute. The competition may offer many features that don't resonate with users, so on top of trying their product yourself, you should also look for other users' opinions on it.

Doing so lets you know which of their features users don't actually want and which relevant features they've delivered with flaws. Essentially, the competitor's product can guide you on what to emulate while also showing you how to differentiate yourself.

Set a timeframe

Some argue that money can expedite a software development project, but there’s only so much speed it can buy you. When you embark on a project, you need time to recruit employees and build and test the product. The more features you add to your initial release, the longer it takes to get it out.

So, when building an MVP, you should specify a precise date by which the product should be ready to go public. Once you’ve done so, you can talk to engineers and other technical staff, who will let you know whether it’s possible to complete a faultless build within that time.

Accordingly, you can then drop a few features where necessary. Additionally, a concrete release date does more than keep you on schedule; it can help you align marketing with development. For example, ahead of launch, you can invite people to sign up to be the first to try your new app.

As the sign-up list grows, you'll get a sense of the load your app will have to handle. This enables you to decide on infrastructure provisioning, customer support, and more. You're starting a conversation and creating awareness while adequately preparing for the test ahead.

You can undertake various efforts before releasing an MVP since every organization has unique goals. Nevertheless, to apply the MVP approach continuously, you ought to follow these tenets (a small sketch of how you might track them follows the list):

  • You should have some assumptions about what users may like.
  • Your MVP should embody these assumptions.
  • Once you receive feedback affirming or dispelling those assumptions, you shouldn't remain adamant. Instead, be ready to stop or pivot.
  • The steps leading up to your first MVP should be condensed into a straightforward, repeatable process.
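One lightweight way to keep these tenets actionable is to record each assumption next to the feature that embodies it and the signal you expect feedback to give. The structure below is only a sketch with invented fields and values, not a required tool:

```typescript
// Illustrative sketch: tracking MVP assumptions and what the feedback says about them.
type Verdict = "unknown" | "affirmed" | "dispelled";

interface Assumption {
  statement: string;  // what you believe users want
  embodiedBy: string; // the MVP feature that tests the belief
  signal: string;     // the feedback or metric you will watch
  verdict: Verdict;
}

const assumptions: Assumption[] = [
  {
    statement: "Users want to share files with people outside the app",
    embodiedBy: "Public share links",
    signal: "Share-link creation rate in the first month",
    verdict: "unknown",
  },
];

// After feedback arrives, anything dispelled is a candidate to stop or pivot on.
const pivotCandidates = assumptions.filter((a) => a.verdict === "dispelled");
console.log(pivotCandidates.map((a) => a.embodiedBy));
```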

Wrapping Up

Ultimately, a lot changes once you release an MVP. One day, it may be about getting many sign-ups, and the next, it may be about increasing the percentage of paying customers. You may need five employees in one month and eight in the next. There’ll be a lot to track and many parties to please.

Fortunately, here at SupremeTech, we focus on building mobile and web apps from scratch using agile methodologies, so contact us for a free consultation on app MVPs.
