
Getting started with Microservices

· 19 min read
Ajay Dhangar
Founder of CodeHarborHub

1. Understanding the importance of Microservices

  • Microservices are an architectural style that structures an application as a collection of small, loosely coupled services. Each service is self-contained, focused on a specific business functionality, and can be developed, deployed, and scaled independently.
  • Each service is responsible for a specific functionality and communicates with other services through well-defined APIs. This modular approach to software design offers several benefits, including increased agility, scalability, and resilience, and enables faster development cycles and easier maintenance.

1.1. Monolithic vs Microservices​

img1

Monolithic Architecture:

  • Imagine all your items (toys, books, clothes, etc.) are stored in one big box. Finding something specific can be challenging because everything is jumbled together. If you need to update or change something, you have to dig through the entire box, which can be time-consuming and error-prone.
  • The same thing happens in software development. Monolithic architectures are like one big chunk of code where all components of an application are tightly integrated. Making changes or updates to one part of the code can have unintended consequences on other parts, leading to complex and risky development and maintenance processes.

Microservices Architecture:

  • Now, imagine your items are stored in separate, labeled containers (toys in one box, books in another, clothes in a third, etc.). Finding something specific is much easier because everything is organized and accessible. If you need to update or change something, you only need to focus on the relevant container, making the process faster and more efficient.
  • Similarly, microservices architecture breaks down an application into small, independent services, each responsible for specific functionalities. This modular approach allows for faster development cycles, easier maintenance, and better scalability. Each service can be developed, deployed, and scaled independently, promoting agility and resilience in software development.

Let's take a deeper look at the differences between these two patterns.

1.1.1. Size and Structure​

  • Monolithic: One large, interconnected structure where all components of an application are tightly integrated.
  • Microservices: Composed of small, independent services, each responsible for specific functionalities of an application.

1.1.2. Development and Deployment​

  • Monolithic: Typically developed and deployed as a single unit.
  • Microservices: Each service can be developed and deployed independently, allowing for faster iteration and updates.

1.1.3. Modification​

  • Monolithic: Making changes often requires modifying the entire codebase. This can be time-consuming and risky, as a change in one part of the code may inadvertently affect other parts.
  • Microservices: Each service is focused on a specific functionality, making it easier to modify and update. Changes can be made to individual services without impacting the entire application. This modular approach allows for faster development cycles and easier maintenance.

1.1.4. Scaling​

  • Monolithic: Scaling a monolithic application usually involves replicating the entire application, including components that may not require additional resources. This can lead to inefficient resource utilization.
  • Microservices: Enables granular scaling, where only the services experiencing high demand need to be scaled. This results in more efficient resource utilization and better performance scalability.

1.1.5. Technology Stack​

  • Monolithic: Usually built using a single technology stack (e.g., one programming language, framework).
  • Microservices: Services can be built using different technologies best suited for their specific functionalities.

1.1.6. Fault Isolation and Resilience​

  • Monolithic: A failure in one part of the application can bring down the entire system.
  • Microservices: Faults are isolated to individual services, so a failure in one service does not necessarily impact the entire application, enhancing resilience.

1.1.7. Data Management​

  • Monolithic: Typically uses a single database shared by all components, which can lead to data coupling and scalability challenges.
  • Microservices: Each service can have its own database, allowing for better data isolation and scalability.

1.1.8. Testing​

  • Monolithic: Testing can be complex and time-consuming, as changes may impact multiple functionalities.
  • Microservices: Testing can be more focused and granular, with each service tested independently, facilitating easier debugging and maintenance.

1.1.9. Team Organization​

  • Monolithic: Development teams often work on the same codebase, leading to potential conflicts and dependencies.
  • Microservices: Teams can be organized around individual services, allowing for greater autonomy and faster development cycles.

1.1.10. Communication and Integration​

  • Monolithic: Communication between different components typically occurs through function or method calls within the same codebase.
  • Microservices: Communication between services usually happens over network protocols like HTTP or message queues, promoting loose coupling and interoperability.

2. Developing a Microservice application​

  • Great! Now that we have a clear understanding of microservices, let’s embark on developing a simple microservice project.

  • We'll build an expense tracker. In this example, users can add expenses, with each expense associated with a category. To keep this example simple, I'll omit user management. Thus, we'll focus on two microservices: the expense-service and the category-service.

  • Creating a microservice application involves multiple steps. Let’s navigate through them methodically, one step at a time.

2.1. Developing service registry​

  • A service registry, like Eureka Server, plays a pivotal role in managing the dynamic nature of microservice architectures.

  • The Service Registry serves as a centralized repository for storing information about all the available services in the microservices architecture. This includes details such as IP addresses, port numbers, and other metadata required for communication.

  • When a microservice instance (like expense or category) starts up, it registers itself with the Service Registry.

  • As services start, stop, or scale up/down dynamically in response to changing demand, they update their registration information in the Service Registry accordingly.

  • When one service needs to communicate with another (e.g., expense needs to call category), it consults the Service Registry to obtain the necessary connection details. By querying the Service Registry, services can discover the locations and endpoints of the target services dynamically, without needing to maintain hardcoded configurations (see the sketch after this list).

  • In scenarios where multiple instances of a service (such as expense) are running across different servers, the Service Registry can facilitate load balancing by distributing incoming requests among these instances.

  • By maintaining awareness of all available instances, the Service Registry helps optimize resource utilization and improve system performance.

  • So, let's create a service registry for our application.
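For illustration, here is a minimal sketch of how a registered client could look up another service programmatically using Spring Cloud's generic DiscoveryClient API. The class name CategoryServiceLocator is hypothetical and only for this example; with OpenFeign (shown later in this article), you rarely need to do this by hand.

CategoryServiceLocator.java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class CategoryServiceLocator {

    @Autowired
    private DiscoveryClient discoveryClient;

    // Asks the registry for all instances registered under the given logical
    // name and returns the base URI of the first one found.
    public String categoryServiceUri() {
        return discoveryClient.getInstances("CATEGORY-SERVICE").stream()
                .findFirst()
                .map(instance -> instance.getUri().toString())
                .orElseThrow(() -> new IllegalStateException(
                        "No CATEGORY-SERVICE instance registered"));
    }
}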

2.1.1. Create a Spring Boot project as shown below.

img2
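The project-setup screenshot is not reproduced here. If you are creating the project by hand, the one dependency the registry needs is the Eureka server starter; the snippet below is a sketch, and assumes the Spring Cloud version is managed through the usual spring-cloud-dependencies BOM.

pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>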

2.1.2. Make Changes in Your application.properties File.​

# application.properties

# This name is used for identifying the application in the environment
spring.application.name=service-registry

# By default, Eureka Server uses port 8761 for communication with client applications.
# You can change it if you want.
server.port=8761

# Disables the Eureka client's capability to fetch the registry
# of other services from the Eureka server, as it is not acting as a Eureka client.
eureka.client.fetch-registry=false

# Disables the Eureka client's registration with the Eureka server.
# Since this application is the Eureka server itself,
# it does not need to register with any other Eureka server.
eureka.client.register-with-eureka=false

2.1.3. Update ServiceRegistryApplication.java file

ServiceRegistryApplication.java
@SpringBootApplication
@EnableEurekaServer
public class ServiceRegistryApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServiceRegistryApplication.class, args);
    }

}

The @EnableEurekaServer annotation is used to enable the Eureka Server functionality in the Spring Boot application. It configures the application to act as a Eureka Server, allowing it to register and manage services within the microservices architecture.

2.1.4. Run the application

To see the Spring Eureka dashboard, visit http://localhost:8761 in your browser. You should see the dashboard as below.

img3

You can see that under 'Instances currently registered with Eureka', it displays the message "No instances available", because none of the microservices in our architecture have registered themselves with the Eureka Server yet.

2.2. Create our first microservice: CATEGORY-SERVICE​

2.2.1. Create a Spring Boot project as shown below.

img4
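Again, the screenshot is not reproduced here. Judging from the code that follows, the project likely needs the web, MongoDB, Eureka client, and Lombok starters; this is an assumption based on the annotations used below, not an exhaustive list.

pom.xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>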

2.2.2. Add @EnableDiscoveryClient in CategoryServiceApplication class​

CategoryServiceApplication.java
@SpringBootApplication
@EnableDiscoveryClient
public class CategoryServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(CategoryServiceApplication.class, args);
    }

}

The @EnableDiscoveryClient annotation is used to enable service discovery functionality in the Spring Boot application. It signifies that this microservice will register itself with a service registry (like Eureka, Consul, etc.) upon startup and will be discoverable by other microservices within the same architecture.

2.2.3. Configure application.yml file​

application.yml
spring:
  # This name is used for identifying the application in the environment
  application:
    name: category-service

  # MongoDB database configuration.
  # Make sure you have created the database before running the application
  data:
    mongodb:
      host: 127.0.0.1
      port: 27017
      database: category_service

# The port on which the Spring Boot application will listen for incoming HTTP requests
server:
  port: 9000

# The URL of the Eureka Server where the application will register itself for service discovery.
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

2.2.4. Create category entity​

Category.java
@Data
@Document(collection = "categories")
@Builder
public class Category {
    @Id
    private String id;
    private String categoryName;
}

2.2.5. Create DTOs to send API responses and accept category details

ApiResponseDto.java / CategoryRequestDto.java
@Data
@Builder
public class ApiResponseDto<T> {
    private boolean isSuccess;
    private String message;
    private T response;
}

@Data
@AllArgsConstructor
@NoArgsConstructor
public class CategoryRequestDto {
    private String name;
}

2.2.6. Create category repository​

CategoryRepository.java
@Repository
public interface CategoryRepository extends MongoRepository<Category, String> {
    boolean existsByCategoryName(String categoryName);
}

2.2.7. Create category service​

CategoryService.java
@Service
public interface CategoryService {

    ResponseEntity<ApiResponseDto<?>> getAllCategories();

    ResponseEntity<ApiResponseDto<?>> getCategoryById(String categoryId);

    ResponseEntity<ApiResponseDto<?>> createCategory(CategoryRequestDto categoryRequestDto);
}
CategoryServiceImpl.java
@Component
public class CategoryServiceImpl implements CategoryService {

    @Autowired
    CategoryRepository categoryRepository;

    @Override
    public ResponseEntity<ApiResponseDto<?>> getAllCategories() {
        try {
            List<Category> categories = categoryRepository.findAll();
            return ResponseEntity.ok(
                    ApiResponseDto.builder()
                            .isSuccess(true)
                            .response(categories)
                            .message(categories.size() + " results found!")
                            .build()
            );
        } catch (Exception e) {
            // Try to create a custom exception and handle it using exception handlers
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(
                    ApiResponseDto.builder()
                            .isSuccess(false)
                            .response("Unable to process right now. Try again later!")
                            .message("No results found!")
                            .build()
            );
        }
    }

    @Override
    public ResponseEntity<ApiResponseDto<?>> getCategoryById(String categoryId) {
        try {
            Category category = categoryRepository.findById(categoryId).orElse(null);
            return ResponseEntity.ok(
                    ApiResponseDto.builder()
                            .isSuccess(true)
                            .response(category)
                            .build()
            );
        } catch (Exception e) {
            // Try to create a custom exception and handle it using exception handlers
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(
                    ApiResponseDto.builder()
                            .isSuccess(false)
                            .response("Unable to process right now. Try again later!")
                            .message("No results found!")
                            .build()
            );
        }
    }

    @Override
    public ResponseEntity<ApiResponseDto<?>> createCategory(CategoryRequestDto categoryRequestDto) {
        try {
            if (categoryRepository.existsByCategoryName(categoryRequestDto.getName())) {
                // Try to create a custom exception and handle it using exception handlers
                return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(
                        ApiResponseDto.builder()
                                .isSuccess(false)
                                .response("Category name already exists: " + categoryRequestDto.getName())
                                .message("Unable to create Category.")
                                .build()
                );
            }

            Category category = Category.builder()
                    .categoryName(categoryRequestDto.getName())
                    .build();

            categoryRepository.insert(category);
            return ResponseEntity.status(HttpStatus.CREATED).body(
                    ApiResponseDto.builder()
                            .isSuccess(true)
                            .message("Category saved successfully!")
                            .build()
            );

        } catch (Exception e) {
            // Try to create a custom exception and handle it using exception handlers
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(
                    ApiResponseDto.builder()
                            .isSuccess(false)
                            .response("Unable to process right now. Try again later!")
                            .message("Unable to create Category.")
                            .build()
            );
        }
    }

}

2.2.8. Create category controller

CategoryController.java
@RestController
@RequestMapping("/category")
public class CategoryController {

    @Autowired
    private CategoryService categoryService;

    @PostMapping("/new")
    public ResponseEntity<ApiResponseDto<?>> createCategory(@RequestBody CategoryRequestDto categoryRequestDto) {
        return categoryService.createCategory(categoryRequestDto);
    }

    @GetMapping("/all")
    public ResponseEntity<ApiResponseDto<?>> getAllCategories() {
        return categoryService.getAllCategories();
    }

    @GetMapping("/{id}")
    public ResponseEntity<ApiResponseDto<?>> getCategoryById(@PathVariable String id) {
        return categoryService.getCategoryById(id);
    }

}

2.2.9. Run category-service application

  • After running the service, refresh the Eureka dashboard in your browser. You can now see CATEGORY-SERVICE listed under the available instances.

    img8

Then check the endpoints using Postman to ensure that everything is working as expected. A sample request and response are sketched below.
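For reference, a create-category call might look like the following. This is a sketch based on the DTOs above; the exact JSON field names depend on how your JSON library serializes the Lombok-generated getters (for example, isSuccess may appear as success).

POST http://localhost:9000/category/new
{
  "name": "Food"
}

HTTP 201 Created
{
  "success": true,
  "message": "Category saved successfully!",
  "response": null
}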

img5

img6

2.3. Create our second microservice: EXPENSE-SERVICE​

2.3.1. Create a Spring Boot project as shown below.

img7

2.3.2. Add @EnableDiscoveryClient in ExpenseServiceApplication class

ExpenseServiceApplication.java
@SpringBootApplication
@EnableDiscoveryClient
public class ExpenseServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(ExpenseServiceApplication.class, args);
    }

}

2.3.3. Configure application.yml file​

application.yml
spring:
  # This name is used for identifying the application in the environment
  application:
    name: expense-service

  # MongoDB database configuration.
  # Make sure you have created the database before running the application
  data:
    mongodb:
      host: 127.0.0.1
      port: 27017
      database: expense_service

# The port on which the Spring Boot application will listen for incoming HTTP requests.
# Use a port different from category-service (which already uses 9000) so both
# services can run side by side locally.
server:
  port: 9001

# The URL of the Eureka Server where the application will register itself for service discovery.
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

2.3.4. Create expense entity​

Expense.java
@Document(collection = "expenses")
@Data
@Builder
public class Expense {
    @Id
    private String id;
    private String description;
    private double amount;
    private LocalDate date;
    private String categoryId;
}

2.3.5. Create DTOs to send API responses and accept expense details

ApiResponseDto.java / ExpenseRequestDto.java
@Data
@Builder
public class ApiResponseDto<T> {
    private boolean isSuccess;
    private String message;
    private T response;
}

@Data
@AllArgsConstructor
@NoArgsConstructor // a no-args constructor is needed for JSON deserialization
public class ExpenseRequestDto {
    private String description;
    private double amount;
    private LocalDate date;
    private String categoryId;
}

2.3.6. Create expense repository​

ExpenseRepository.java
@Repository
public interface ExpenseRepository extends MongoRepository<Expense, String> {
}

2.3.7. Create expense service​

ExpenseService.java
@Service
public interface ExpenseService {
    ResponseEntity<ApiResponseDto<?>> addExpense(ExpenseRequestDto requestDto);
    ResponseEntity<ApiResponseDto<?>> getAllExpenses();
}

ExpenseServiceImpl.java
@Component
public class ExpenseServiceImpl implements ExpenseService {

    @Autowired
    private ExpenseRepository expenseRepository;

    @Override
    public ResponseEntity<ApiResponseDto<?>> addExpense(ExpenseRequestDto requestDto) {
        // Placeholder so the class compiles; we will implement this in section 2.4.4.
        throw new UnsupportedOperationException("Not implemented yet");
    }

    @Override
    public ResponseEntity<ApiResponseDto<?>> getAllExpenses() {
        try {
            List<Expense> expenses = expenseRepository.findAll();
            return ResponseEntity.ok(
                    ApiResponseDto.builder()
                            .isSuccess(true)
                            .response(expenses)
                            .message(expenses.size() + " results found!")
                            .build()
            );
        } catch (Exception e) {
            // Try to create a custom exception and handle it using exception handlers
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(
                    ApiResponseDto.builder()
                            .isSuccess(false)
                            .response("Unable to process right now. Try again later!")
                            .message("No results found!")
                            .build()
            );
        }
    }
}

2.3.8. Create expense controller​

ExpenseController.java
@RestController
@RequestMapping("/expense")
public class ExpenseController {

    @Autowired
    private ExpenseService expenseService;

    @PostMapping("/new")
    public ResponseEntity<ApiResponseDto<?>> addExpense(@RequestBody ExpenseRequestDto requestDto) {
        return expenseService.addExpense(requestDto);
    }

    @GetMapping("/all")
    public ResponseEntity<ApiResponseDto<?>> getAllExpenses() {
        return expenseService.getAllExpenses();
    }

}

2.3.9. Run expense-service application​

  • After running the service, refresh the Eureka dashboard again in your browser. You can now see EXPENSE-SERVICE listed under the available instances.

    img15

  • Let's check the endpoints for confirmation.

    img9

  • Done. We have created two microservices successfully. Let's move on to the next step.

  • Imagine the scenario of creating an expense: before saving the expense in the database, it's crucial to validate that the category ID provided is valid, meaning it exists in the category database. However, the Expense Service lacks direct access to category information, as categories are managed by the Category Service. This necessitates communication between the two services.

2.4. Communication between microservices​

  • Before persisting the expense data, the Expense Service needs to validate the category ID present in the CategoryRequestDto.
  • To verify the category ID’s validity, the Expense Service communicates with the Category Service. It sends a request containing the category ID to the Category Service.
  • Based on the response from the Category Service, the Expense Service proceeds accordingly.
  • OpenFeign is a declarative HTTP client library for Java that simplifies the process of making HTTP requests to other microservices.
  • OpenFeign integrates seamlessly with Spring Cloud, providing additional features for service discovery and load balancing.
  • Let's see how the expense service interacts with the category service using OpenFeign.

2.4.1. Add OpenFeign dependency in pom.xml​

  • Add the below dependency in the expense-service pom.xml.

    pom.xml
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>

2.4.2. Create FeignClient for category service​

CategoryFeignService.java
@FeignClient("CATEGORY-SERVICE")
public interface CategoryFeignService {

    @GetMapping("/category/{id}")
    ResponseEntity<ApiResponseDto<CategoryDto>> getCategoryById(@PathVariable String id);

}
  • @FeignClient: This annotation marks the interface as a Feign client. It specifies the name of the target microservice ("CATEGORY-SERVICE"). Feign will use this name to locate the service within the service registry.
  • CategoryFeignService: The interface definition for the Feign client. It declares methods that will be used to make HTTP requests to the Category Service.
  • Use the same method name and parameters you used in the category controller of the category-service, and make sure the defined method's implementation is available there. We implemented the getCategoryById method in the category-service earlier; recall it again. A minimal sketch of the CategoryDto referenced above follows.
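The Feign method above also references ApiResponseDto and CategoryDto, so both types must exist in the expense-service. ApiResponseDto was already created in section 2.3.5; below is a minimal CategoryDto sketch. The field names mirror the Category entity of the category-service, which they must match for JSON deserialization to work.

CategoryDto.java
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class CategoryDto {
    // Must mirror the fields returned by the category-service
    private String id;
    private String categoryName;
}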

2.4.3. Update ExpenseServiceApplication.java​

ExpenseServiceApplication.java
@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class ExpenseServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(ExpenseServiceApplication.class, args);
    }

}
  • @EnableFeignClients annotation enables the Feign client in the Spring Boot application. It scans the classpath for interfaces annotated with @FeignClient and generates proxy implementations for them. These proxies are used to make HTTP requests to other microservices or external APIs.

2.4.4. Implement addExpense method in expense service​

ExpenseServiceImpl.java
@Component
public class ExpenseServiceImpl implements ExpenseService {

    @Autowired
    private ExpenseRepository expenseRepository;

    @Autowired
    private CategoryFeignService categoryFeignService;

    @Override
    public ResponseEntity<ApiResponseDto<?>> addExpense(ExpenseRequestDto requestDto) {
        try {
            // fetching the category from the category service
            CategoryDto category = categoryFeignService.getCategoryById(requestDto.getCategoryId()).getBody().getResponse();

            if (category == null) {
                // Try to create a custom exception and handle it using exception handlers
                return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(
                        ApiResponseDto.builder()
                                .isSuccess(false)
                                .response("Category does not exist with id: " + requestDto.getCategoryId())
                                .message("Unable to save expense.")
                                .build()
                );
            }

            Expense expense = Expense.builder()
                    .description(requestDto.getDescription())
                    .amount(requestDto.getAmount())
                    .date(requestDto.getDate())
                    .categoryId(requestDto.getCategoryId())
                    .build();

            expenseRepository.insert(expense);
            return ResponseEntity.status(HttpStatus.CREATED).body(
                    ApiResponseDto.builder()
                            .isSuccess(true)
                            .message("Expense saved successfully!")
                            .build()
            );

        } catch (Exception e) {
            // Try to create a custom exception and handle it using exception handlers
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(
                    ApiResponseDto.builder()
                            .isSuccess(false)
                            .response("Unable to process right now. Try again later!")
                            .message("Unable to save expense.")
                            .build()
            );
        }
    }

    // getAllExpenses method here.

}

2.4.5. Run the application​

  • There will be no changes in the Eureka dashboard; it will display the same as before.

  • Let's check the add-expense endpoint.

    img10

    img11

  • In wrapping up our discussion on microservices, there's one critical aspect left to address: the challenge of accessing microservices individually via their own port numbers. This approach becomes impractical as the number of microservices, or instances thereof, increases. That's precisely where an API gateway steps in.

2.5. Developing an API gateway

  • Imagine having numerous microservices or multiple instances of a single microservice scattered across your architecture. Directly accessing each one via its unique port number would result in complexity and maintenance headaches. An API gateway acts as a centralized entry point for clients, providing a unified interface to access the various microservices.
  • Think of the API gateway as the traffic cop of your microservices architecture. It routes incoming requests to the appropriate microservice or instance, based on predefined rules or configurations.
  • Single Entry Point: Clients interact with the API gateway, unaware of the underlying microservices’ locations or configurations. This decouples clients from the complexities of service discovery and routing.
  • Load Balancing: The API gateway can distribute incoming requests across multiple instances of a microservice, ensuring optimal resource utilization and high availability.
  • Security: Centralized authentication, authorization, and security policies can be enforced at the API gateway, safeguarding the entire system from unauthorized access and attacks.
  • Monitoring and Analytics: By serving as a centralized point of contact, the API gateway facilitates comprehensive monitoring, logging, and analytics of incoming and outgoing traffic, providing valuable insights into system performance and usage patterns.

Now let's see how we can develop an API gateway for our application.

2.5.1. Create spring project​

  • Create a Spring Boot project as shown below.

    img12
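The screenshot is not reproduced here. Since the configuration below uses the spring.cloud.gateway.mvc properties, the project presumably includes the Spring Cloud Gateway Server MVC starter along with the Eureka client; this is an assumption inferred from the configuration, not a definitive dependency list.

pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway-mvc</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>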

2.5.2. Add the @EnableDiscoveryClient annotation in ApiGatewayApplication.java

ApiGatewayApplication.java
@SpringBootApplication
@EnableDiscoveryClient
public class ApiGatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayApplication.class, args);
    }

}

2.5.3. Configure application.yml file​

application.yml
# The port number (8080) on which the API gateway will listen for incoming requests.
server:
  port: 8080

# The name of the Spring Boot application.
spring:
  application:
    name: api-gateway
  cloud:
    gateway:
      mvc:
        # Enables discovery-based routing. When enabled, Spring Cloud Gateway will
        # automatically discover routes for registered services using service discovery.
        discovery:
          locator:
            enabled: true
        # Defining the routing rules for accessing microservices
        routes:
          - id: category-service
            uri: lb://CATEGORY-SERVICE
            predicates:
              - Path=/category-service/**
            filters:
              - StripPrefix=1

          - id: expense-service
            uri: lb://EXPENSE-SERVICE
            predicates:
              - Path=/expense-service/**
            filters:
              - StripPrefix=1

# The URL of the Eureka Server where the API gateway will register itself and discover other services.
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  • id: category-service β€” This is an identifier for the route. It’s used internally within the API gateway configuration to refer to this specific route. It doesn’t have any significance outside of the configuration context.
  • uri: Specifies the URI (Uniform Resource Identifier) of the target microservice.
  • lb://: This prefix indicates that load balancing should be applied to the target URI. The lb stands for "load balancer".
  • CATEGORY-SERVICE: This is the logical name of the microservice registered with the service registry (Eureka). Use the same name as displayed in the Eureka server.
  • Predicates are conditions that must be met for a request to match this route and be forwarded to the target microservice.
  • - Path=/category-service/**: This predicate specifies that requests must have a path starting with "/category-service/" followed by any additional path segments. The "**" wildcard matches any number of additional path segments. You can use any prefix you wish.
  • Filters are applied to requests before they are forwarded to the target microservice. They can modify request headers, paths, or payloads, among other things.
  • - StripPrefix=1: This filter removes one path segment, effectively stripping "/category-service" from the request path. This is necessary because the routing predicate matches requests starting with "/category-service/", but the target microservice expects requests without this prefix.

2.5.4. Test the application​

  • For instance, let’s say you want to retrieve all expenses. You would typically use the URI localhost:8080/expense-service/expense/all.

  • The API gateway matches the incoming request against the defined routes based on the configured predicates. In this case, it identifies that the request path starts with "/expense-service/", indicating that it should be directed to the expense service.

  • Before forwarding the request to the expense service, the API gateway rewrites the URI to match the expected format of the microservice. Since the expense service expects requests without the β€œ/expense-service” prefix, the API gateway removes this prefix from the URI.

  • Once the URI is properly formatted, the API gateway forwards the request to the identified microservice. In this example, it sends the request to the expense service, ensuring that it reaches the correct endpoint (β€œ/expense/all”).

    Let's check this in Postman.

    img13

    img14

That’s it! We have successfully developed a microservices architecture with a service registry, communication between microservices, and an API gateway. This modular approach to software design offers several benefits, including increased agility, scalability, and resilience. By breaking down complex applications into smaller, more manageable services, we can achieve faster development cycles, easier maintenance, and better scalability. Microservices architecture is a powerful design pattern that can help organizations adapt to changing business requirements and deliver innovative solutions to customers.

Conclusion​

Microservices architecture is a design pattern that structures an application as a collection of small, loosely coupled services. Each service is self-contained, focused on a specific business functionality, and can be developed, deployed, and scaled independently. This modular approach to software design offers several benefits, including increased agility, scalability, and resilience.

Chrome Extension Using MERN

· 4 min read
Khushi Kalra
Full Stack Developer

Creating a Chrome extension can seem like a daunting task, especially when you're trying to combine it with technologies like ReactJS and MongoDB. When I first set out to build my extension, I found it challenging to find a perfect YouTube tutorial or blog post that covered everything I needed. So, I turned to StackOverflow and other resources to piece together my project.

You can always take help from my github repository: https://github.com/abckhush/Basic-Chrome-Extension

Here's a step-by-step guide based on my experience:

Creating Frontend of the Extension​

Step 1: Create a React App​

First, you'll need to set up a basic React application. You can do this using Create React App:

npx create-react-app my-chrome-extension
cd my-chrome-extension

Step 2: Change the Manifest JSON File​

The manifest.json file is crucial for Chrome extensions as it contains metadata about your extension. Update the manifest.json file in the public folder with the following content:

{
  "manifest_version": 3,
  "name": "Chrome Extension",
  "version": "1.0.0",
  "description": "My First Chrome Extension Using MERN",
  "action": {
    "default_popup": "index.html",
    "default_title": "Open"
  },
  "permissions": [
    "scripting"
  ]
}

Step 3: Add Height and Width​

To ensure your extension has the proper dimensions, update the index.css file in the src folder and add height and width:

body {
  min-width: 300px;
  min-height: 200px;
}

Check Point​

To check whether you have followed all the steps properly, you can try running the extension in the browser.

  1. Run npm run build in the terminal.
  2. Open Chrome and go to chrome://extensions/.
  3. Enable "Developer mode" in the top right corner.
  4. Click "Load unpacked" and select the build folder from your React app.
  5. Check that you can see the default React page with the height and width you set.

Step 4: Change Rendering to MemoryRouter​

This is the most crucial step. BrowserRouter, the default in React applications, is not supported in Chrome extensions. We are going to change it to MemoryRouter.

  1. Install React Router:
npm install react-router-dom
  2. Update index.js to include routes:
import React from 'react';
import { createRoot } from 'react-dom/client';
import App from './App';

import { MemoryRouter as Router } from 'react-router-dom';

// React 18 entry point; on React 17 or below, use ReactDOM.render instead.
const root = createRoot(document.querySelector('#root'));
root.render(
  <React.StrictMode>
    <Router>
      <App />
    </Router>
  </React.StrictMode>
);

Step 5: Adding Routing​

  1. We will make a "Components" folder in src and add a Home.jsx:
import React from 'react';

function Home() {
  return (
    <div>
      <h1>Welcome to My Home Page</h1>
      <p>This is a simple home page.</p>
    </div>
  );
}

export default Home;
  2. We will update our App.js as:
import React from 'react';
import { Routes, Route } from 'react-router-dom';
import Home from './Components/Home.jsx';

function App() {
  return (
    <div>
      <Routes>
        <Route path="/" element={<Home />} />
      </Routes>
    </div>
  );
}

export default App;

Note: You can run "npm run build" again and reload the build folder to see the changes made.

Adding Backend to the Extension​

Step 1: Create a Backend Folder​

In your project root, create a new folder called backend:

mkdir backend
cd backend

Step 2: Add server.js​

Create a server.js file in the backend folder:

// Load environment variables from the .env file created in Step 3
// (requires the dotenv package: npm install dotenv)
require('dotenv').config();

const express = require('express');
const mongoose = require('mongoose');
const cors = require('cors');

const app = express();

app.use(cors());
app.use(express.json());

mongoose.connect(process.env.MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log('MongoDB connected'))
  .catch(err => console.log(err));

app.get('/', (req, res) => {
  res.send('Hello, world!');
});

const PORT = process.env.PORT || 5000;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});

Step 3: Add a .env file​

MONGO_URI=your_mongodb_connection_string
PORT=5000

Building a Chrome extension with ReactJS and MongoDB was a learning experience filled with challenges and triumphs. While finding the perfect tutorial was tough, the process of solving problems using StackOverflow and other resources was incredibly rewarding. I hope this guide helps you in your journey to create your own Chrome extension.

Feel free to connect on github, and happy coding!

React JS

· 12 min read

React is a JavaScript library primarily used for building user interfaces in single-page applications. While it's often integrated with tools like Webpack for bundling JavaScript and CSS files, React itself does not directly incorporate Webpack. Despite its nature as a library rather than a full framework or programming language, React remains instrumental in modern web development.

React offers various extensions for entire application architectural support, such as Flux and React Native, beyond mere UI.

Why React?​

  • Declarative Nature: React's declarative approach allows developers to describe the desired UI state, and React handles the rendering efficiently. This simplifies the development process by abstracting away the manual DOM manipulation.

  • Improved Performance: React uses Virtual DOM, a lightweight representation of the actual DOM. By comparing the previous and current states of this Virtual DOM, React determines the minimal set of DOM operations needed to update the UI, resulting in faster rendering and better performance.

  • Unidirectional Data Flow: In React, data flows in a unidirectional manner, typically from parent to child components. This ensures that any change in the parent component automatically propagates to its child components, simplifying the understanding of data changes and making it easier to trace errors.

  • Reusable Components: React promotes the creation of reusable UI components. Each component encapsulates its own logic and UI, allowing developers to compose complex UIs from simpler components. This modularity not only improves code organization but also accelerates development time.

  • Versatility: React's versatility extends beyond web development to mobile app development with React Native. This framework leverages React's component-based architecture to build native mobile apps using JavaScript and React principles.

  • Developer Tools: React is supported by dedicated developer tools like the React Developer Tools extension for Chrome. These tools facilitate debugging by providing insights into component hierarchies, state changes, and performance optimizations.

ReactJS History​

When compared to other technologies on the market, React is a relatively new technology. React was created by Jordan Walke, a software engineer at Facebook, in 2011. Initially implemented in Facebook's News Feed, its success quickly led to its adoption in Instagram, showcasing its power and versatility in building dynamic user interfaces.

React Features​

Currently, ReactJS is quickly gaining popularity as one of the best JavaScript frameworks among web developers, and it is playing an essential role in the front-end ecosystem. The important features of ReactJS are as follows.

image

JSX​

JSX is a syntax extension for JavaScript that allows developers to write HTML-like code within JavaScript. This makes it easier to create and understand the structure of React components, as it closely resembles the final output in the browser. While not mandatory, using JSX is recommended in React development because it enhances readability and simplifies the creation of user interfaces.

Components​

React components enable reusability and encapsulation by breaking down UIs into self-contained pieces with their own structure, style, and behavior. This promotes code reuse, as components can be used across different parts of the application, enhancing maintainability and reducing bugs, while ensuring a clean separation of concerns.

One-way Data Binding​

One-way data binding is a pattern where data flows in a single direction, typically from the model to the view. This ensures that the state of an application is predictable and easier to manage. In the context of JavaScript frameworks, Flux and Redux are popular architectures that facilitate one-way data binding and state management.

Flux​

Flux is an architecture pattern created by Facebook for building client-side web applications. It emphasizes unidirectional data flow and is composed of four key components:

Action:​
  • Actions are plain JavaScript objects or functions that contain the type of event and any associated data (payload). Actions are the only source of information for the store.
Dispatcher:​
  • The dispatcher is a central hub that manages all the data flow in a Flux application. When an action is created, it is dispatched to all stores that have registered with the dispatcher. The dispatcher’s role is to handle these actions and ensure they reach the appropriate store.
Store:​
  • Stores hold the application state and logic. They listen for actions from the dispatcher and update their state accordingly. Stores then emit a change event to notify the view layer to re-render. Each store manages a specific portion of the application's state.
View:​
  • The view is the presentation layer, typically composed of React components. Views listen to changes from the stores and re-render themselves accordingly. They can also generate new actions based on user interactions and send them to the dispatcher.

Virtual DOM​

A virtual DOM object is a representation of the original DOM object. It works like one-way data binding. Whenever any modification happens in the web application, the entire UI is re-rendered in the virtual DOM representation. React then checks the difference between the previous DOM representation and the new one. Once that is done, the real DOM updates only the things that have actually changed. This makes the application faster, and there is no wastage of memory.

Simplicity​

ReactJS uses JSX, which makes the application simple to code as well as to understand. We know that ReactJS takes a component-based approach, which makes the code reusable as needed. This makes it simple to use and learn.

Performance​

ReactJS is known to be a great performer. This feature makes it much better than other frameworks out there today. The reason behind this is that it manages a virtual DOM. The DOM is a cross-platform programming API which deals with HTML, XML, or XHTML. The DOM exists entirely in memory. Due to this, when we create a component, we do not write directly to the DOM. Instead, we write virtual components that React turns into the DOM, leading to smoother and faster performance.

React Ecosystem​

The React ecosystem is vast and diverse, encompassing a wide range of libraries and tools that enhance and extend the capabilities of React. These tools help in state management, routing, form handling, styling, and more, making React a robust framework for building complex and feature-rich applications. Here are some of the most popular libraries and tools commonly used with React:

State Management​

Redux​

Description: Redux is a state management library that provides a predictable state container for JavaScript apps. It helps manage application state and enables powerful debugging capabilities through tools like the Redux DevTools.

Key Features: Centralized state, immutability, middleware support.

MobX​

Description: MobX is a simple, scalable, and battle-tested state management solution. It uses observable data to efficiently react to state changes and update the UI.

Key Features: Observable state, actions, reactions, computed values.

Recoil​

Description: Recoil is a state management library for React developed by Facebook. It provides a set of utilities to manage state in a React application with minimal boilerplate.

Key Features: Atoms, selectors, asynchronous state management.

Routing​

React Router​

Description: React Router is the most widely used routing library for React. It allows for dynamic routing in a web application, enabling navigation between different components and views.

Key Features: Nested routes, dynamic routing, query parameters.

Form Handling​

Formik​

Description: Formik is a library that simplifies form management in React applications. It helps with form validation, error handling, and form submission.

Key Features: Form state management, validation schema support, easy integration with validation libraries like Yup.

React Hook Form​

Description: React Hook Form is a performant, flexible library for managing forms in React. It leverages React hooks for form state and validation, minimizing re-renders and improving performance.

Key Features: Minimal re-renders, easy integration with UI libraries, built-in validation support.

Styling​

Styled Components​

Description: Styled Components is a library for styling React applications using tagged template literals. It allows for writing actual CSS to style components, keeping styles scoped and maintaining a clean component structure.

Key Features: Scoped styling, theme support, dynamic styling.

Emotion​

Description: Emotion is a flexible and powerful library for writing CSS styles with JavaScript. It provides both a styled component API and a CSS-in-JS approach.

Key Features: Performant styles, server-side rendering, powerful theming capabilities.

Testing​

Jest​

Description: Jest is a JavaScript testing framework developed by Facebook, designed to ensure correctness of any JavaScript codebase. It works seamlessly with React, providing a simple and efficient way to test components and applications.

Key Features: Snapshot testing, coverage reports, mocking capabilities.

React Testing Library​

Description: React Testing Library is a testing utility that encourages testing best practices by focusing on user interactions and component behavior rather than implementation details.

Key Features: Lightweight, integrates with Jest, emphasizes testing UI from the user’s perspective.

Build and Tooling​

Create React App​

Description: Create React App (CRA) is a CLI tool that sets up a new React project with a sensible default configuration. It handles configuration for tools like Webpack, Babel, ESLint, and more.

Key Features: Zero configuration, fast setup, extensibility.

Next.js​

Description: Next.js is a React framework that enables server-side rendering and static site generation for React applications. It simplifies the process of building complex React applications with features like API routes, file-based routing, and automatic code splitting.

Key Features: Server-side rendering, static site generation, API routes, fast refresh.

The React ecosystem is continuously evolving, with new tools and libraries emerging to address various needs and challenges in modern web development. These tools help streamline the development process, enhance performance, and ensure maintainability of React applications.

Pros and Cons of ReactJS​

Today, ReactJS is a highly used open-source JavaScript library. It helps in creating impressive web apps that require minimal effort and coding. The main objective of ReactJS is to develop User Interfaces (UI) that improve the speed of apps. The important pros and cons of ReactJS are given below:

Advantages of ReactJS

Easy to Learn and Use​

ReactJS is much easier to learn and use. It comes with a good supply of documentation, tutorials, and training resources. Any developer who comes from a JavaScript background can easily understand and start creating web apps using React in a few days. It is the V (view part) in the MVC (Model-View-Controller) model, and is referred to as "one of the JavaScript frameworks." It is not fully featured, but has the advantage of being an open-source JavaScript User Interface (UI) library, which helps to execute tasks in a better manner.

Creating Dynamic Web Applications Becomes Easier​

JSX for Readability and Maintainability:​

  • JSX (JavaScript XML) allows developers to write HTML elements in JavaScript. This mixture of HTML and JavaScript makes the code more readable and maintainable. For example, instead of splitting code between HTML and JavaScript files, JSX enables developers to write them together, making it easier to understand and work with the code.

Reusable Components​

A ReactJS web application is made up of multiple components, and each component has its own logic and controls. These components are responsible for outputting a small, reusable piece of HTML code which can be reused wherever you need it. The reusable code helps to make your apps easier to develop and maintain. These components can be nested with other components to allow complex applications to be built from simple building blocks. ReactJS uses a virtual DOM based mechanism to fill data into the HTML DOM. The virtual DOM works fast as it only changes individual DOM elements instead of reloading the complete DOM every time.

Performance Enhancement​

Virtual DOM vs. Real DOM:​

  • React uses a virtual DOM to optimize updates and rendering. When the state of a component changes, React first updates the virtual DOM, a lightweight copy of the real DOM. It then compares this virtual DOM with a snapshot of the real DOM before applying only the necessary changes to the real DOM. Instead of re-rendering the entire DOM tree, React only updates the parts that have changed, which significantly boosts performance, especially in complex applications.

Known to be SEO Friendly​

Traditional JavaScript frameworks have an issue in dealing with SEO. Search engines generally have trouble reading JavaScript-heavy applications, and many web developers have complained about this problem. ReactJS overcomes this problem and helps applications be easily navigated by various search engines. This is because React.js applications can run on the server, with the virtual DOM being rendered and returned to the browser as a regular web page.

The Benefit of Having JavaScript Library​

Today, ReactJS is chosen by many web developers because it offers a very rich JavaScript library. The JavaScript library provides more flexibility to web developers to choose the way they want to work.

Disadvantages of ReactJS

The high pace of development​

Continuous Learning and Updates:​

  • The React ecosystem evolves rapidly, with frequent updates and new releases. While these updates bring improvements and new features, they also mean developers need to constantly learn and adapt. For example, React Hooks, introduced in version 16.8, brought a significant change in how state and side effects are handled.

Developers had to quickly learn and integrate this new feature, which can be challenging and time-consuming.

Poor Documentation​

Rapid Updates Leading to Outdated Information:​

  • While the official React documentation has improved, the rapid pace of updates can sometimes lead to outdated or incomplete information. For example, when new features like Concurrent Mode or Suspense are introduced, documentation might lag behind, making it difficult for developers to find accurate and up-to-date information.

Developers might need to rely on community forums, blog posts, or other unofficial sources to fill in the gaps, which can be frustrating and time-consuming.

View Part​

ReactJS covers only the UI layer of the app and nothing else. So you still need to choose some other technologies to get a complete tooling set for development of the project.

JSX as a barrier​

ReactJS uses JSX, a syntax extension that allows HTML and JavaScript to be mixed together. This approach has its own benefits, but some members of the development community consider JSX a barrier, especially for new developers, who complain about the complexity of its learning curve.


SQL

· 7 min read

Structured query language (SQL) is a programming language for storing and processing information in a relational database. A relational database stores information in tabular form, with rows and columns representing different data attributes and the various relationships between the data values. You can use SQL statements to store, update, remove, search, and retrieve information from the database. You can also use SQL to maintain and optimize database performance.

Why is SQL important?​

Structured query language (SQL) is a popular query language that is frequently used in all types of applications. Data analysts and developers learn and use SQL because it integrates well with different programming languages. For example, they can embed SQL queries in the Java programming language to build high-performing data processing applications with major SQL database systems such as Oracle or MS SQL Server, as sketched below. SQL is also fairly easy to learn, as it uses common English keywords in its statements.
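As a concrete illustration of embedding SQL in Java, the sketch below queries the Mattress_table used later in this article over JDBC. The connection URL, credentials, and the MySQL driver on the classpath are assumptions for the example; any JDBC-compliant database works the same way.

MattressQuery.java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MattressQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical MySQL connection details; adjust for your database.
        String url = "jdbc:mysql://localhost:3306/shop";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT brand_name, cost FROM Mattress_table WHERE cost < ?")) {
            ps.setInt(1, 500); // bind the parameter safely instead of concatenating strings
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("brand_name") + ": " + rs.getInt("cost"));
                }
            }
        }
    }
}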

History​

SQL was invented in the 1970s based on the relational data model. It was initially known as the structured English query language (SEQUEL). The term was later shortened to SQL. Oracle, formerly known as Relational Software, became the first vendor to offer a commercial SQL relational database management system.

Process of SQL​

When we execute a SQL command on any relational database management system, the system automatically finds the best routine to carry out our request, and the SQL engine determines how to interpret that particular command.

Structured Query Language contains the following four components in its process:

  • Query Dispatcher
  • Optimization Engines
  • Classic Query Engine
  • SQL Query Engine, etc.

A classic query engine allows data professionals and users to maintain non-SQL queries. The architecture of SQL is shown in the following diagram: image

What are the components of a SQL system?​

Relational database management systems use structured query language (SQL) to store and manage data. The system stores multiple database tables that relate to each other. MS SQL Server, MySQL, or MS Access are examples of relational database management systems. The following are the components of such a system.

SQL table​

A SQL table is the basic element of a relational database. The SQL database table consists of rows and columns. Database engineers create relationships between multiple database tables to optimize data storage space.

For example, the database engineer creates a SQL table for products in a store: image

SQL statements​

SQL statements, or SQL queries, are valid instructions that relational database management systems understand. Software developers build SQL statements by using different SQL language elements. SQL language elements are components such as identifiers, variables, and search conditions that form a correct SQL statement.

For example, the following SQL statement uses a SQL INSERT command to store Mattress Brand A, priced $499, into a table named Mattress_table, with column names brand_name and cost:

INSERT INTO Mattress_table (brand_name, cost)
VALUES ('A', 499);

Stored procedures​

Stored procedures are a collection of one or more SQL statements stored in the relational database. Software developers use stored procedures to improve efficiency and performance. For example, they can create a stored procedure for updating sales tables instead of writing the same SQL statement in different applications.

What are SQL commands?​

Structured query language (SQL) commands are specific keywords or SQL statements that developers use to manipulate the data stored in a relational database. You can categorize SQL commands as follows.

Data definition language​

Data definition language (DDL) refers to SQL commands that design the database structure. Database engineers use DDL to create and modify database objects based on the business requirements. For example, the database engineer uses the CREATE command to create database objects such as tables, views, and indexes.

Data query language​

Data query language (DQL) consists of instructions for retrieving data stored in relational databases. Software applications use the SELECT command to filter and return specific results from a SQL table.

Data manipulation language​

Data manipulation language (DML) statements write new information or modify existing records in a relational database. For example, an application uses the INSERT command to store a new record in the database.

Data control language​

Database administrators use data control language (DCL) to manage or authorize database access for other users. For example, they can use the GRANT command to permit certain applications to manipulate one or more tables.

Transaction control language​

The relational engine uses transaction control language (TCL) to automatically make database changes. For example, the database uses the ROLLBACK command to undo an erroneous transaction.

What is MySQL?​

MySQL is an open-source relational database management system offered by Oracle. Developers can download and use MySQL without paying a licensing fee. They can install MySQL on different operating systems or cloud servers. MySQL is a popular database system for web applications.

SQL vs. MySQL​

Structured query language (SQL) is a standard language for database creation and manipulation. MySQL is a relational database program that uses SQL queries. While SQL commands are defined by international standards, the MySQL software undergoes continual upgrades and improvements.

What is NoSQL?​

NoSQL refers to non-relational databases that don't use tables to store data. Developers store information in different types of NoSQL databases, including graphs, documents, and key-values. NoSQL databases are popular for modern applications because they are horizontally scalable. Horizontal scaling means increasing the processing power by adding more computers that run NoSQL software.

SQL vs. NoSQL​

Structured query language (SQL) provides a uniform data manipulation language, but NoSQL implementation is dependent on different technologies. Developers use SQL for transactional and analytical applications, whereas NoSQL is suitable for responsive, heavy-usage applications.

Advantages of SQL​

SQL provides various advantages which make it more popular in the field of data science. It is a perfect query language which allows data professionals and users to communicate with the database. Following are the best advantages or benefits of Structured Query Language:

  1. No programming needed

SQL does not require a large number of coding lines for managing the database systems. We can easily access and maintain the database by using simple SQL syntactical rules. These simple rules make the SQL user-friendly.

  2. High-Speed Query Processing

SQL queries retrieve large amounts of data from the database quickly and efficiently. Insert, delete, and update operations also complete in very little time.

  3. Standardized Language

SQL follows the long-established ISO and ANSI standards, which give its users a uniform platform across the globe.

  4. Portability

SQL can be used on desktop computers, laptops, tablets, and even smartphones, and it can be embedded in other applications according to the user's requirements.

  5. Interactive Language

SQL is easy to learn and understand, and because it is a simple query language, it is a convenient way to communicate with the database. Even complex queries can be answered in a few seconds.

  6. More than One Data View

SQL also lets you define multiple views of the database structure for different database users.

Disadvantages of SQL​

Alongside its advantages, SQL also has some disadvantages:

  1. Cost

Some SQL versions have high operating costs, which puts them out of reach for some programmers.

  2. Complex Interface

Another significant disadvantage is that the interface of Structured Query Language can be complex, which makes the database difficult to use and manage.

  3. Partial Database Control

Business rules are hidden inside the database engine, so data professionals and users working through this query language cannot have full control over the database.

Getting started with MERN

· 19 min read
Abhijith M S
Full Stack Developer
Ajay Dhangar
Founder of CodeHarborHub

A comprehensive guide to get you started with MERN stack. From building an API with Express.js to creating a React app, this guide covers all the basics of the MERN stack.

Getting started with MERN

caution

This article assumes you have a basic understanding of web development concepts and technologies. If you are new to web development, you might want to familiarize yourself with HTML, CSS, JavaScript, and Node.js before diving into the MERN stack.

Have you ever heard of the MERN stack? If you are a web developer or aspiring to become one, you might have come across this term. The MERN stack is a popular web development stack that consists of four key technologies: MongoDB, Express.js, React.js, and Node.js. Each of these technologies plays a crucial role in building modern web applications.

In this blog post, we will cover the basics of the MERN stack and walk you through the process of building a simple MERN application. We will start by setting up a MongoDB Atlas cluster, building an API with Express.js, creating a React app, and connecting the frontend to the backend. We will also cover styling and making requests from the frontend.

info

If you prefer to jump straight to the code, you can find the GitHub repository for this project here and star the repo to show your support. Feel free to fork the repository and experiment with the code.

What we’re covering

  • Prerequisites
  • What is a Web Framework?
  • Configuring a MongoDB Atlas Cluster
  • Building an API with Express.js
  • Creating a React App
  • Setting up a Proxy from the Frontend React App to your Backend API Server
  • Styling and making Requests from the frontend

Prerequisites​

Before we get started, make sure you have the following installed on your system:

  • Node.js
  • npm
  • MongoDB Compass (optional, for local MongoDB)
  • Postman (optional, for testing API endpoints)
  • Visual Studio Code (or any code editor of your choice)
  • A basic understanding of HTML, CSS, JavaScript, and Node.js
  • A GitHub account (optional, for version control)
  • A MongoDB Atlas account (optional, for cloud-based MongoDB)
  • A basic understanding of web development concepts and technologies
  • A cup of coffee ☕️ (optional, but highly recommended)
  • A pinch of curiosity 🧐

What is a Web Framework?​

A web framework is a software framework designed to support the development of web applications, including web services, web resources, and web APIs. Web frameworks provide a standard way to build and deploy web applications by providing libraries, APIs, and tools to streamline the development process.

In the context of the MERN stack, each technology plays a specific role:

  • MongoDB: A NoSQL database that stores data in a flexible, JSON-like format.
  • Express.js: A Node.js web application framework that provides a robust set of features for building web applications and APIs.
  • React.js: A JavaScript library for building user interfaces and single-page applications.
  • Node.js: A JavaScript runtime built on Chrome's V8 JavaScript engine that allows you to run JavaScript code outside of a web browser.
  • MERN: A full-stack development framework that combines MongoDB, Express.js, React.js, and Node.js to build modern web applications.
  • MongoDB Atlas: A cloud-based database service that allows you to store your data in a serverless environment.
  • API: An Application Programming Interface that defines how two services can communicate and interact with each other through requests and responses.
  • React Router: A routing library for React that allows you to define routes and navigate between different components in a single-page application.
  • CSS: Cascading Style Sheets that define the visual presentation of a web page, including layout, colors, fonts, and animations.
  • HTML: Hypertext Markup Language that defines the structure and content of a web page.
  • JavaScript: A programming language that enables interactive and dynamic web content.
  • npm: A package manager for Node.js that allows you to install, manage, and share JavaScript packages.

You might have heard of acronyms like MERN, MEVN, MEAN, LAMP, and PERN. These are examples of some of the most popular web development stacks. Each of these stacks has its own set of technologies and tools that work together to build web applications.

| Framework | Technologies | Description |
| --- | --- | --- |
| MERN | MongoDB, Express.js, React.js, Node.js | A full-stack development framework for building modern web applications. |
| MEVN | MongoDB, Express.js, Vue.js, Node.js | A full-stack development framework similar to MERN, but with Vue.js instead of React.js. |
| MEAN | MongoDB, Express.js, Angular, Node.js | A full-stack development framework with Angular instead of React.js or Vue.js. |
| LAMP | Linux, Apache, MySQL, PHP | A traditional web development stack that uses Linux as the operating system, Apache as the web server, MySQL as the database, and PHP as the server-side scripting language. |
| PERN | PostgreSQL, Express.js, React.js, Node.js | A full-stack development framework similar to MERN, but with PostgreSQL instead of MongoDB. |
info

Fun fact: HTMX has been making waves in the community because it lets you stay in a hypertext format (that is what HTML is 😌) while gaining access to AJAX, CSS Transitions, WebSockets, and Server-Sent Events directly in HTML, using attributes.

Configuring a MongoDB Atlas Cluster​

MongoDB Atlas is a cloud-based database service that allows you to store your data in a serverless environment. It provides a fully managed database solution that eliminates the need for manual configuration, maintenance, and scaling of databases.

To get started with MongoDB Atlas, follow these steps:

  1. Create an Account: Sign up for a MongoDB Atlas account at https://www.mongodb.com/cloud/atlas.

  2. Create a New Cluster: Click on the "Build a Cluster" button to create a new cluster.

  3. Choose a Configuration: Select a cluster tier that fits your needs (e.g., M0, which is free and suitable for small applications).

  4. Choose a Provider: Choose your preferred cloud provider (e.g., AWS, Azure, Google Cloud).

  5. Create Deployment: Click on the "Create Deployment" button to create your cluster.

  6. Connect to Your Cluster: Once your cluster is created, click on the "Connect" button.

  7. Add Your IP Address: Click on the "Add Your Current IP Address" button to allow your IP address to access the cluster.

  8. Create a Database User: Click on the "Create a Database User" button to create a new user for your database. Make sure to note down the username and password.

  9. Choose a Connection Method: Choose the "Connect Your Application" option to get the connection string for your application.

  10. Copy the Connection String: Copy the connection string and replace the username and password with the ones you created earlier. The connection string should look something like this:

    mongodb+srv://<username>:<password>@cluster0.tpgdder.mongodb.net/?retryWrites=true&w=majority&appName=Cluster0
    info

    Make sure to replace <username> and <password> with the username and password you created for your database user.

  11. Connect to Your Cluster: Use the connection string to connect to your MongoDB Atlas cluster from your application. You can use the MongoDB Node.js driver to connect to your cluster and perform database operations. Here is an example of how you can connect to your cluster using the Node.js driver:

    app.js
    const { MongoClient } = require('mongodb');

    const uri = 'mongodb+srv://<username>:<password>@cluster0.tpgdder.mongodb.net/?retryWrites=true&w=majority&appName=Cluster0';

    const client = new MongoClient(uri);

    async function run() {
      try {
        await client.connect();

        const database = client.db('mydatabase');
        const collection = database.collection('mycollection');

        // Perform database operations here

      } finally {
        await client.close();
      }
    }

    run().catch(console.dir);

Congratulations! You have successfully configured a MongoDB Atlas cluster and connected to it using the Node.js driver. You are now ready to build your MERN application.

Building an API with Express.js​

Express.js is a minimal and flexible Node.js web application framework that provides a robust set of features for building web applications and APIs. It is widely used in the industry due to its simplicity and scalability.

Let's get to building an API with Express.js. Follow these steps to create a simple API:

  1. Create a Project Folder: Create a new folder for your project and navigate to it in your terminal.

    mkdir SimpleMernApp
    cd SimpleMernApp
  2. Initialize a New Node.js Project: Run npm init -y to initialize a new Node.js project with default settings.

    npm init -y
  3. Install Express.js: Run npm install express to install Express.js as a dependency for your project.

    npm install express
  4. Create an app.js File: Create a new file called app.js in your project folder and add the following code to create a simple Express.js server.

    app.js
    const express = require('express');
    const app = express();
    const PORT = process.env.PORT || 4000;

    app.get('/', (req, res) => {
      res.send('Hello, World!');
    });

    app.listen(PORT, () => {
      console.log(`Server is running on http://localhost:${PORT}`);
    });
  5. Start the Express.js Server: Run node app.js to start the Express.js server.

    node app.js
  6. Test the API Endpoint: Open http://localhost:4000/ in your browser to see the message "Hello, World!" displayed. This confirms that your Express.js server is running successfully.

  7. Create Additional Routes: You can create additional routes and API endpoints by adding more app.get() or app.post() methods to handle different requests.

    app.js
    app.get('/api/books', (req, res) => {
      res.json({ message: 'Get all books' });
    });

    app.post('/api/books', (req, res) => {
      res.json({ message: 'Add a new book' });
    });

    app.put('/api/books/:id', (req, res) => {
      res.json({ message: 'Update a book' });
    });

    app.delete('/api/books/:id', (req, res) => {
      res.json({ message: 'Delete a book' });
    });
  8. Install Additional Dependencies: You can install additional dependencies like mongoose for MongoDB integration, cors for enabling CORS, and dotenv for managing environment variables.

    npm install mongoose cors dotenv
  9. Create a .env File: Create a new file called .env in your project folder and add your environment variables.

    PORT=4000
    MONGODB_URI=mongodb+srv://<username>:<password>@cluster0.tpgdder.mongodb.net/mydatabase
  10. Update the app.js File: Update your app.js file to use the environment variables from the .env file.

    app.js
    require('dotenv').config();
    const express = require('express');
    const mongoose = require('mongoose');
    const cors = require('cors');
    const app = express();
    const PORT = process.env.PORT || 4000;
    const MONGODB_URI = process.env.MONGODB_URI;

    mongoose.connect(MONGODB_URI);

    // Connection errors are emitted asynchronously, so listen for events
    // instead of wrapping the call in try/catch
    mongoose.connection.on('connected', () => {
      console.log('Connected to MongoDB');
    });
    mongoose.connection.on('error', (error) => {
      console.log('Error connecting to MongoDB:', error);
    });

    app.use(cors());
    app.use(express.json());

    app.get('/', (req, res) => {
      res.send('Hello, World!');
    });

    app.listen(PORT, () => {
      console.log(`Server is running on http://localhost:${PORT}`);
    });
  11. Create a models Folder: Create a new folder called models in your project folder to store your MongoDB models.

    mkdir models
  12. Create a Book Model: Create a new file called Book.js in the models folder and define your MongoDB model.

    models/Book.js
    const mongoose = require('mongoose');

    const bookSchema = new mongoose.Schema({
      name: { type: String, required: true },
    });

    const Book = mongoose.model('Book', bookSchema);

    module.exports = Book;
  13. Update the app.js File: Update your app.js file to use the Book model.

    app.js
    require('dotenv').config();
    const express = require('express');
    const mongoose = require('mongoose');
    const cors = require('cors');
    const Book = require('./models/Book');
    const app = express();
    const PORT = process.env.PORT || 4000;
    const MONGODB_URI = process.env.MONGODB_URI;

    mongoose.connect(MONGODB_URI);

    // Listen for connection events; try/catch would not catch these
    // asynchronous errors
    mongoose.connection.on('connected', () => {
      console.log('Connected to MongoDB');
    });
    mongoose.connection.on('error', (error) => {
      console.log('Error connecting to MongoDB:', error);
    });

    app.use(cors());
    app.use(express.json());

    app.get('/', (req, res) => {
      res.send('Hello, World!');
    });

    app.get('/api/books', async (req, res) => {
      try {
        const books = await Book.find();
        res.json(books);
      } catch (error) {
        res.status(500).json({ message: error.message });
      }
    });

    app.post('/api/books', async (req, res) => {
      const book = new Book({
        name: req.body.name,
      });

      try {
        const newBook = await book.save();
        res.status(201).json(newBook);
      } catch (error) {
        res.status(400).json({ message: error.message });
      }
    });

    app.listen(PORT, () => {
      console.log(`Server is running on http://localhost:${PORT}`);
    });
  14. Test the API Endpoints: Use Postman or a similar tool to test your API endpoints. You can send GET and POST requests to /api/books to get all books and add a new book, respectively. Make sure to check the response status codes and messages to ensure that your API is working correctly (see the curl sketch after this list for a terminal-based alternative).

  15. Create Additional Models and Routes: You can create additional models and routes for your API by following the same steps as above. Make sure to define your models, create routes to handle different requests, and test your API endpoints to ensure they are working as expected.

  16. Deploy Your API: You can deploy your Express.js API to a cloud platform like Heroku, AWS, or Google Cloud Platform to make it accessible to the public. Make sure to configure your deployment settings, set up environment variables, and test your API in a production environment.
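
As a terminal-based alternative to Postman, here is a minimal curl sketch, assuming the server from the steps above is running on port 4000 (the book name is made up):

curl http://localhost:4000/api/books

curl -X POST http://localhost:4000/api/books \
  -H "Content-Type: application/json" \
  -d '{"name": "My First Book"}'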

note

Top Free Hosting Platforms for Node.js

If you are new to Express.js, MongoDB, or Mongoose, I recommend checking out the official documentation for each technology to learn more about their features, capabilities, and best practices.

Congratulations! You have successfully built an API with Express.js and connected it to a MongoDB database. You are now ready to create a frontend application using React.js.

Creating a React App​

React.js is a JavaScript library for building user interfaces and single-page applications. It is widely used in the industry due to its component-based architecture, virtual DOM, and declarative syntax.

Let's get started with creating a React app. Follow these steps to create a simple React app:

  1. Create a New React App: Run npx create-react-app frontend to create a new React app named frontend.

    npx create-react-app frontend
  2. Navigate to the React App: Run cd frontend to navigate to the React app folder.

    cd frontend
  3. Start the React App: Run npm start to start the React app in development mode.

    npm start
  4. Open the React App: Open http://localhost:3000/ in your browser to see the React app running.

    http://localhost:3000

    Edit src/App.js and save to reload.

    Learn React

  5. Create Additional Components: You can create additional components in the src/components folder to organize your code and reuse components across your app.

    mkdir src/components
  6. Now, create a new file called Home.js in the src/components folder and add the following code to create a simple home component.

    src/components/Home.js
    import React from 'react';

    const Home = () => {
      return (
        <div>
          <h2>Welcome to the Home Page</h2>
        </div>
      );
    };

    export default Home;
  7. Update the App.js File: Update your App.js file to use the Home component.

    src/App.js
    import React from 'react';
    import Home from './components/Home';

    function App() {
      return (
        <div>
          <Home />
        </div>
      );
    }

    export default App;
  8. Now, you can see the Home component displayed in the browser window.

    http://localhost:3000

    Welcome to the Home Page

Setting up a Proxy from the Frontend React App to your Backend API Server​

When you are developing a full-stack application with a separate frontend and backend, you might run into issues with cross-origin requests. To avoid these issues during development, you can configure the React development server to proxy API requests to your backend server.

To set up a proxy, follow these steps:

  1. Update the package.json File: Update the package.json file in your React app to include a proxy setting that points to your backend API server.

    frontend/package.json
    {
      "name": "frontend",
      "version": "0.1.0",
      "private": true,
      "proxy": "http://localhost:4000"
    }
  2. Restart the React App: Run npm start to restart the React app with the new proxy setting.

    npm start
  3. Test the Proxy: Make a request to your backend API server from your frontend React app using the proxy setting.

    src/components/Home.js
    import React, { useEffect } from 'react';

    const Home = () => {
      useEffect(() => {
        fetch('/api/books')
          .then(response => response.json())
          .then(data => console.log(data))
          .catch(error => console.error(error));
      }, []);

      return (
        <div>
          <h2>Welcome to the Home Page</h2>
        </div>
      );
    };

    export default Home;
  4. Check the Console: Open the browser console to see the data fetched from your backend API server. This confirms that the proxy is working correctly.

    [ { id: 1, title: 'Book 1' }, { id: 2, title: 'Book 2' }, { id: 3, title: 'Book 3' } ]
  5. Update the API Endpoints: Update your API endpoints in the Express.js server to return data in JSON format.

    app.js
    app.get('/api/books', async (req, res) => {
      try {
        const books = [
          { id: 1, title: 'Book 1' },
          { id: 2, title: 'Book 2' },
          { id: 3, title: 'Book 3' },
        ];
        res.json(books);
      } catch (error) {
        res.status(500).json({ message: error.message });
      }
    });
  6. Test the API Endpoints: Make a request to the /api/books endpoint from your frontend React app to fetch the data and display it on the browser window.

    src/components/Home.js
    import React, { useEffect, useState } from 'react';

    const Home = () => {
      const [books, setBooks] = useState([]);

      useEffect(() => {
        fetch('/api/books')
          .then(response => response.json())
          .then(data => setBooks(data))
          .catch(error => console.error(error));
      }, []);

      return (
        <div>
          <h2>Welcome to the Home Page</h2>
          <ul>
            {books.map(book => (
              <li key={book.id}>{book.title}</li>
            ))}
          </ul>
        </div>
      );
    };

    export default Home;
  7. Check the Browser Window: Open the browser window to see the list of books fetched from your backend API server and displayed on the screen.

    http://localhost:3000

    Welcome to the Home Page

    • Book 1
    • Book 2
    • Book 3

By setting up a proxy from your backend API server to your frontend React app, you can avoid issues related to cross-origin requests during development. This allows you to focus on building your application without worrying about the underlying infrastructure.

Styling and making Requests from the frontend​

Now that you have set up your backend API server and connected it to your frontend React app, it's time to style your app and make requests to the API endpoints. You can use CSS for styling and fetch API for making requests from the frontend.

Here is a simple guide to styling and making requests from the frontend:

  1. Create a styles.css File: Create a new file called styles.css in the src folder of your React app to add custom styles.

    src/styles.css
    body {
      font-family: 'Arial', sans-serif;
      background-color: #f0f0f0;
      margin: 0;
      padding: 0;
    }

    .container {
      max-width: 800px;
      margin: 0 auto;
      padding: 20px;
    }

    .heading {
      font-size: 24px;
      font-weight: bold;
      margin-bottom: 20px;
    }

    .list {
      list-style-type: none;
      padding: 0;
    }

    .item {
      background-color: #fff;
      border: 1px solid #ccc;
      margin-bottom: 10px;
      padding: 10px;
    }
  2. Update the App.js File: Update your App.js file to include the custom styles.

    src/App.js
    import React, { useEffect, useState } from 'react';
    import './styles.css';

    const App = () => {
      const [books, setBooks] = useState([]);

      useEffect(() => {
        fetch('/api/books')
          .then(response => response.json())
          .then(data => setBooks(data))
          .catch(error => console.error(error));
      }, []);

      return (
        <div className="container">
          <h1 className="heading">Book List</h1>
          <ul className="list">
            {books.map(book => (
              <li key={book.id} className="item">{book.title}</li>
            ))}
          </ul>
        </div>
      );
    };

    export default App;
  3. Check the Browser Window: Open the browser window to see the list of books displayed with the custom styles applied.

    http://localhost:3000

    Book List

    • Book 1
    • Book 2
    • Book 3

By adding custom styles to your React app and making requests to the API endpoints, you can create a visually appealing and interactive user interface. You can experiment with different styles, layouts, and components to enhance the user experience and make your app more engaging.

Once you have styled your app and made requests to the API endpoints, you can continue to add more features, functionality, and components to build a full-fledged web application. You can also explore other libraries, frameworks, and tools to further enhance your app and take it to the next level.

tip

Key Takeaways

  • The MERN stack consists of MongoDB, Express.js, React.js, and Node.js, which work together to build modern web applications.
  • Express.js is a minimal and flexible Node.js web application framework that provides a robust set of features for building web applications and APIs.
  • React.js is a JavaScript library for building user interfaces and single-page applications using a component-based architecture.
  • By setting up a MongoDB Atlas cluster, building an API with Express.js, creating a React app, and connecting the frontend to the backend, you can build a simple MERN application.
  • You can use CSS for styling your app and fetch API for making requests from the frontend to the backend.
  • By following best practices, experimenting with different technologies, and continuously learning and improving your skills, you can become a proficient full-stack developer and build amazing web applications.
  • Remember to have fun, stay curious, and keep exploring new ideas and technologies to expand your knowledge and grow as a developer.
  • If you have any questions, feedback, or suggestions, feel free to reach out to the authors or the community for help and support. Happy coding! πŸš€

Conclusion​

In this blog post, we covered the basics of the MERN stack and walked you through the process of building a simple MERN application. We started by setting up a MongoDB Atlas cluster, building an API with Express.js, creating a React app, and connecting the frontend to the backend. We also covered styling and making requests from the frontend to the backend API server.

Getting Started with React and Vite

· 7 min read
Ajay Dhangar
Founder of CodeHarborHub

Getting Started with React and Vite

Vite is a modern build tool that offers faster development times and optimized builds. It aligns with modern web standards and provides out-of-the-box support for TypeScript, making it an excellent choice for React development. In this blog post, we will learn how to get started with React by creating a new app using Vite. We will follow the steps to set up our development environment and build our first React application.

Quick Start​

To get started with Vite, we need to have Node.js installed on our system. We can install Node.js by downloading the installer from the official website or using a package manager. Once we have Node.js installed, we can use npm to create a new Vite project with the React template.

To quickly get started with Vite and React, follow these steps:

npm create vite@latest my-app -- --template react

This command will create a new Vite project called my-app using the React template. It will set up the project structure and install the necessary dependencies. We can then navigate to the my-app directory and start the development server to see our new React application in action.

Now navigate to the my-app directory:

cd my-app

if you prefer using npm:

npm install

Or, if you prefer using yarn:

yarn

Now start the development server:

npm run dev

Or, if you prefer using yarn:

yarn dev

Once the development server has started, open http://localhost:5173/ in your browser to see your new React application. You can start building your React components, defining routes, and managing state using the React Context API.

Project Structure​

The Vite project structure for a new React application is minimal and easy to understand. It provides a clean and organized layout that aligns with modern web development practices. Here is an overview of the project structure:

my-app
├── node_modules
├── public
│   └── vite.svg
├── src
│   ├── assets
│   ├── App.css
│   ├── App.jsx
│   ├── index.css
│   └── main.jsx
├── .gitignore
├── index.html
├── package.json
├── README.md
└── vite.config.js

Unlike older React tooling, Vite serves index.html from the project root and treats it as the entry point to the application. The public directory contains static assets that are served as-is, while the src directory contains the source code for our React application, including the main.jsx entry file and the App.jsx component.

Building Our First React Application​

Now that we have our development environment set up and our project structure in place, we can start building our first React application. We can create new components, define routes, and manage state using the React Context API. We can also use popular libraries such as React Router and React hooks to enhance our application.
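
As a small sketch of these ideas, the snippet below wires a hypothetical ThemeContext into the generated src/App.jsx using the React Context API (the context name and theme values are made up for illustration):

src/App.jsx
import React, { createContext, useContext, useState } from 'react';

// A made-up context used only to illustrate the Context API
const ThemeContext = createContext('light');

function ThemedGreeting() {
  // Read the current theme from the nearest provider
  const theme = useContext(ThemeContext);
  return <p>Hello from a {theme} themed component!</p>;
}

export default function App() {
  const [theme, setTheme] = useState('light');

  return (
    <ThemeContext.Provider value={theme}>
      <ThemedGreeting />
      <button onClick={() => setTheme(theme === 'light' ? 'dark' : 'light')}>
        Toggle theme
      </button>
    </ThemeContext.Provider>
  );
}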

To learn more about building React applications with Vite, refer to the official React documentation. The documentation provides detailed information on React concepts, best practices, and advanced topics.

Why Vite?​

Vite offers several advantages for React development, including:

  • Faster Development: Vite provides a lightning-fast development server with hot module replacement (HMR) and instant server start. It eliminates the need for a bundler during development, resulting in faster build times and a smoother development experience.
  • Optimized Builds: Vite optimizes the production build by leveraging native ES module support in modern browsers. It generates highly optimized and tree-shaken builds, resulting in smaller bundle sizes and improved performance.
  • Modern Web Standards: Vite aligns with modern web standards and leverages native browser features such as ES modules, dynamic imports, and web workers. It provides an efficient development environment that embraces the latest web technologies.
  • TypeScript Support: Vite offers out-of-the-box support for TypeScript, enabling us to write type-safe code and leverage advanced TypeScript features. It provides seamless integration with React and TypeScript, making it an excellent choice for React development.
  • Plugin Ecosystem: Vite has a rich plugin ecosystem that allows us to extend its functionality and customize the build process. We can use plugins to add features such as CSS preprocessing, asset optimization, and code transformation.
  • Developer Experience: Vite provides an excellent developer experience with features such as instant server start, optimized builds, and real-time feedback. It streamlines the development workflow and enables us to focus on building great React applications.
  • Community Support: Vite has a growing community and active maintainers who contribute to its development and provide support. It has gained popularity in the React ecosystem and is widely adopted by developers.
  • Migration Path: Vite offers a smooth migration path for existing React projects by providing a Vite-compatible React template. It allows us to migrate our projects to Vite without significant changes to the codebase.
  • Future Compatibility: Vite is designed to be future-compatible and aligns with the latest web standards and best practices. It provides a solid foundation for building modern React applications that are ready for the future.
  • Open Source: Vite is an open-source project with a permissive license that allows us to use, modify, and distribute it freely. It is developed in the open and welcomes contributions from the community.
  • Continuous Improvement: Vite is continuously improved and updated with new features, optimizations, and bug fixes. It has a roadmap for future releases and aims to provide a cutting-edge development experience for React developers.
  • Integration with Vercel: Vite has seamless integration with Vercel, a popular platform for deploying web applications. It allows us to deploy our Vite projects to Vercel with minimal configuration and take advantage of Vercel's features such as serverless functions and edge caching.
  • Developer Tools: Vite provides a set of developer tools that enhance the development experience, including a built-in development server, optimized builds, and real-time feedback. It offers a comprehensive toolkit for building and debugging React applications.
  • Performance Optimization: Vite optimizes the development and production builds by leveraging modern web standards and best practices. It provides a performant and efficient build process that results in faster load times and improved user experience.
  • Community Plugins: Vite has a rich ecosystem of community plugins that extend its functionality and provide additional features. We can use plugins to add support for features such as PWA, internationalization, and analytics to our Vite projects.
  • Learning Resources: Vite has a growing collection of learning resources, tutorials, and documentation that help developers get started with Vite and build great React applications. It provides comprehensive guidance on using Vite effectively and efficiently.
  • Developer Community: Vite has a vibrant developer community that actively contributes to its development and provides support to fellow developers. It has a dedicated Discord server, GitHub repository, and community forums where developers can connect and collaborate.
📚 Learn More:

To learn more about Vite, visit the official Vite documentation.

Conclusion​

In this blog post, we learned how to get started with React by creating a new app using Vite. We set up our development environment, walked through the project structure, and built our first React application. We also explored why Vite is an excellent choice for React development, from its fast development server and optimized builds to its plugin ecosystem and community support. We hope this post has given you valuable insights into Vite and inspired you to explore what you can build with Vite and React.

Git Best Practices: Commit Often, Perfect Later, Publish Once

· 5 min read
Ajay Dhangar
Founder of CodeHarborHub

Git is a powerful tool for managing the development of software projects, but it can be challenging to use effectively. In this article, we'll take a look at some best practices for using Git, including how to structure your commits, how to manage branches, and how to collaborate with others. By following these best practices, you can make your development process more efficient and less error-prone.

Commit Often, Perfect Later, Publish Once​

One of the most important best practices for using Git is to commit your changes often. This means that you should make small, focused commits that capture a single logical change to your code. By committing often, you can keep a detailed history of your changes, which makes it easier to understand the evolution of your codebase and to track down bugs.

When you're working on a new feature or fixing a bug, it's important to commit your changes frequently, even if they're not perfect. You can always go back and revise your commits later to clean them up and make them more coherent. By committing often and revising later, you can avoid the temptation to make large, sweeping changes to your code all at once, which can lead to confusion and errors.

Once you're satisfied with your changes, you can publish them to a shared repository, such as GitHub or Bitbucket. By publishing your changes once, you can make it easier for others to review your work and to collaborate with you. This can help to prevent merge conflicts and to ensure that everyone is working from the most up-to-date version of the codebase.
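
A hedged sketch of this workflow in plain git commands (the branch and file names are illustrative):

# Commit often: small, focused commits while you work
git add src/login.js
git commit -m "feat: add login form validation"

# Perfect later: clean up the history before sharing it
git commit --amend         # revise the most recent commit
git rebase -i HEAD~3       # squash or reword the last three commits

# Publish once: push the polished branch to the shared repository
git push origin feature/login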

Structure Your Commits​

When you're committing your changes, it's important to structure your commits in a way that makes it easy to understand the evolution of your codebase. This means that you should avoid making large, monolithic commits that capture multiple unrelated changes. Instead, you should make small, focused commits that capture a single logical change to your code.

One way to structure your commits is to use the "atomic commit" pattern, which involves making a series of small, focused commits that capture a single logical change to your code. For example, if you're working on a new feature, you might make a series of commits that add individual components of the feature, such as the user interface, the business logic, and the data model. By structuring your commits in this way, you can make it easier to understand the evolution of your codebase and to track down bugs.

Another way to structure your commits is to use the "semantic commit" pattern, which involves using a standardized format for your commit messages. For example, you might use a format like "feat: add new feature" or "fix: correct bug in user interface". By using a standardized format for your commit messages, you can make it easier to understand the purpose of each commit and to navigate through the history of your codebase.
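
A few illustrative commit messages in this style (the specifics are invented):

feat: add search bar to the books page
fix: handle empty API response in Home component
docs: update README with setup instructions
refactor: extract fetchBooks helper from Home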

Manage Your Branches​

When you're working on a software project, it's important to manage your branches effectively. This means that you should create a new branch for each new feature or bug fix that you're working on, and that you should merge your branches back into the main codebase once you're finished with them.

By managing your branches effectively, you can make it easier to collaborate with others and to keep your codebase organized. For example, if you're working on a new feature, you might create a new branch for the feature, make your changes on the branch, and then merge the branch back into the main codebase once the feature is complete. By doing this, you can make it easier for others to review your work and to collaborate with you, and you can avoid introducing bugs and conflicts into the main codebase.
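
For example, a typical feature-branch cycle might look like this (the branch name is made up):

git switch -c feature/search    # create and switch to a new branch
# ...make and commit your changes...
git switch main
git pull origin main            # bring main up to date
git merge feature/search        # merge the finished feature
git branch -d feature/search    # delete the merged branch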

Collaborate with Others​

One of the most powerful features of Git is its ability to help you collaborate with others. By using Git, you can make it easier to share your work with others, to review their work, and to resolve conflicts and merge changes together.

When you're collaborating with others, it's important to communicate effectively and to follow best practices for using Git. For example, you should make sure to pull the latest changes from the shared repository before you start working on a new feature or bug fix, and you should make sure to push your changes to the shared repository once you're finished with them. By following these best practices, you can make it easier to collaborate with others and to keep your codebase organized and up-to-date.

Conclusion​

Git is a powerful tool for managing the development of software projects, but it can be challenging to use effectively. By following best practices for using Git, such as committing often, structuring your commits, managing your branches, and collaborating with others, you can make your development process more efficient and less error-prone. By doing this, you can make it easier to understand the evolution of your codebase, to track down bugs, and to collaborate with others. Happy coding!

Sed: Normalize markdown file with Regex

· 3 min read
Ajay Dhangar

I have been using web clipper to save articles and blog posts for a while now. It's a great tool to save content from the web and organize it in a clean and readable format. However, the markdown files generated by web clipper are not always consistent, and I often find myself manually editing them to make them more readable.

One of the common issues I encounter is inconsistent formatting of the front matter in the markdown files. The front matter is a block of metadata at the beginning of a markdown file that contains information such as the title, author, tags, date, and description of the content. Here's an example of what the front matter looks like:

---
title: 'Sed: Normalize markdown file with Regex'
author: Ajay Dhangar
tags: [sed, regex, web clipper]
date: 2020-11-26 21:13:28
description: How to normalize markdown file with Regex
draft: false
---

As you can see, the front matter is enclosed in three dashes (---) at the beginning and end of the block, and each key-value pair is separated by a colon (:). Values that contain special characters, such as the title here, are enclosed in single quotes (') so they are escaped properly.

To make the front matter consistent across all my markdown files, I decided to use the sed command-line utility to write a simple regular expression that would normalize the front matter. Here's the regular expression I came up with:

sed -i -z -E "s/^---\n([^\n]*: [^\n]*\n)+---\n//" file.md

Let's break down the command:

  • -z tells GNU sed to read the whole file as a single record. sed normally processes input line by line and strips each trailing newline, so without -z the \n in the pattern would never match anything.
  • ^---\n matches the opening three dashes at the beginning of the file, followed by a newline character.
  • ([^\n]*: [^\n]*\n)+ matches one or more lines containing a key-value pair, where the key is followed by a colon and a space, and the value runs to the end of its line.
  • ---\n matches the closing three dashes at the end of the block, followed by a newline character.

When I run this command on a markdown file, it removes the existing front matter and leaves me with just the content of the file. This is exactly what I want, as I can then manually add a consistent front matter to the file.

I hope this example gives you an idea of how powerful regular expressions can be when used with command-line utilities like sed. With a little bit of practice, you can write regular expressions to perform complex text manipulations with ease. If you're interested in learning more about regular expressions, I highly recommend checking out the RegexOne interactive tutorial, which is a great resource for beginners.

nvs: One Node Version Per Terminal in Windows

· One min read
Ajay Dhangar
Founder of CodeHarborHub

nvs is a cross-platform Node.js version manager that allows you to install and use multiple versions of Node.js on the same machine. It is similar to nvm for Unix-based systems. nvs is a simple and easy-to-use tool that allows you to switch between different versions of Node.js with a single command.

In this article, we will learn how to install and set up nvs on Windows for PowerShell and Git-Bash.
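
To give a feel for the tool, here are a few common nvs commands as a sketch (run nvs --help for the authoritative list):

nvs add lts      # download the latest LTS version of Node.js
nvs use lts      # use that version in the current terminal only
nvs link lts     # make it the default for new terminals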

CI evolution: From FTP client to GitHub Action

· 3 min read
Ajay Dhangar

In the early days of web development, the most common way to deploy a website was to use an FTP client. This involved manually uploading files to a remote server, which was a time-consuming and error-prone process. As web development practices evolved, so did the tools and techniques for deploying websites. One of the most significant advancements in this area has been the introduction of continuous integration (CI) and continuous deployment (CD) pipelines, which automate the process of building and deploying web applications.

In this article, we'll take a look at the evolution of remote file management, from the use of FTP clients to the adoption of GitHub Actions for automated deployment.

The FTP client era​

Deploying a website once meant opening an FTP client and manually copying files to a remote server. Developers would typically make changes to their local files, then use an FTP client to upload those changes to the server. This process was often slow and cumbersome, and it was easy to make mistakes that could result in broken websites.

The rise of CI/CD pipelines​

As web development practices matured, so did the tools and techniques for deploying websites. The most significant advancement in this area has been the introduction of continuous integration (CI) and continuous deployment (CD) pipelines. CI/CD pipelines automate the process of building, testing, and deploying web applications, making deployment faster, more reliable, and less error-prone than manual methods.

GitHub Actions for automated deployment​

One of the most popular CI/CD solutions for web development is GitHub Actions. GitHub Actions is a powerful, flexible, and easy-to-use tool for automating the build, test, and deployment processes of web applications. With GitHub Actions, you can define custom workflows that automatically build and deploy your web applications whenever you push changes to your repository. This makes it easy to ensure that your websites are always up-to-date and error-free, without the need for manual intervention.
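
As a minimal, hedged sketch of such a workflow (the file name, Node version, and deploy step are placeholders to adapt to your project):

.github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      # Replace this placeholder with your project's real deployment step
      - run: echo "Deploy the build output to your hosting provider"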

Conclusion​

The evolution of remote file management has come a long way since the days of using FTP clients to manually upload files to remote servers. With the introduction of CI/CD pipelines and tools like GitHub Actions, web developers now have powerful, automated solutions for building and deploying web applications. These tools make it faster, easier, and more reliable to deploy websites, and they help ensure that your websites are always up-to-date and error-free. If you're still using an FTP client to deploy your websites, it's time to consider upgrading to a more modern, automated solution like GitHub Actions.