
DOM manipulation in JavaScript

· 9 min read

1. Understanding DOM

  • The Document Object Model (DOM) is a programming interface for web documents.

  • This model allows developers to interact with the document programmatically via scripting languages like JavaScript.

  • When a web page is loaded, the browser parses the HTML and creates the DOM.

  • The DOM represents the document as a tree of nodes, where each node is an object representing a part of the document:

Document Node: Represents the entire document.

Element Nodes: Represent HTML elements like <div>, <p>, <a>, etc.

Text Nodes: Contain the text content within elements.

Attribute Nodes: Represent the attributes of HTML elements (class, id, src etc.).

For example, consider the following HTML:

<!DOCTYPE html>
<html>
<head>
<title>Example with Attributes</title>
</head>
<body>
<h1 class="header" id="main-title" data-info="example">Hello, World!</h1>
<p>This is a paragraph.</p>
</body>
</html>

For this document, the DOM tree would look like this:

Document
└── html
    ├── head
    │   └── title
    │       └── "Example with Attributes"
    └── body
        ├── h1
        │   ├── @class="header"
        │   ├── @id="main-title"
        │   ├── @data-info="example"
        │   └── "Hello, World!"
        └── p
            └── "This is a paragraph."

The DOM plays a central role in web development by enabling developers to create dynamic and interactive web pages.

  • Access and manipulate elements: Developers can use JavaScript to select, modify, and create HTML elements.

  • Handle events: The DOM allows developers to listen for and respond to user events, such as clicks, keypresses, and form submissions.

  • Modify styles: Through the DOM, developers can change the CSS styles of elements dynamically.

2. DOM Manipulation

2.1. Accessing Elements

  • To get an element by its ID in JavaScript, you can use the getElementById method.
<div id="myElement">Hello, World!</div>
// Get the element with the ID 'myElement'
const element = document.getElementById('myElement');

// Log the element to the console
console.log(element);
  • To get elements by their class, we can use the getElementsByClassName method. This method returns a live HTMLCollection of elements with the specified class name.
<div class="myClass">First Element</div>
<div class="myClass">Second Element</div>
<div class="myClass">Third Element</div>
// Get the elements with the class name 'myClass'
const elements = document.getElementsByClassName('myClass');

// Log the elements to the console
console.log(elements);

// Optionally, you can iterate over the elements as well
for (let i = 0; i < elements.length; i++) {
console.log(elements[i])
}
  • To get elements by tag name in the Document Object Model (DOM), we can use the getElementsByTagName method. This method allows you to retrieve a collection of elements that match a specified tag name.
<h1>Hello, World!</h1>
<p>This is a paragraph.</p>
<div>
<p>Another paragraph inside a div.</p>
<p>Second paragraph inside a div.</p>
</div>
// Get all <p> elements in the document
const paragraphs = document.getElementsByTagName("p");

// Loop through and log the text content of each <p> element
for (let i = 0; i < paragraphs.length; i++) {
console.log(paragraphs[i].textContent);
}
  • The querySelector method in JavaScript allows you to select and retrieve the first element that matches a specified CSS selector within the document or within a specific element.
// Select the first <p> element in the document
const firstParagraph = document.querySelector("p");

// Select the element with id="main-title"
const titleElement = document.querySelector("#main-title");

// Select the first element with class="intro"
const introParagraph = document.querySelector(".intro");

// Select the first <p> element inside the <div>
const paragraphInDiv = document.querySelector("div p");
  • The querySelectorAll method in JavaScript allows you to select and retrieve a list (or NodeList) of all elements that match a specified CSS selector within the document or within a specific element. Unlike querySelector, which returns only the first matching element, querySelectorAll returns a NodeList containing all matching elements.

// Select all <p> elements in the document
const paragraphs = document.querySelectorAll("p");

// Log the number of <p> elements found
console.log("Number of <p> elements:", paragraphs.length);

// Select all elements with class="intro"
const introElements = document.querySelectorAll(".intro");

// Select all <li> elements inside the <ul>
const listItems = document.querySelectorAll("ul li");
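
Unlike the live HTMLCollection returned by getElementsByClassName, the NodeList returned by querySelectorAll is static (it does not update automatically when the DOM changes) and can be iterated with forEach. For example, reusing the paragraphs variable from above:

// NodeList supports forEach directly, so no index-based loop is needed
paragraphs.forEach((paragraph) => {
  console.log(paragraph.textContent);
});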

2.2. Modifying Content

  • innerHTML allows you to get or set the HTML markup inside an element.
// HTML element
const divElement = document.getElementById("myDiv");

// Get inner HTML content of divElement
const htmlContent = divElement.innerHTML;
console.log("Inner HTML:", htmlContent);

// Set inner HTML content of divElement
divElement.innerHTML = "<p>New content with <strong>bold</strong> text.</p>";
  • textContent allows you to get or set the text content inside an element.
// HTML element
const paragraphElement = document.getElementById("myParagraph");

// Get text content
const textContent = paragraphElement.textContent;
console.log("Text content:", textContent);

// Set text content
paragraphElement.textContent = "Updated text content.";
  • innerText allows you to get or set the visible text content inside an element.
// HTML element
const spanElement = document.getElementById("mySpan");

// Get inner text
// Retrieves the visible text content inside an element, excluding hidden elements or elements with CSS display: none.
const innerText = spanElement.innerText;
console.log("Inner text:", innerText);

// Set inner text
spanElement.innerText = "Updated inner text.";

2.3. Modifying Attributes

  • Use getAttribute() to get the value of an attribute.
  • Use setAttribute() to set a new value for an attribute.
  • Use removeAttribute() to remove an attribute.
<div id="myElement" class="myClass">First Element</div>
// Get the element with the ID 'myElement'
const element = document.getElementById('myElement');

// Get the value of an attribute
const classValue = element.getAttribute('class');
console.log('Class:', classValue); // Output: Class: myClass

// Set a new value for an attribute
element.setAttribute('class', 'newClass');
console.log('Updated Class:', element.getAttribute('class')); // Output: Updated Class: newClass

// Remove an attribute
element.removeAttribute('class');
console.log(element.hasAttribute('class')); // Output: false

2.4. Creating and Inserting Elements

  • createElement() method creates a new HTML element.
  • appendChild() method appends a node as the last child of a parent node.
  • insertBefore() method inserts a node before an existing child node within a specified parent node.
  • append() method appends one or more nodes to the end of a parent node.
  • prepend() method inserts one or more nodes at the beginning of a parent node.
<body>
<div id="container">
<ul class="todo-list"></ul>
</div>
</body>
const container = document.getElementById('container');
const todolist = document.querySelector('.todo-list');

// Create a new element
const newToDo = document.createElement('li');
newToDo.setAttribute("class", "todo-item")
newToDo.textContent = 'Buy fruits.';

// Append the new element as the last child
todolist.appendChild(newToDo);

// Create another new element
const title = document.createElement('h2');
title.textContent = 'My tasks';

// Insert the title before the list
container.insertBefore(title, todolist);

// Create yet another new element
const lastElement = document.createElement('div');
lastElement.textContent = 'Last Element';

// Append yet another element as the last child
container.append(lastElement);

// Create and prepend a new element
const firstElement = document.createElement('div');
firstElement.textContent = 'First Element';

// Prepend the new element as the first child
container.prepend(firstElement);

2.5. Removing Elements

  • removeChild() method removes a specified child node from the parent node. The removed child node is returned.

  • remove() method removes the element from the DOM.

<div id="container">
<div id="childElement">Child Element</div>
<div id="anotherChildElement">Another Child Element</div>
</div>
// Get the container element
const container = document.getElementById('container');

// Get the child element to be removed
const childElement = document.getElementById('childElement');

// Remove the child element using removeChild
container.removeChild(childElement);

// Get another child element to be removed
const anotherChildElement = document.getElementById('anotherChildElement');

// Remove the element using remove()
anotherChildElement.remove();

2.6. Modifying Styles

  • The style property allows you to set or get inline styles for an element. This directly modifies the style attribute of the element in the DOM.
<div id="myElement">Hello world!</div>
// Get the element
const element = document.getElementById('myElement');

// Change the background color and font size using the style property
element.style.backgroundColor = 'blue';
element.style.fontSize = '20px';
  • The classList property provides methods to add, remove, and toggle CSS classes on an element. This is a more flexible way to manage an element's classes compared to directly setting the class attribute.
// Get the element
const element = document.getElementById('myElement');

// Add a new class to the element
element.classList.add('newClass');

// Remove an existing class from the element
element.classList.remove('initialClass');

// Toggle a class on the element (add it if it doesn't exist, remove it if it does)
element.classList.toggle('toggledClass');

2.7. Event Handling

  • The addEventListener() method attaches an event handler to an element. It allows multiple event listeners to be added to a single element for the same event type.
<button id="myButton">Click Me</button>
// Define the event handler function
function handleClick() {
alert('Button was clicked!');
}

// Get the button element
const button = document.getElementById('myButton');

// Add a click event listener
button.addEventListener('click', handleClick);
  • The removeEventListener() method removes an event handler that was added with addEventListener().
button.removeEventListener('click', handleClick);
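
Note that removeEventListener only removes a handler when it receives the exact same function reference that was passed to addEventListener, which is why handleClick above is defined as a named function. A short sketch, reusing the button element from the example:

// Works: the same named function reference is passed to both calls
button.addEventListener('click', handleClick);
button.removeEventListener('click', handleClick);

// Has no effect: each anonymous function is a new, different reference
button.addEventListener('click', () => console.log('clicked'));
button.removeEventListener('click', () => console.log('clicked'));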

Conclusion

Mastering DOM manipulation is crucial for creating dynamic, interactive web pages. The ability to access, modify, and interact with the DOM using JavaScript allows developers to build responsive and engaging user experiences.

  • Understanding the DOM: By understanding the structure and representation of a web document through the DOM, developers can effectively interact with and manipulate web pages.

  • Accessing Elements: Methods like getElementById(), getElementsByClassName(), getElementsByTagName(), querySelector(), and querySelectorAll() enable precise selection of elements within the DOM, facilitating targeted manipulations.

  • Modifying Content and Attributes: Techniques such as using innerHTML, textContent, and innerText for content modification, alongside getAttribute(), setAttribute(), and removeAttribute() for attribute management, provide powerful ways to dynamically change the document's content and properties.

  • Creating and Inserting Elements: Methods like createElement(), appendChild(), insertBefore(), append(), and prepend() allow developers to construct and integrate new elements into the DOM, enabling the dynamic construction of web pages.

  • Removing Elements: Using removeChild() and remove() methods facilitates the removal of elements from the DOM, which is essential for maintaining clean and efficient document structures.

  • Modifying Styles: Direct manipulation of inline styles via the style property and managing classes with classList methods (add(), remove(), toggle()) offer flexible control over the appearance and styling of elements.

  • Event Handling: The ability to attach and remove event listeners using addEventListener() and removeEventListener() empowers developers to create interactive elements that respond to user actions, enhancing the user experience.

By leveraging these DOM manipulation techniques, developers can create rich, interactive web applications that provide a seamless and dynamic user experience. Understanding and utilizing these tools effectively is key to modern web development.

Building a RESTful CRUD API with Spring Boot: A step by step guide

· 14 min read

What is a RESTful API?

  • In the realm of modern web development, RESTful APIs have become a cornerstone for building scalable, efficient, and maintainable web applications.

  • In the world of computers and the internet, applications communicate with each other using a set of rules.

  • RESTful APIs (Application Programming Interfaces) act as intermediaries between different software applications (client and server), allowing them to communicate and share data with each other over the internet.

  • Representational State Transfer (REST): This is a style of building software systems that use standard HTTP methods (like GET, POST, PUT, DELETE) to perform operations on resources (like data stored in a database). It emphasizes simplicity, scalability, and flexibility.

  • API (Application Programming Interface): Think of an API as a set of rules and protocols that allow different software applications to talk to each other. It defines how different parts of software systems can interact and exchange data.

  • So, when we say a “RESTful API”, we’re talking about a set of rules and conventions that govern how applications communicate with each other over the internet using standard HTTP methods.

Why Spring Boot?

  • Among the myriad of frameworks available for building RESTful APIs, Spring Boot stands out as a robust and developer-friendly option for Java developers.

  • Spring Boot makes it simple for developers to create web applications without getting bogged down in complex configuration.

  • With Spring Boot, you can quickly build and deploy applications, which is great for trying out ideas or making changes fast.

  • It comes with many useful tools and features ready to use, like handling data, security, and more, saving you time and effort.

  • Spring Boot can easily connect with other tools and libraries, making it flexible for different needs.

Motive of this article

  • In this comprehensive guide, we’ll delve into the process of creating a RESTful CRUD (Create, Read, Update, Delete) API for managing user data using Spring Boot and MySQL. We’ll cover everything from project setup to testing, demonstrating best practices and essential techniques along the way. By the end of this tutorial, you’ll have a solid understanding of how to architect and develop RESTful APIs using Spring Boot.

  • Without further ado, let’s embark on this journey of building a RESTful user CRUD API with Spring Boot.

To build a Spring Boot project, you’ll need a few prerequisites:

  • Java Development Kit (JDK): Spring Boot applications are typically written in Java, so you’ll need to have the JDK installed on your system. Spring Boot supports Java 8 and newer versions, so make sure you have a compatible JDK installed.

  • Integrated Development Environment (IDE): While you can build Spring Boot applications using a simple text editor and command-line tools, using an IDE can greatly enhance your productivity. Popular choices include IntelliJ IDEA, Eclipse, and Spring Tool Suite (STS).

  • Build Tool: Spring Boot projects are typically built using either Maven or Gradle. Maven is more commonly used, but Gradle offers some advantages in terms of flexibility and performance. Choose whichever build tool you’re more comfortable with.

  • Understanding of Java: While you don’t need to be an expert, it’s beneficial to have a basic understanding of Java programming.

  • Database Knowledge (Optional): Having some knowledge of database concepts and SQL can be beneficial. Spring Boot supports various databases, including MySQL, PostgreSQL, MongoDB, and more.

Step 1: Setting up the project.

  • Visit Spring Initializr, fill in all the details accordingly, and finally click the GENERATE button. Extract the zip file and import it into your IDE.


1.1. Add the following dependencies to the pom.xml file.

<dependencies>
<!-- we'll use this dependency to create RESTful API endpoints,
     handle HTTP requests (GET, POST, PUT, DELETE), and return JSON responses. -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>

<!-- we'll use this dependency to interact with a database,
     define JPA entities (data models), perform CRUD operations,
     and execute custom database queries. -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

<!-- we'll use this dependency to establish a connection to
     our MySQL database, execute SQL queries, and manage database transactions. -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>8.0.33</version>
<scope>runtime</scope>
</dependency>

<!-- we'll use Lombok annotations (such as @Data, @Getter, @Setter)
     in our Java classes to automatically generate common methods,
     making the code cleaner and more concise. -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>

<!-- we'll use this dependency to annotate our Java model classes
     with validation constraints (e.g., @NotBlank, @NotNull, @Size)
     and automatically validate request data in our RESTful API endpoints. -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>

</dependencies>

1.2. Update application.properties file

spring.jpa.hibernate.ddl-auto=update
spring.datasource.url=jdbc:mysql://localhost:3306/usercrud
spring.datasource.username=your localhost username
spring.datasource.password=your localhost password
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
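
One thing to note: spring.jpa.hibernate.ddl-auto=update lets Hibernate create and update the tables, but it does not create the database itself, so the usercrud schema referenced in the JDBC URL must already exist on your local MySQL server. A minimal way to create it:

CREATE DATABASE usercrud;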

Step 2: Create project structure.

  • Create the below folder structure inside the src folder. We’ll walk through each file one by one.


Step 3: Create User Model

  • Models define the structure and attributes of the data entities that the application manages.

  • For example, a User model might include attributes like id, username, email, and password.

  • Models often include annotations or custom logic to validate the data before it is persisted to the database. For example, you might use annotations like @NotBlank, @Email, or @Size to enforce constraints on the data.

  • Models are typically mapped to database tables using Object-Relational Mapping (ORM) frameworks like Hibernate in Spring Boot applications. They define the structure of the database tables and establish relationships between entities.

// User.java

@Data
@AllArgsConstructor
@NoArgsConstructor
@Entity
@Table(uniqueConstraints = {
@UniqueConstraint(columnNames = "username"),
@UniqueConstraint(columnNames = "email")
})
public class User {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Integer id;

@NotBlank
@Size(min=3, max = 20)
private String username;

@NotBlank
@Email
private String email;

@NotBlank
@Size(min=10, max = 10)
private String phone;

private LocalDateTime regDateTime;

}

Step 4: Create DTO classes

  • DTOs (Data Transfer Objects) play a crucial role in Spring Boot CRUD applications by providing a flexible and efficient mechanism for transferring data between layers (Client and Server), optimizing performance, encapsulating business logic, ensuring compatibility, and enhancing security and privacy.
// ApiResponseDto.java

@Data
@AllArgsConstructor
public class ApiResponseDto<T> {
private String status;
private T response;

}

// ApiResponseStatus.java

public enum ApiResponseStatus {
SUCCESS,
FAIL
}

// UserDetailsRequestDto.java

@Data
@AllArgsConstructor
@NoArgsConstructor
public class UserDetailsRequestDto {

@NotBlank(message = "Username is required!")
@Size(min= 3, message = "Username must have at least 3 characters!")
@Size(max= 20, message = "Username can have at most 20 characters!")
private String userName;

@Email(message = "Email is not in valid format!")
@NotBlank(message = "Email is required!")
private String email;

@NotBlank(message = "Phone number is required!")
@Size(min = 10, max = 10, message = "Phone number must have 10 characters!")
@Pattern(regexp="^[0-9]*$", message = "Phone number must contain only digits")
private String phone;

}

Step 5: Create Exception classes

  • Custom exceptions help to improve the clarity and maintainability of the code by providing specific error handling for common scenarios encountered in a CRUD application.
  • They allow developers to handle exceptional cases gracefully and communicate errors effectively.
// UserNotFoundException.java

// This exception is thrown when attempting to retrieve a user from the database, but the user does not exist.
public class UserNotFoundException extends Exception{
public UserNotFoundException(String message) {
super(message);
}
}


// UserAlreadyExistsException.java

// This exception is thrown when attempting to create a new user, but a user with the same identifier (e.g., username, email) already exists in the database.
public class UserAlreadyExistsException extends Exception{
public UserAlreadyExistsException(String message) {
super(message);
}
}


// UserServiceLogicException.java

// This exception serves as a generic exception for any unexpected errors or business logic violations that occur within the user service layer.
public class UserServiceLogicException extends Exception{
public UserServiceLogicException() {
super("Something went wrong. Please try again later!");
}
}

Step 6: Create User Repository Interface

  • Repository interfaces abstract the details of data access. Instead of directly interacting with data storage mechanisms (such as databases), you define repository interfaces to declare methods for common CRUD (Create, Read, Update, Delete) operations.

  • JpaRepository is a part of Spring Data JPA and provides CRUD (Create, Read, Update, Delete) operations for the User entity.

  • The first generic parameter User specifies the entity class that this repository manages, implying that User is an entity class.

  • The second generic parameter Integer specifies the type of the primary key of the User entity.

// UserRepository.java

@Repository
public interface UserRepository extends JpaRepository<User, Integer> {

// Developers can define methods in repository interfaces with custom query keywords,
// and Spring Data JPA automatically translates them into appropriate SQL queries.
User findByEmail(String email);

User findByUsername(String userName);

List<User> findAllByOrderByRegDateTimeDesc();

}
  • By extending JpaRepository, UserRepository inherits methods for performing various database operations such as saving, deleting, finding, etc., without needing to write these methods explicitly. These methods are provided by Spring Data JPA based on the naming convention of the methods in the repository interface.

Step 7: Create User Service class

  • Service classes in Spring Boot CRUD applications serve as the backbone for implementing business logic, managing transactions, abstracting data access, centralizing business rules, promoting reusability, and handling errors effectively.

  • By placing business logic within service classes, you centralize the rules governing your application’s behavior. This makes it easier to maintain and modify the behavior of your application without having to hunt down logic scattered across different parts of the codebase.

@Service
public interface UserService {

ResponseEntity<ApiResponseDto<?>> registerUser(UserDetailsRequestDto newUserDetails)
throws UserAlreadyExistsException, UserServiceLogicException;

ResponseEntity<ApiResponseDto<?>> getAllUsers()
throws UserServiceLogicException;

ResponseEntity<ApiResponseDto<?>> updateUser(UserDetailsRequestDto newUserDetails, int id)
throws UserNotFoundException, UserServiceLogicException;

ResponseEntity<ApiResponseDto<?>> deleteUser(int id)
throws UserServiceLogicException, UserNotFoundException;

}
@Component
@Slf4j
public class UserServiceImpl implements UserService{

@Autowired
private UserRepository userRepository;

@Override
public ResponseEntity<ApiResponseDto<?>> registerUser(UserDetailsRequestDto newUserDetails)
throws UserAlreadyExistsException, UserServiceLogicException {

// logic to register user
}

@Override
public ResponseEntity<ApiResponseDto<?>> getAllUsers() throws UserServiceLogicException {
// logic to get all users
}

@Override
public ResponseEntity<ApiResponseDto<?>> updateUser(UserDetailsRequestDto newUserDetails, int id)
throws UserNotFoundException, UserServiceLogicException {
// logic to update user
}

@Override
public ResponseEntity<ApiResponseDto<?>> deleteUser(int id) throws UserServiceLogicException, UserNotFoundException {
// logic to delete user
}
}
  • Now let’s see how we can implement each of the methods in UserServiceImpl separately.
@Override
public ResponseEntity<ApiResponseDto<?>> registerUser(UserDetailsRequestDto newUserDetails)
throws UserAlreadyExistsException, UserServiceLogicException {

try {
if (userRepository.findByEmail(newUserDetails.getEmail()) != null){
throw new UserAlreadyExistsException("Registration failed: User already exists with email " + newUserDetails.getEmail());
}
if (userRepository.findByUsername(newUserDetails.getUserName()) != null){
throw new UserAlreadyExistsException("Registration failed: User already exists with username " + newUserDetails.getUserName());
}

// Lombok generates only the no-args and all-args constructors, so populate the fields via setters
User newUser = new User();
newUser.setUsername(newUserDetails.getUserName());
newUser.setEmail(newUserDetails.getEmail());
newUser.setPhone(newUserDetails.getPhone());
newUser.setRegDateTime(LocalDateTime.now());

// save() is a built-in method provided by JpaRepository
userRepository.save(newUser);

return ResponseEntity
.status(HttpStatus.CREATED)
.body(new ApiResponseDto<>(ApiResponseStatus.SUCCESS.name(), "New user account has been successfully created!"));

}catch (UserAlreadyExistsException e) {
throw new UserAlreadyExistsException(e.getMessage());
}catch (Exception e) {
log.error("Failed to create new user account: " + e.getMessage());
throw new UserServiceLogicException();
}
}
@Override
public ResponseEntity<ApiResponseDto<?>> getAllUsers() throws UserServiceLogicException {
try {
List<User> users = userRepository.findAllByOrderByRegDateTimeDesc();

return ResponseEntity
.status(HttpStatus.OK)
.body(new ApiResponseDto<>(ApiResponseStatus.SUCCESS.name(), users)
);

}catch (Exception e) {
log.error("Failed to fetch all users: " + e.getMessage());
throw new UserServiceLogicException();
}
}
@Override
public ResponseEntity<ApiResponseDto<?>> updateUser(UserDetailsRequestDto newUserDetails, int id)
throws UserNotFoundException, UserServiceLogicException {
try {
User user = userRepository.findById(id).orElseThrow(() -> new UserNotFoundException("User not found with id " + id));

user.setEmail(newUserDetails.getEmail());
user.setUsername(newUserDetails.getUserName());
user.setPhone(newUserDetails.getPhone());

userRepository.save(user);

return ResponseEntity
.status(HttpStatus.OK)
.body(new ApiResponseDto<>(ApiResponseStatus.SUCCESS.name(), "User account updated successfully!")
);

}catch(UserNotFoundException e){
throw new UserNotFoundException(e.getMessage());
}catch(Exception e) {
log.error("Failed to update user account: " + e.getMessage());
throw new UserServiceLogicException();
}
}
@Override
public ResponseEntity<ApiResponseDto<?>> deleteUser(int id) throws UserServiceLogicException, UserNotFoundException {
try {
User user = userRepository.findById(id).orElseThrow(() -> new UserNotFoundException("User not found with id " + id));

userRepository.delete(user);

return ResponseEntity
.status(HttpStatus.OK)
.body(new ApiResponseDto<>(ApiResponseStatus.SUCCESS.name(), "User account deleted successfully!")
);
} catch (UserNotFoundException e) {
throw new UserNotFoundException(e.getMessage());
} catch (Exception e) {
log.error("Failed to delete user account: " + e.getMessage());
throw new UserServiceLogicException();
}
}
note
  • The @Service annotation is used to indicate that a class is a service component in the Spring application context.

  • The @Component annotation is a generic stereotype annotation used to indicate that a class is a Spring component. Components annotated with @Component are candidates for auto-detection when using Spring's component scanning feature.

  • The @Autowired annotation is used to automatically inject dependencies into Spring-managed beans. When Spring encounters a bean annotated with @Autowired, it looks for other beans in the application context that match the type of the dependency and injects it.

  • The @Slf4j annotation is not a standard Spring annotation but rather a Lombok annotation used for logging.

Step 8: Create controller

  • A controller class in a Spring Boot application is responsible for handling incoming HTTP requests and returning appropriate HTTP responses.

  • It serves as an entry point for processing client requests and often delegates the actual business logic to service classes.

  • A controller class is typically annotated with @RestController or @Controller. Inside the controller class, you define methods that handle specific HTTP requests. These methods are annotated with @RequestMapping, @GetMapping, @PostMapping, @PutMapping, @DeleteMapping, or other similar annotations to specify the HTTP method and the URL path that the method should respond to.

  • Each method in the controller class represents a particular endpoint of the REST API.

  • Controller classes often rely on service classes to perform business logic. Dependencies on these service classes are typically injected using the @Autowired annotation or constructor injection.

  • Controller methods return the response to the client. This can be done by returning a ResponseEntity object to have more control over the response status code, headers, and body.

@RestController
@RequestMapping("/users")
public class UserController {

@Autowired
public UserService userService;

@PostMapping("/new")
public ResponseEntity<ApiResponseDto<?>> registerUser(@Valid @RequestBody UserDetailsRequestDto userDetailsRequestDto) throws UserAlreadyExistsException, UserServiceLogicException {
return userService.registerUser(userDetailsRequestDto);
}

@GetMapping("/get/all")
public ResponseEntity<ApiResponseDto<?>> getAllUsers() throws UserServiceLogicException {
return userService.getAllUsers();
}

@PutMapping("/update/{id}")
public ResponseEntity<ApiResponseDto<?>> updateUser(@Valid @RequestBody UserDetailsRequestDto userDetailsRequestDto, @PathVariable int id)
throws UserNotFoundException, UserServiceLogicException {
return userService.updateUser(userDetailsRequestDto, id);
}

@DeleteMapping("/delete/{id}")
public ResponseEntity<ApiResponseDto<?>> deleteUser(@PathVariable int id)
throws UserNotFoundException, UserServiceLogicException {
return userService.deleteUser(id);
}

}
note
  • The @PathVariable annotation is used to extract values from the URI template of the incoming request. E.g., updateUser method.

  • The @RequestParam annotation is used to extract query parameters from the URL of the incoming request, as shown in the sketch after this note.

  • The @RequestBody annotation is used to extract the request body of the incoming HTTP request. It binds the body of the request to a method parameter in a controller method, typically for POST, PUT, and PATCH requests. E.g., registerUser method.
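
The controller above does not use @RequestParam, so here is a small, hypothetical sketch of what a query-parameter endpoint could look like (the /search path, the searchByName method, and the response message are illustrative and not part of this project):

// Hypothetical endpoint: @RequestParam binds the ?name=... query parameter to the method argument
@GetMapping("/search")
public ResponseEntity<ApiResponseDto<?>> searchByName(@RequestParam String name) {
    return ResponseEntity.ok(new ApiResponseDto<>(ApiResponseStatus.SUCCESS.name(), "Searching users by name: " + name));
}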

Step 9: Create Exception Handler class

  • Exception handlers in Spring Boot applications are used to handle exceptions thrown during the processing of HTTP requests.

  • They allow you to centralize error handling logic and provide custom responses to clients when errors occur.

  • @RestControllerAdvice annotation is used to indicate that the class contains advice that applies to all controllers. This advice will be applied globally to handle exceptions thrown from any controller in the application.

  • To create an exception handler, you annotate a method within a controller class with @ExceptionHandler and specify the type(s) of exceptions it can handle.

// UserServiceExceptionHandler.java

@RestControllerAdvice
public class UserServiceExceptionHandler {

@ExceptionHandler(value = UserNotFoundException.class)
public ResponseEntity<ApiResponseDto<?>> UserNotFoundExceptionHandler(UserNotFoundException exception) {
return ResponseEntity.status(HttpStatus.NOT_FOUND).body(new ApiResponseDto<>(ApiResponseStatus.FAIL.name(), exception.getMessage()));
}

@ExceptionHandler(value = UserAlreadyExistsException.class)
public ResponseEntity<ApiResponseDto<?>> UserAlreadyExistsExceptionHandler(UserAlreadyExistsException exception) {
return ResponseEntity.status(HttpStatus.CONFLICT).body(new ApiResponseDto<>(ApiResponseStatus.FAIL.name(), exception.getMessage()));
}

@ExceptionHandler(value = UserServiceLogicException.class)
public ResponseEntity<ApiResponseDto<?>> UserServiceLogicExceptionHandler(UserServiceLogicException exception) {
return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(new ApiResponseDto<>(ApiResponseStatus.FAIL.name(), exception.getMessage()));
}

@ExceptionHandler(value = MethodArgumentNotValidException.class)
public ResponseEntity<ApiResponseDto<?>> MethodArgumentNotValidExceptionHandler(MethodArgumentNotValidException exception) {

List<String> errorMessage = new ArrayList<>();

exception.getBindingResult().getFieldErrors().forEach(error -> {
errorMessage.add(error.getDefaultMessage());
});
return ResponseEntity.badRequest().body(new ApiResponseDto<>(ApiResponseStatus.FAIL.name(), errorMessage.toString()));
}

}

Step 10: Run your application and test it with Postman or a frontend 😊.
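
If you prefer the command line to Postman, requests like the following exercise each endpoint (this assumes the application runs on the default port 8080; the JSON values and the id 1 are just example data):

# Register a new user
curl -X POST http://localhost:8080/users/new \
  -H "Content-Type: application/json" \
  -d '{"userName": "john", "email": "john@example.com", "phone": "9876543210"}'

# Fetch all users
curl http://localhost:8080/users/get/all

# Update the user with id 1
curl -X PUT http://localhost:8080/users/update/1 \
  -H "Content-Type: application/json" \
  -d '{"userName": "john_doe", "email": "john.doe@example.com", "phone": "9876543210"}'

# Delete the user with id 1
curl -X DELETE http://localhost:8080/users/delete/1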

Register user failed: User details invalid!

Register user successful

Retrieve all users

Update the details of John

Delete user john

Hey guys, that’s it. We have successfully developed a RESTful CRUD API for a user management system.

Cryptography and Its Use in Cyber Security

· 4 min read
Pujan Sarkar
Cyber Security Enthusiast

Introduction

In the realm of cyber security, cryptography stands as a critical tool for protecting information. As digital data exchange grows exponentially, the importance of cryptography in ensuring data security and privacy cannot be overstated. This blog explores the fundamental concepts of cryptography, its historical significance, and its contemporary applications in cyber security.

Understanding Cryptography

Cryptography is the science of encoding and decoding information to protect it from unauthorized access. It involves various techniques and algorithms that transform readable data, known as plaintext, into an unreadable format, known as ciphertext. Only those who possess the appropriate decryption key can convert the ciphertext back into plaintext.

Key Concepts in Cryptography

  1. Encryption and Decryption: The process of converting plaintext into ciphertext is called encryption, while the process of converting ciphertext back into plaintext is called decryption.
  2. Symmetric Key Cryptography: The same key is used for both encryption and decryption. Examples include AES and DES.
  3. Asymmetric Key Cryptography: Uses a pair of keys - a public key for encryption and a private key for decryption. Examples include RSA and ECC.
  4. Hash Functions: Take an input and produce a fixed-size string of characters, which is typically a hash value. Hash functions are used for data integrity and password storage.
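
As a small illustration of hash functions, the following Node.js snippet (using the built-in crypto module) shows how even a one-character change in the input produces a completely different fixed-size digest:

const crypto = require('crypto');

// SHA-256 always produces a 256-bit digest (64 hex characters)
const digest1 = crypto.createHash('sha256').update('hello world').digest('hex');
const digest2 = crypto.createHash('sha256').update('hello world!').digest('hex');

console.log(digest1);
console.log(digest2); // completely different from digest1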

Historical Significance of Cryptography

Cryptography has been used for centuries to secure communication. Some historical milestones include:

  • Caesar Cipher: Used by Julius Caesar to protect military messages, this substitution cipher shifts letters by a fixed number of positions in the alphabet.
  • Enigma Machine: Used by the Germans during World War II, this electromechanical device encrypted messages. The successful decryption of Enigma-encrypted messages by the Allies significantly impacted the war's outcome.
  • Diffie-Hellman Key Exchange: Introduced in 1976, this method allowed secure key exchange over a public channel, laying the groundwork for modern public-key cryptography.

Cryptography in Modern Cyber Security

In today's digital world, cryptography is essential for securing data and maintaining privacy. Its applications are vast and varied:

Secure Communication

Cryptography ensures that communication between parties remains confidential and secure. Protocols like SSL/TLS use cryptographic techniques to protect data transmitted over the internet, such as during online banking and shopping.

Data Integrity

Hash functions play a crucial role in ensuring data integrity. When data is transmitted or stored, hash functions can verify that the data has not been altered. This is particularly important for software distribution and digital signatures.

Authentication

Cryptographic methods are used to verify the identities of users and devices. Passwords are typically stored as hash values, and public-key infrastructure (PKI) systems use digital certificates to authenticate entities.

Blockchain Technology

Cryptography is the backbone of blockchain technology. Cryptographic hashing ensures the integrity of data blocks, while asymmetric cryptography secures transactions and verifies identities. This technology underpins cryptocurrencies like Bitcoin and has applications in various fields, including supply chain management and healthcare.

Secure Storage

Encrypting data at rest ensures that even if unauthorized individuals gain access to storage media, they cannot read the data without the decryption key. This is crucial for protecting sensitive information on devices and in cloud storage.

Challenges in Cryptography

While cryptography is a powerful tool, it is not without challenges:

  • Key Management: Securely generating, storing, and distributing cryptographic keys is complex and critical for maintaining security.
  • Performance Overheads: Cryptographic operations can be computationally intensive, affecting system performance, especially in resource-constrained environments.
  • Quantum Computing: Emerging quantum computers have the potential to break many of the cryptographic algorithms currently in use, necessitating the development of quantum-resistant algorithms.

Future Directions in Cryptography

The field of cryptography is continuously evolving to address emerging threats and challenges. Some future directions include:

Post-Quantum Cryptography

With the advent of quantum computing, researchers are developing cryptographic algorithms that are resistant to quantum attacks. These algorithms aim to provide security even in the presence of powerful quantum computers.

Homomorphic Encryption

This advanced form of encryption allows computations to be performed on encrypted data without decrypting it. Homomorphic encryption has significant implications for data privacy, particularly in cloud computing and data analysis.

Zero-Knowledge Proofs

Zero-knowledge proofs enable one party to prove to another that a statement is true without revealing any information beyond the validity of the statement. This concept has applications in authentication, privacy-preserving protocols, and blockchain technology.

Conclusion

Cryptography is a cornerstone of cyber security, providing the means to protect data and maintain privacy in an increasingly interconnected world. As technology advances and new threats emerge, the field of cryptography will continue to evolve, offering innovative solutions to ensure the security and integrity of our digital lives. By understanding and implementing cryptographic techniques, individuals and organizations can safeguard their information and build a secure future.

Getting Started with PostgreSQL

· 15 min read
Nayanika Mukherjee
Full Stack Developer

The PostgreSQL language, primarily SQL (Structured Query Language), is the standard language for interacting with the PostgreSQL database. SQL is used to define the structure of the database (Data Definition Language or DDL), manipulate the data (Data Manipulation Language or DML), control access (Data Control Language or DCL), and query the data (Data Query Language or DQL). In addition to standard SQL, PostgreSQL supports procedural languages like PL/pgSQL, which allows for writing complex functions and triggers with control structures, error handling, and more. This makes PostgreSQL a versatile and powerful tool for database management, providing both simplicity for basic queries and advanced features for complex database operations. The extensibility of PostgreSQL allows users to define custom functions, operators, and data types, enhancing the database's capabilities beyond the typical relational model.

Introduction to PostgreSQL

PostgreSQL, often referred to as Postgres, is a powerful, open-source object-relational database management system (ORDBMS) known for its robustness, reliability, and performance. Developed by a global community of developers, PostgreSQL has a history spanning over 30 years, which has contributed to its reputation as one of the most advanced and feature-rich databases available.

Key Features

  • Open Source: PostgreSQL is freely available under the PostgreSQL License, an open-source license that allows for wide usage and distribution.
  • ACID Compliance: Ensures transactional integrity with Atomicity, Consistency, Isolation, and Durability, making it suitable for applications requiring reliable data transactions.
  • SQL Standards Compliance: PostgreSQL supports a wide range of SQL standards, providing a rich set of SQL capabilities.
  • Extensibility: Users can define their own data types, operators, index methods, and even procedural languages. Extensions like PostGIS add additional functionality for specific use cases.
  • Concurrency and Performance: Utilizing Multi-Version Concurrency Control (MVCC), PostgreSQL handles multiple transactions concurrently without locking, ensuring high performance and scalability.
  • Replication and High Availability: Supports various replication methods, including streaming replication and logical replication, ensuring data availability and redundancy.
  • Advanced Data Types: Offers support for advanced data types such as JSON, XML, and arrays, allowing for more flexible and complex data models.
  • Full-Text Search: Built-in support for full-text search capabilities enables efficient text searching and indexing.
  • Community and Support: Backed by a strong community and a wealth of documentation, tutorials, and third-party tools, making it easier for users to get support and resources.

History

PostgreSQL's history dates back to 1986, originating as the POSTGRES project at the University of California, Berkeley, under the leadership of Professor Michael Stonebraker. The project aimed to address limitations in existing database systems by introducing an advanced database management system with a focus on extensibility and complex data types. In 1996, the project was renamed PostgreSQL to reflect its support for SQL, the standardized query language. Since then, PostgreSQL has evolved through contributions from a global community of developers, gaining a reputation for its robustness, advanced features, and adherence to SQL standards. Over the decades, it has become one of the most reliable and feature-rich open-source database systems available, widely used in various industries for its performance, scalability, and flexibility.

Installing PostgreSQL

Installing PostgreSQL varies depending on your operating system. Below are step-by-step instructions for installing PostgreSQL on Linux (Ubuntu), Windows, and macOS.

Installing PostgreSQL on Linux (Ubuntu)

  • Update your package lists:
sudo apt update
  • Install PostgreSQL:
sudo apt install postgresql postgresql-contrib
  • Start the PostgreSQL service:
sudo systemctl start postgresql
  • Enable PostgreSQL to start on boot:
sudo systemctl enable postgresql
  • Switch to the postgres user and open the psql shell:
sudo -i -u postgres
psql
  • Exit the psql shell:
\q

Installing PostgreSQL on Windows

  • Download the PostgreSQL installer: Visit the official PostgreSQL website and download the installer for your version of Windows.

  • Run the installer: Double-click the downloaded file to start the installation process.

  • Follow the installation steps:

    • Choose the installation directory.
    • Select components to install (typically, you'll want to include pgAdmin).
    • Set the password for the PostgreSQL superuser (postgres).
    • Select the port number (default is 5432).
    • Choose the locale.
    • Complete the installation: Finish the installation and optionally launch Stack Builder to install additional tools and drivers.
  • Access PostgreSQL: Use the pgAdmin GUI or the psql command-line tool to interact with your PostgreSQL instance.

Installing PostgreSQL on MacOS

  • Install Homebrew (if not already installed): Homebrew is a package manager for macOS. You can install it using:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  • Update Homebrew:
brew update
  • Install PostgreSQL:
brew install postgresql
  • Start PostgreSQL services:
brew services start postgresql
  • Initialize the database (if not automatically done):
initdb /usr/local/var/postgres
  • Access PostgreSQL:
psql postgres

Verifying the Installation:

To verify that PostgreSQL is installed correctly, you can perform a few simple commands.

  • Open psql:
psql -U postgres
  • Create a test database:
CREATE DATABASE testdb;
  • Connect to a test database:
\c testdb
  • Create a Test Table:
CREATE TABLE test_table (id SERIAL PRIMARY KEY, name VARCHAR(50));
  • Insert data into the test table:
INSERT INTO test_table (name) VALUES ('Test Name');
  • Query the test table:
SELECT * FROM test_table;

If you see the inserted data, your PostgreSQL installation is working correctly.

Basic PostgreSQL Configuration

Locate the configuration files:

  • postgresql.conf: Main configuration file for database settings.

  • pg_hba.conf: Client authentication configuration file.

These files are typically located in /etc/postgresql/[version]/main/ on Debian-based systems or /var/lib/pgsql/data/ on RedHat-based systems.

  • Edit postgresql.conf: Open the file in a text editor:

sudo nano /etc/postgresql/[version]/main/postgresql.conf  # Debian/Ubuntu
sudo nano /var/lib/pgsql/data/postgresql.conf # CentOS/RHEL

Some basic settings to configure:

  • Listen addresses: Set the IP addresses on which the server listens for connections.
listen_addresses = '*'
  • Port: The port PostgreSQL server listens on.
port = 5432
  • Shared Buffers: Memory allocated for database caching.
shared_buffers = 128MB
  • Logging: Set logging parameters for troubleshooting.
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

Edit pg_hba.conf: This file controls client authentication.

sudo nano /etc/postgresql/[version]/main/pg_hba.conf  # Debian/Ubuntu
sudo nano /var/lib/pgsql/data/pg_hba.conf # CentOS/RHEL

Add a line to allow connections:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
host all all 0.0.0.0/0 md5

Restart PostgreSQL

After making changes, restart the PostgreSQL service:

sudo systemctl restart postgresql

Verify the Configuration

Check the status of the PostgreSQL service:

sudo systemctl status postgresql
  • Test the connection: Use psql or a GUI tool like pgAdmin to connect to the database with the new user:
psql -h localhost -U myuser -d mydb
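
If the myuser role and mydb database referenced above do not exist yet, they can be created first from the postgres superuser account (the names and password here are placeholders):

CREATE USER myuser WITH ENCRYPTED PASSWORD 'mypassword';
CREATE DATABASE mydb OWNER myuser;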

You've now installed and configured PostgreSQL for basic use, created a database and user, and set up basic configuration parameters. For more advanced configurations and optimizations, refer to the PostgreSQL official documentation.

Basic SQL Commands in PostgreSQL

Here are some basic SQL commands you can use in PostgreSQL to interact with your databases:

  1. Connecting to PostgreSQL:
psql -h localhost -U myuser -d mydb
  1. Basic SQL Commands:
  • Creating a Database:
CREATE DATABASE mydb;
  • Creating a Table:
CREATE TABLE employees (
id SERIAL PRIMARY KEY,
name VARCHAR(100),
position VARCHAR(50),
salary NUMERIC
);
  • Inserting Data:
INSERT INTO employees (name, position, salary) VALUES ('John Doe', 'Manager', 60000);
INSERT INTO employees (name, position, salary) VALUES ('Jane Smith', 'Developer', 50000);
  • Querying Data:
SELECT * FROM employees;
SELECT name, salary FROM employees WHERE position = 'Developer';
  • Updating Data:
UPDATE employees SET salary = 65000 WHERE name = 'John Doe';
  • Deleting Data:
DELETE FROM employees WHERE name = 'Jane Smith';
  • Adding a Column:
ALTER TABLE employees ADD COLUMN hire_date DATE;
  • Removing a Column:
ALTER TABLE employees DROP COLUMN hire_date;
  • Dropping a Table:
DROP TABLE employees;
  1. User and Permission Management:
  • Creating a User:
CREATE USER myuser WITH ENCRYPTED PASSWORD 'mypassword';
  • Granting Privileges:
GRANT ALL PRIVILEGES ON DATABASE mydb TO myuser;
  1. Indexes:
  • Creating an Index:
CREATE INDEX idx_name ON employees (name);
  • Dropping an Index:
DROP INDEX idx_name;
  1. Joins:
  • Inner Join:
SELECT a.name, b.department
FROM employees a
INNER JOIN departments b ON a.department_id = b.id;
  • Left Join:
SELECT a.name, b.department
FROM employees a
LEFT JOIN departments b ON a.department_id = b.id;
  • Right Join:
SELECT a.name, b.department
FROM employees a
RIGHT JOIN departments b ON a.department_id = b.id;

These commands provide a basic foundation for working with PostgreSQL.

Advanced SQL Features

Advanced SQL features in PostgreSQL provide powerful tools for complex data manipulation, analysis, and performance optimization. Here are some key advanced features:

  1. Common Table Expressions (CTEs) CTEs allow you to define temporary result sets that can be referenced within a SELECT, INSERT, UPDATE, or DELETE statement.

Example:

WITH sales AS (
SELECT date, amount FROM transactions WHERE type = 'sale'
)
SELECT date, SUM(amount) FROM sales GROUP BY date;
  1. Window Functions Window functions perform calculations across a set of table rows related to the current row.

Example:

SELECT name, salary, AVG(salary) OVER (PARTITION BY department) AS avg_department_salary
FROM employees;
  1. Recursive Queries Recursive CTEs allow you to query hierarchical data, such as organizational structures or graphs.

Example:

WITH RECURSIVE employee_hierarchy AS (
SELECT id, name, manager_id FROM employees WHERE manager_id IS NULL
UNION ALL
SELECT e.id, e.name, e.manager_id
FROM employees e
INNER JOIN employee_hierarchy eh ON e.manager_id = eh.id
)
SELECT * FROM employee_hierarchy;
  1. JSON and JSONB PostgreSQL provides extensive support for JSON data types, allowing you to store and query JSON data efficiently.

Example:

CREATE TABLE products (
id SERIAL PRIMARY KEY,
data JSONB
);

INSERT INTO products (data) VALUES ('{"name": "Widget", "price": 25, "tags": ["sale", "new"]}');

SELECT data->>'name' AS name, data->>'price' AS price FROM products;
  1. Full-Text Search Full-text search capabilities allow you to perform complex text searches.

Example:

CREATE TABLE documents (
id SERIAL PRIMARY KEY,
content TEXT
);

INSERT INTO documents (content) VALUES ('PostgreSQL is a powerful, open-source relational database system.');

CREATE INDEX idx_gin_content ON documents USING GIN (to_tsvector('english', content));

SELECT * FROM documents WHERE to_tsvector('english', content) @@ to_tsquery('powerful & open-source');
  1. Table Partitioning Table partitioning improves performance and manageability by dividing large tables into smaller, more manageable pieces.

Example:

CREATE TABLE sales (
id SERIAL PRIMARY KEY,
sale_date DATE,
amount NUMERIC
) PARTITION BY RANGE (sale_date);

CREATE TABLE sales_2023 PARTITION OF sales FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
CREATE TABLE sales_2024 PARTITION OF sales FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
  1. Foreign Data Wrappers (FDW) FDWs allow you to access and manipulate data from external data sources as if they were local tables.

Example:

CREATE EXTENSION postgres_fdw;

CREATE SERVER foreign_server
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'foreign_host', dbname 'foreign_db', port '5432');

CREATE USER MAPPING FOR myuser
SERVER foreign_server
OPTIONS (user 'foreign_user', password 'foreign_password');

CREATE FOREIGN TABLE foreign_table (
id INT,
data TEXT
) SERVER foreign_server
OPTIONS (schema_name 'public', table_name 'foreign_table');
  1. Advanced Indexing PostgreSQL supports a variety of indexing methods, including B-tree, Hash, GIN, and GiST indexes.

Example:

CREATE INDEX idx_gin_data ON products USING GIN (data jsonb_path_ops);
  1. Materialized Views Materialized views store the result of a query physically and can be refreshed as needed, improving query performance for complex operations.

Example:

CREATE MATERIALIZED VIEW sales_summary AS
SELECT date_trunc('month', sale_date) AS month, SUM(amount) AS total_sales
FROM sales
GROUP BY month;

REFRESH MATERIALIZED VIEW sales_summary;
  1. Advanced Transaction Management PostgreSQL supports advanced transaction control features such as savepoints, two-phase commit, and more.

Savepoints Example:

BEGIN;
SAVEPOINT my_savepoint;

INSERT INTO employees (name, position, salary) VALUES ('New Employee', 'Intern', 30000);

-- If something goes wrong, rollback to the savepoint
ROLLBACK TO SAVEPOINT my_savepoint;

-- Commit the transaction
COMMIT;
  1. Extensions PostgreSQL has a rich ecosystem of extensions that add new functionality to the database.

Example:

CREATE EXTENSION pg_trgm;  -- Enables fuzzy string matching
CREATE EXTENSION postgis; -- Adds spatial data support

These advanced features of PostgreSQL can help you handle complex data processing and optimization tasks effectively.

Security and User Management

PostgreSQL provides robust security and user management features to ensure data protection and control over database access. Security measures include authentication methods, roles, and permissions. Authentication can be configured using pg_hba.conf, allowing methods like password-based (md5, scram-sha-256), and host-based authentication. Roles in PostgreSQL are used to manage user privileges and can be grouped into two types: login roles (users) and group roles. Roles can be granted specific privileges on database objects like tables, views, and functions, controlling what actions a user can perform. Additionally, PostgreSQL supports SSL for encrypted connections, enhancing data transmission security.

A practical example of user management in PostgreSQL involves creating a new user, assigning roles, and setting permissions. The following SQL code demonstrates this process:

-- Create a new user with a password
CREATE USER new_user WITH ENCRYPTED PASSWORD 'securepassword';

-- Create a new role
CREATE ROLE read_only;

-- Grant the role to the user
GRANT read_only TO new_user;

-- Assign permissions to the role
GRANT CONNECT ON DATABASE mydb TO read_only;
GRANT USAGE ON SCHEMA public TO read_only;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO read_only;

-- Apply the same permissions to any new tables created in the schema
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO read_only;

In this example, a user new_user is created with a specific password, assigned a role read_only, and granted permissions to connect to the database, use the schema, and select data from all tables within the schema. This approach ensures that users have only the necessary access, adhering to the principle of least privilege, thereby enhancing the database's overall security.

Extensions and Customization

PostgreSQL's extensibility and customization capabilities allow it to be tailored to a wide range of use cases. Extensions add new functionality to the database system, such as data types, functions, operators, index types, and procedural languages. These extensions can be installed and managed using SQL commands. PostgreSQL also supports custom functions and stored procedures, which can be written in various languages like PL/pgSQL, PL/Python, PL/Perl, and PL/Tcl. Additionally, PostgreSQL provides mechanisms for defining custom data types and operators, enabling users to extend the database's capabilities to suit specific requirements.

Extensions in PostgreSQL

PostgreSQL comes with a collection of extensions that can be easily installed to extend its functionality. Some popular extensions include:

  • PostGIS: Adds support for geographic objects, enabling location-based queries.

  • pg_trgm: Provides functions and operators for determining the similarity of text based on trigram matching.

  • hstore: Allows for the storage of key-value pairs within a single PostgreSQL value.

  • uuid-ossp: Generates universally unique identifiers (UUIDs).

Installing an Extension

To install an extension, you can use the CREATE EXTENSION command. For example, to install the pg_trgm extension:

CREATE EXTENSION pg_trgm;

Custom Functions and Stored Procedures

Custom functions and stored procedures allow for encapsulating logic that can be reused across queries. They can be written in various procedural languages. Here's an example of a custom function written in PL/pgSQL:

Example: Custom Function

CREATE OR REPLACE FUNCTION calculate_discount(price NUMERIC, discount_rate NUMERIC) RETURNS NUMERIC AS $$
BEGIN
RETURN price - (price * discount_rate / 100);
END;
$$ LANGUAGE plpgsql;

This function, calculate_discount, takes a price and a discount rate as input and returns the discounted price.
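
Once created, the function can be called like any built-in function:

SELECT calculate_discount(200, 15) AS discounted_price;  -- Returns 170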

Custom Data Types

PostgreSQL allows for the creation of custom data types, which can be useful for representing complex data structures.

Example: Custom Data Type

CREATE TYPE address AS (
street VARCHAR,
city VARCHAR,
state VARCHAR,
zip_code VARCHAR
);

This custom type address can then be used as a column type in a table:

CREATE TABLE employees (
id SERIAL PRIMARY KEY,
name VARCHAR(100),
home_address address
);
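
Rows can then be inserted with a ROW constructor cast to the type, and individual fields read with the (column).field syntax (the sample values are purely illustrative):

INSERT INTO employees (name, home_address)
VALUES ('John Doe', ROW('123 Main St', 'Springfield', 'IL', '62704')::address);

-- Parentheses around the column are required when selecting a single field
SELECT name, (home_address).city FROM employees;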

Custom Operators

Custom operators can be created to define new operations on existing data types.

Example: Custom Operator First, create a function that defines the operation:

CREATE FUNCTION add_years(date, integer) RETURNS date AS $$
BEGIN
RETURN $1 + ($2 * INTERVAL '1 year');
END;
$$ LANGUAGE plpgsql;

Next, create the operator that uses this function. PostgreSQL already defines + for a date and an integer (it adds days), so the custom operator needs its own symbol, such as @+:

CREATE OPERATOR @+ (
LEFTARG = date,
RIGHTARG = integer,
PROCEDURE = add_years
);

Now you can use the @+ operator to add years to a date:

SELECT '2024-06-23'::date @+ 5;  -- Returns '2029-06-23'

Conclusion

PostgreSQL stands out as a robust, versatile, and highly extensible open-source relational database system that meets the needs of both small-scale applications and large, complex systems. With its rich set of features including advanced SQL capabilities, comprehensive security and user management, extensive support for extensions, and the ability to create custom functions, data types, and operators, PostgreSQL empowers developers and administrators to tailor the database to their specific requirements. Its commitment to standards compliance, coupled with continuous innovation and community support, ensures that PostgreSQL remains a top choice for organizations seeking a reliable and powerful database solution.

Cyber Security and the Web Explosion

· 18 min read
Pujan Sarkar
Cyber Security Enthusiast

Introduction

In the digital age, the exponential growth of the internet, often referred to as the web explosion, has transformed every facet of modern life. From personal communication to business operations, the internet has become a fundamental pillar. However, this explosion has also introduced significant challenges, particularly in the realm of cyber security. As we become increasingly interconnected, the need to protect sensitive information and maintain privacy has never been more critical.

Understanding the Web Explosion

The term "web explosion" describes the rapid and widespread increase in internet usage, fueled by advancements in technology, increased accessibility, and the proliferation of connected devices. The web explosion is characterized by:

  • Unprecedented Connectivity: Billions of devices are now connected to the internet, creating a vast network of data exchange.
  • Data Proliferation: Enormous amounts of data are generated daily, including personal information, financial transactions, and business communications.
  • Evolving Technologies: Innovations such as cloud computing, IoT (Internet of Things), and AI (Artificial Intelligence) have transformed how we interact with the internet.

The web explosion has led to significant societal changes, influencing how we communicate, work, learn, and entertain ourselves. This unprecedented connectivity has broken down geographical barriers, allowing for real-time communication and collaboration across the globe. However, it has also created new vulnerabilities and threats that cyber security professionals must address.

The Importance of Cyber Security

Cyber security involves protecting internet-connected systems, including hardware, software, and data, from cyber attacks. Effective cyber security measures are essential to safeguarding personal data, maintaining business continuity, and protecting critical infrastructure. In an era where data is considered the new oil, the importance of cyber security cannot be overstated.

Personal Data Protection

For individuals, cyber security is crucial in protecting personal information from identity theft, fraud, and privacy breaches. With the increasing amount of personal data shared online, such as through social media, e-commerce, and online banking, individuals are more vulnerable to cyber attacks. Cyber security measures help prevent unauthorized access to personal data, ensuring that sensitive information remains confidential.

Business Continuity

For businesses, cyber security is vital for maintaining operational integrity and customer trust. Cyber attacks can disrupt business operations, leading to financial losses, reputational damage, and legal consequences. By implementing robust cyber security measures, businesses can protect their assets, ensure regulatory compliance, and maintain customer confidence.

Protection of Critical Infrastructure

On a broader scale, cyber security is essential for protecting critical infrastructure, such as power grids, transportation systems, and healthcare facilities. Cyber attacks on critical infrastructure can have devastating consequences, affecting public safety and national security. Governments and organizations must collaborate to secure these vital systems against cyber threats.

Key Cyber Security Challenges

The web explosion has introduced several cyber security challenges:

  • Increased Attack Surface: With more devices connected to the internet, the potential points of entry for cyber attackers have multiplied. Each connected device represents a possible vulnerability that attackers can exploit.
  • Sophisticated Threats: Cyber attacks have become more sophisticated, employing advanced techniques such as phishing, ransomware, and state-sponsored hacking. Attackers continually evolve their methods to bypass security measures, making it challenging to defend against them.
  • Data Breaches: High-profile data breaches have exposed sensitive information, leading to financial losses and reputational damage. Data breaches often result from vulnerabilities in security systems, highlighting the need for robust defenses.
  • Regulatory Compliance: Organizations must navigate complex regulatory landscapes to ensure data protection and privacy. Compliance with regulations such as GDPR (General Data Protection Regulation) and CCPA (California Consumer Privacy Act) is critical but can be challenging to achieve.

Increased Attack Surface

The increased attack surface is a direct consequence of the web explosion. As more devices become connected to the internet, the number of potential vulnerabilities increases. These devices, often referred to as the Internet of Things (IoT), include everything from smart home appliances to industrial control systems. Each device connected to the network can be a potential target for cyber attackers. To address this challenge, it is essential to implement comprehensive security measures for all connected devices and regularly update and patch software to fix vulnerabilities.

Sophisticated Threats

Sophisticated cyber threats are continually evolving, making it difficult for organizations to defend against them. Phishing attacks, for example, have become more targeted and convincing, often using social engineering techniques to trick individuals into revealing sensitive information. Ransomware attacks have also increased in frequency and severity, with attackers demanding large sums of money to restore access to compromised data. State-sponsored hacking adds another layer of complexity, as nation-states engage in cyber espionage and sabotage. To combat these threats, organizations must stay informed about the latest attack methods and implement advanced security measures, such as threat intelligence and behavior analysis.

Data Breaches

Data breaches can have severe consequences for individuals and organizations. High-profile breaches, such as those affecting Equifax and Target, have exposed millions of people's personal and financial information. These breaches often result from weaknesses in security systems, such as unpatched vulnerabilities or weak access controls. The fallout from a data breach can be extensive, including financial losses, legal liabilities, and damage to an organization's reputation. To prevent data breaches, organizations must implement strong access controls, encrypt sensitive data, and regularly monitor their systems for signs of compromise.

Regulatory Compliance

Regulatory compliance is a significant challenge for organizations, especially those operating in multiple jurisdictions. Regulations such as GDPR and CCPA require organizations to implement strict data protection measures and provide individuals with rights over their data. Non-compliance can result in substantial fines and legal penalties. Achieving compliance requires a comprehensive understanding of the regulatory landscape and the implementation of appropriate security measures. Organizations must also establish processes for responding to data breaches and ensuring that individuals' rights are protected.

Essential Cyber Security Measures

To address these challenges, individuals and organizations must implement robust cyber security measures:

  1. Firewalls and Antivirus Software: Basic but essential tools that protect against unauthorized access and malware. Firewalls act as a barrier between a trusted network and untrusted networks, controlling incoming and outgoing traffic based on security rules. Antivirus software detects and removes malicious software, preventing it from compromising systems.
  2. Encryption: Ensures that data is unreadable to unauthorized users, both in transit and at rest. Encryption protects sensitive information from being intercepted or accessed by unauthorized individuals. Implementing strong encryption protocols for data storage and communication is crucial for maintaining data confidentiality.
  3. Multi-Factor Authentication (MFA): Adds an extra layer of security by requiring multiple forms of verification before granting access. MFA combines something you know (password), something you have (security token), and something you are (biometric verification) to enhance security. It significantly reduces the risk of unauthorized access, even if one factor is compromised.
  4. Regular Software Updates: Keeping software up to date helps protect against known vulnerabilities. Software vendors frequently release updates and patches to fix security flaws. Ensuring that all software, including operating systems and applications, is up to date is essential for maintaining security.
  5. Security Training: Educating employees about cyber security best practices reduces the risk of human error. Employees are often the weakest link in the security chain, making them prime targets for social engineering attacks. Regular training sessions and awareness programs can help employees recognize and respond to potential threats.
  6. Incident Response Plans: Preparing for potential security breaches minimizes damage and ensures a swift recovery. An incident response plan outlines the steps to take in the event of a security breach, including identifying and containing the threat, notifying affected parties, and restoring systems to normal operation. Regularly testing and updating the plan ensures that it remains effective.

Firewalls and Antivirus Software

Firewalls and antivirus software are foundational elements of a robust cyber security strategy. Firewalls act as a first line of defense, controlling the flow of traffic between trusted and untrusted networks. By establishing rules and policies for network traffic, firewalls can block unauthorized access and prevent malicious activity. Antivirus software, on the other hand, detects and removes malware, such as viruses, worms, and trojans, that can compromise systems. Regularly updating antivirus software ensures that it can identify and neutralize the latest threats.

Encryption

Encryption is a critical security measure for protecting sensitive data. It involves converting data into a coded format that can only be read by authorized individuals with the decryption key. Encryption is used to protect data both in transit (e.g., during communication over the internet) and at rest (e.g., stored on a device or server). Strong encryption algorithms, such as AES (Advanced Encryption Standard), provide a high level of security. Implementing encryption helps prevent data breaches and ensures that sensitive information remains confidential.

Multi-Factor Authentication (MFA)

Multi-factor authentication (MFA) significantly enhances security by requiring users to provide multiple forms of verification before accessing a system. This approach reduces the risk of unauthorized access, even if one factor, such as a password, is compromised. MFA typically combines something you know (e.g., a password), something you have (e.g., a security token or mobile device), and something you are (e.g., a fingerprint or facial recognition). Implementing MFA is particularly important for securing sensitive accounts and systems.

Regular Software Updates

Regularly updating software is essential for maintaining security. Software vendors frequently release updates and patches to address security vulnerabilities and improve functionality. Failure to apply these updates can leave systems exposed to known threats. Organizations should establish a process for regularly updating all software, including operating systems, applications, and firmware. Automated update mechanisms can help ensure that updates are applied promptly and consistently.

Security Training

Security training is a crucial component of an organization's cyber security strategy. Employees are often targeted by cyber attackers through social engineering techniques, such as phishing emails and phone scams. By providing regular training and awareness programs, organizations can educate employees about common threats and best practices for avoiding them. Training should cover topics such as recognizing phishing attempts, using strong passwords, and reporting suspicious activity. A well-informed workforce is better equipped to protect against cyber threats.

Incident Response Plans

An incident response plan outlines the steps to take in the event of a security breach. The goal is to minimize damage, restore normal operations, and prevent future incidents. A comprehensive incident response plan includes procedures for identifying and containing the threat, notifying affected parties, and conducting a post-incident analysis to identify lessons learned. Regularly testing and updating the plan ensures that it remains effective and relevant. Incident response teams should conduct simulated exercises to practice their response to various types of cyber incidents.

Emerging Trends in Cyber Security

As cyber threats evolve, so too must our defenses. Some emerging trends in cyber security include:

  • Artificial Intelligence and Machine Learning: AI and ML are increasingly used to detect and respond to cyber threats in real-time. These technologies can analyze vast amounts of data to identify patterns and anomalies that may indicate malicious activity. AI and ML can also automate threat detection and response, reducing the time it takes to mitigate attacks.
  • Zero Trust Architecture: A security model that assumes no trust by default, requiring verification for every access request. The zero trust approach emphasizes continuous monitoring and verification of users and devices, regardless of their location. This model helps prevent unauthorized access and lateral movement within a network.
  • Blockchain Technology: Offers enhanced security features for data integrity and authentication. Blockchain's decentralized and tamper-proof nature makes it an attractive option for securing transactions and data exchanges. It can be used to create secure and transparent systems for various applications, including supply chain management and digital identity verification.
  • Quantum Computing: While still in its infancy, quantum computing poses both challenges and opportunities for future cyber security. Quantum computers have the potential to break current encryption algorithms, necessitating the development of quantum-resistant encryption methods. At the same time, quantum computing can enhance cyber security by solving complex problems more efficiently.

Artificial Intelligence and Machine Learning

Artificial intelligence (AI) and machine learning (ML) are transforming the field of cyber security. These technologies can analyze vast amounts of data to identify patterns and anomalies that may indicate malicious activity. AI and ML can also automate threat detection and response, reducing the time it takes to mitigate attacks. For example, AI-powered security systems can analyze network traffic in real-time to detect and block suspicious activity. Machine learning algorithms can continuously learn from new data, improving their ability to identify emerging threats.

Zero Trust Architecture

The zero trust security model is based on the principle of "never trust, always verify." Unlike traditional security models that assume trust within the network perimeter, zero trust requires continuous verification of all users and devices, regardless of their location. This approach helps prevent unauthorized access and lateral movement within a network. Implementing zero trust involves several key components, including strong authentication mechanisms, micro-segmentation of networks, and continuous monitoring of user and device behavior. By adopting a zero trust model, organizations can enhance their security posture and reduce the risk of data breaches.

Blockchain Technology

Blockchain technology offers unique security features that can enhance data integrity and authentication. A blockchain is a decentralized and tamper-proof ledger that records transactions in a transparent and secure manner. Each block in the chain contains a cryptographic hash of the previous block, creating a secure and immutable record. Blockchain can be used to create secure systems for various applications, such as supply chain management, digital identity verification, and secure voting systems. By leveraging blockchain technology, organizations can enhance the security and transparency of their operations.

Quantum Computing

Quantum computing represents both a challenge and an opportunity for cyber security. Quantum computers have the potential to break current encryption algorithms, posing a significant threat to data security. As quantum computing technology advances, it will be necessary to develop quantum-resistant encryption methods to protect sensitive information. At the same time, quantum computing can enhance cyber security by solving complex problems more efficiently. For example, quantum algorithms can improve the detection of anomalies and the optimization of security protocols. Research and development in quantum computing and quantum-resistant encryption are essential to preparing for the future of cyber security.

The Role of Governments and Organizations

Governments and organizations play a crucial role in cyber security:

  • Regulatory Frameworks: Governments must establish and enforce regulations to protect data and privacy. Regulations such as GDPR and CCPA set standards for data protection and hold organizations accountable for their security practices. Governments must also update regulations to keep pace with evolving threats and technologies.
  • Public-Private Partnerships: Collaboration between public and private sectors enhances threat intelligence and response capabilities. Sharing information about cyber threats and vulnerabilities can help organizations and governments stay ahead of emerging threats. Public-private partnerships can also facilitate the development of best practices and standards for cyber security.
  • Investment in Research: Continuous investment in cyber security research and development is essential to stay ahead of emerging threats. Research efforts should focus on developing new security technologies, improving threat detection and response capabilities, and understanding the evolving threat landscape. Governments and organizations should allocate resources to support cyber security research and innovation.

Regulatory Frameworks

Regulatory frameworks play a critical role in ensuring data protection and privacy. Governments must establish and enforce regulations that set standards for cyber security practices. Regulations such as GDPR and CCPA require organizations to implement strict data protection measures and provide individuals with rights over their data. Compliance with these regulations helps ensure that organizations prioritize cyber security and protect sensitive information. Governments must also update regulations to keep pace with evolving threats and technologies. This involves staying informed about emerging cyber security challenges and adapting regulatory requirements accordingly.

Public-Private Partnerships

Public-private partnerships are essential for enhancing threat intelligence and response capabilities. Collaboration between the public and private sectors allows for the sharing of information about cyber threats and vulnerabilities. This information sharing can help organizations and governments stay ahead of emerging threats and respond more effectively to cyber incidents. Public-private partnerships can also facilitate the development of best practices and standards for cyber security. By working together, the public and private sectors can enhance their collective cyber security posture and protect critical infrastructure.

Investment in Research

Investment in cyber security research and development is crucial for staying ahead of emerging threats. Research efforts should focus on developing new security technologies, improving threat detection and response capabilities, and understanding the evolving threat landscape. Governments and organizations should allocate resources to support cyber security research and innovation. This includes funding academic research, supporting public-private research initiatives, and fostering collaboration between industry and academia. By investing in research, we can develop the knowledge and tools needed to address current and future cyber security challenges.

Case Studies in Cyber Security

The Equifax Data Breach

One of the most significant data breaches in history, the Equifax breach in 2017 exposed the personal information of over 147 million people. Hackers exploited a vulnerability in the company's web application framework. The breach underscored the importance of regular software updates and robust security practices. Equifax faced severe consequences, including financial losses, legal penalties, and reputational damage. The incident highlighted the need for organizations to prioritize cyber security and implement comprehensive measures to protect sensitive information.

WannaCry Ransomware Attack

In 2017, the WannaCry ransomware attack affected more than 200,000 computers across 150 countries. The attack targeted systems running outdated versions of Microsoft Windows. The incident highlighted the critical need for timely software updates and comprehensive security measures. WannaCry caused widespread disruption, affecting organizations in various sectors, including healthcare, transportation, and finance. The attack underscored the importance of patching vulnerabilities and implementing strong defenses against ransomware.

SolarWinds Cyber Espionage Campaign

In 2020, a sophisticated cyber espionage campaign targeted SolarWinds, a major IT management company. The attackers inserted malicious code into the company's software updates, affecting thousands of organizations worldwide, including government agencies. This attack emphasized the importance of supply chain security and the need for stringent monitoring of third-party software. The SolarWinds breach demonstrated the potential impact of supply chain attacks and highlighted the need for organizations to implement comprehensive security measures to protect their supply chains.

Future Directions in Cyber Security

Cyber Security Automation

Automation in cyber security involves using advanced technologies to automate threat detection, response, and recovery processes. This approach helps in handling large volumes of data and detecting threats faster than human capabilities. Cyber security automation can improve the efficiency and effectiveness of security operations, allowing organizations to respond more quickly to cyber incidents. For example, automated threat detection systems can analyze network traffic in real-time to identify and block malicious activity. By leveraging automation, organizations can enhance their cyber security posture and reduce the risk of data breaches.

Cyber Resilience

Cyber resilience refers to an organization's ability to withstand, recover from, and adapt to cyber attacks. It involves proactive measures such as continuous monitoring, risk management, and disaster recovery planning. Building cyber resilience requires a comprehensive approach that includes implementing strong security measures, regularly testing and updating incident response plans, and fostering a culture of security awareness. By enhancing cyber resilience, organizations can minimize the impact of cyber attacks and ensure business continuity.

Privacy-Enhancing Technologies

As data privacy concerns grow, privacy-enhancing technologies (PETs) are becoming crucial. These technologies include methods like homomorphic encryption, differential privacy, and secure multi-party computation, which allow for data analysis while preserving privacy. PETs enable organizations to process and analyze data without exposing sensitive information, addressing privacy concerns and regulatory requirements. Implementing privacy-enhancing technologies can help organizations protect individuals' privacy and build trust with their customers.

Cyber Security Workforce Development

The demand for skilled cyber security professionals is increasing. Investing in education and training programs to develop a skilled cyber security workforce is essential to address the growing cyber threat landscape. Governments and organizations should support initiatives to attract and retain cyber security talent, such as scholarships, internships, and professional development programs. By fostering a skilled workforce, we can enhance our collective ability to defend against cyber threats and protect sensitive information.

Conclusion

The web explosion has brought tremendous opportunities and conveniences, but it has also created significant cyber security challenges. As we navigate this digital landscape, understanding and implementing effective cyber security measures is paramount. By staying informed about emerging threats, investing in advanced technologies, and fostering a culture of security, we can protect our digital world and ensure a secure future.

A Beginner’s Guide to the Top 5 React Hooks

· 10 min read

Why React Hooks?

Evolution of React:

  • Since its inception, React has undergone significant evolution, with each new release introducing enhancements and improvements to the framework. In the early days of React, class components were the primary means of building reusable UI components. Class components provided a way to manage component state and lifecycle methods, allowing developers to create dynamic and interactive user interfaces.

Introducing Functional Components:

  • With the release of React 16.8 in February 2019, the React team introduced a groundbreaking feature known as hooks. This shift towards functional components with hooks opened up new possibilities for organizing and managing React code, leading to cleaner, more concise component logic.

The Need for Hooks:

  • While class components served as the cornerstone of React development for many years, they had certain limitations. Class components often led to complex hierarchies, known as “wrapper hell,” and made it challenging to reuse component logic.

  • In response to these challenges, the React team introduced hooks as a more elegant and composable solution for managing component logic. Hooks allow developers to encapsulate stateful logic and side effects within functional components, making it easier to understand, test, and maintain React code.

  • Now that we have some context on React hooks, let’s explore the React hooks every beginner should know.

1. ‘useState’ hook

  • The useState hook is one of the fundamental hooks in React, allowing functional components to manage local state. With useState, you can add state variables to your components and update them over time, enabling dynamic and interactive user interfaces.

Importing useState hook from react:

import { useState } from 'react';

Declaring a state variable named count with an initial value of 0,

  • The useState hook takes an initial state value as an argument and returns a stateful value paired with a function to update that value.
const [count, setCount] = useState(0);

Updating count variable using setCount method,

const Counter = () => {
const [count, setCount] = useState(0);

return (
<div>
<p>You clicked {count} times</p>
<button onClick={() => setCount(count + 1)}>Click me</button>
</div>
);
};
  • In the above example, when the button is clicked, the onClick event handler calls the setCount function with the updated value of count (count + 1), causing the component to re-render with the new state value.

  • Note: We cannot update a state variable by assigning to it directly (e.g., count = count + 1); we must call the setCount function returned by useState.

Updating objects and arrays in useState,

  • To update specific properties of an object or array stored in state, use the functional form of the set function and spread the previous state (prevState) along with the updated properties. An object example follows, and an array example appears after it.
const Counter = () => {
const [person, setPerson] = useState({id: '1', name: 'John', age: 25});

const updateName = (newName) => {
setPerson(prevState => {
return { ...prevState, name: newName };
});
};

const updateAge = (newAge) => {
setPerson(prevState => {
return { ...prevState, age: newAge };
});
};

return (
<div>
{/* form to update name and age */}
</div>
);
};
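
Arrays follow the same rule: create a new array from the previous one instead of mutating it in place. A minimal sketch with a hypothetical todos list:

const TodoList = () => {
  const [todos, setTodos] = useState(['Learn hooks']);

  const addTodo = (newTodo) => {
    // Copy the previous array and append the new item
    setTodos(prevTodos => [...prevTodos, newTodo]);
  };

  const removeTodo = (index) => {
    // Produce a new array without the item at the given index
    setTodos(prevTodos => prevTodos.filter((_, i) => i !== index));
  };

  return (
    <div>
      {/* form to add and remove todos */}
    </div>
  );
};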

2. ‘useEffect’ hook

  • The useEffect hook in React enables functional components to perform side effects, such as data fetching, DOM manipulation, or subscriptions. It replaces lifecycle methods like componentDidMount, componentDidUpdate, and componentWillUnmount in class components.

componentDidMount

  • In class components, this method is called after a component is mounted or rendered in the DOM. It’s commonly used for performing initialization that requires DOM nodes or data fetching operations.

  • For componentDidMount behavior, you can pass an empty dependency array ([]) as the second argument to useEffect, which tells React to run the effect only once after the initial render.

useEffect(() => {
// Perform initialization or side effects
console.log("The component is rendered initially.")
}, []);

componentDidUpdate

  • In class components, this method is called after the component’s state or props are updated and the component re-renders. It’s useful for performing side effects after a component updates, such as updating the DOM in response to prop or state changes.

  • For componentDidUpdate behavior, we can simply omit the dependency array in useEffect. This means the effect will be executed whenever any state or prop value changes, potentially leading to unnecessary re-renders or performance issues if not used carefully.

useEffect(() => {
// Effect runs after every render
console.log("The component is rendered.")
});
  • To overcome unnecessary re-renders, you can specify dependencies in the dependency array. When any of these dependencies change, the effect will be re-run.
useEffect(() => {
// Perform side effects after state or props update
console.log("dependency1 or dependency2 have updated.")
}, [dependency1, dependency2]);

componentWillUnmount

  • In class components, this method is called just before a component is unmounted from the DOM. It’s used for cleanup tasks like removing event listeners or cancelling network requests to prevent memory leaks.

  • For componentWillUnmount behavior, you can return a cleanup function from the effect. This function will be called when the component is unmounted.

useEffect(() => {
// Perform side effects
console.log("dependency is updated.")
return () => {
// Cleanup tasks
console.log("The component is unmounted.")
};
}, [dependency]);
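
Putting these pieces together, a common use of useEffect is fetching data when a component mounts. The sketch below assumes a hypothetical /api/user endpoint and omits error handling for brevity:

import React, { useState, useEffect } from 'react';

const UserProfile = () => {
  const [user, setUser] = useState(null);

  useEffect(() => {
    let isCancelled = false;

    fetch('/api/user')
      .then(response => response.json())
      .then(data => {
        // Only update state if the component is still mounted
        if (!isCancelled) {
          setUser(data);
        }
      });

    // Cleanup: flag the request as stale when the component unmounts
    return () => {
      isCancelled = true;
    };
  }, []); // Empty dependency array: run once after the initial render

  return <div>{user ? user.name : 'Loading...'}</div>;
};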

3. ‘useContext’ hook

  • The useContext hook is a powerful feature in React that allows components to consume data from a context without explicitly passing the data through each component manually (as props). This is particularly useful for passing down global data, such as themes, user authentication status, language preferences, etc., to deeply nested components in your application without prop drilling.

Create a Context

  • First, you need to create a context using the createContext function provided by React. This function returns a Context object.
// themeContext.js
import React, { createContext } from 'react';

export const ThemeContext = createContext(null);

Provide Context

  • Then, you need to wrap the part of your component tree where you want to make the context available using the Context.Provider component. This is typically placed at a higher level in your component hierarchy.
function App() {
const theme = 'dark';

return (
<ThemeContext.Provider value={theme}>
<MyComponent/>
</ThemeContext.Provider>
);
}

Consume Context

  • Now, any component within the provider can access the context using the useContext hook.
import React, { useContext } from 'react';
import { ThemeContext } from './themeContext';

function MyComponent() {
const theme = useContext(ThemeContext);

return <div
style={{
background: theme === 'dark' ?
'#222' : '#fff' }
}
>
Content
</div>;
}
  • Now, MyComponent can access the theme value without having to pass it as a prop from higher-level components.

  • That’s the basics of using the useContext hook in React! It simplifies the process of passing data through the component tree, making your code cleaner and more efficient.

4. ‘useReducer’ hook

  • The useReducer hook in React is used for managing more complex state logic within functional components. It's an alternative to the more commonly used useState hook, especially when you have state transitions that are more intricate and involve multiple sub-values or when the next state depends on the previous one.

State Initialization

  • You start by defining an initial state. This could be a single value, an object, or an array depending on your application’s needs.
const Counter = () => {
// Step 1: Define initial state
const initialState = { count: 0 };

return (
<div>
content
</div>
);
};

Reducer Function

  • You define a reducer function. This function takes two arguments: the current state and an action, and returns the new state based on the action. The reducer function is responsible for updating the state.
// Step 2: Define reducer function
const reducer = (state, action) => {
switch (action.type) {
case 'increment':
return { count: state.count + 1 };
case 'decrement':
return { count: state.count - 1 };
default:
throw new Error();
}
};

Dispatching Actions

  • To update the state, you dispatch actions. An action is an object that describes what kind of state change you want to perform. It typically has a type property that describes the action type, and optionally a payload property that carries data relevant to the action.
import { useReducer } from 'react';

const Counter = () => {
const initialState = { count: 0 };

// Step 3: Use useReducer hook
const [state, dispatch] = useReducer(reducer, initialState);

return (
<div>
Count: {state.count}
<button onClick={() => dispatch({ type: 'increment' })}>+</button>
<button onClick={() => dispatch({ type: 'decrement' })}>-</button>
</div>
);
};
  • When you dispatch an action, React calls your reducer with the current state and the action you’ve dispatched. The reducer decides how to update the state based on the action type and returns the new state.

  • React re-renders the component with the new state, and any components that depend on that state will also re-render.
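
Actions can also carry data in a payload property. For example, a hypothetical 'set' action could replace the count with a specific value:

const reducer = (state, action) => {
  switch (action.type) {
    case 'increment':
      return { count: state.count + 1 };
    case 'decrement':
      return { count: state.count - 1 };
    case 'set':
      // The payload carries the data needed for this state change
      return { count: action.payload };
    default:
      throw new Error();
  }
};

// In the component:
// dispatch({ type: 'set', payload: 100 });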

5. ‘useRef’ hook

  • useRef is used to create a mutable reference that persists across renders without causing re-renders when the value changes.

Example 1

import React, { useRef } from 'react';

function MyComponent() {
// Create a ref to store a DOM element
const myInputRef = useRef(null);

// Function to focus on the input element
const focusInput = () => {
// Accessing the current value of the ref
myInputRef.current.focus();
};

return (
<div>
{/* Attaching the ref to the input element */}
<input type="text" ref={myInputRef} />
<button onClick={focusInput}>Focus Input</button>
</div>
);
}

export default MyComponent;

In this example, myInputRef is created using useRef, and it's attached to the input element. When the button is clicked, the focusInput function is called, which accesses the current property of the myInputRef to focus on the input element.

Example 2

import React, { useState, useRef } from 'react';

function Counter() {
// State for storing the count
const [count, setCount] = useState(0);

// Ref for storing the interval ID
const intervalIdRef = useRef(null);

// Function to start the counter
const startCounter = () => {
// Check if counter is already running
if (intervalIdRef.current !== null) {
return; // If already running, do nothing
}

// Start the counter
intervalIdRef.current = setInterval(() => {
setCount(prevCount => prevCount + 1);
}, 1000);
};

// Function to stop the counter
const stopCounter = () => {
// Check if counter is running
if (intervalIdRef.current === null) {
return; // If not running, do nothing
}

// Stop the counter
clearInterval(intervalIdRef.current);
intervalIdRef.current = null;
};

return (
<div>
<h2>Counter: {count}</h2>
<button onClick={startCounter}>Start</button>
<button onClick={stopCounter}>Stop</button>
</div>
);
}

export default Counter;
  • We have a state variable count that stores the current count.
  • We create a ref named intervalIdRef using useRef(null). This ref will be used to store the ID returned by setInterval so that we can later clear the interval.
  • startCounter function starts a timer using setInterval and increments the count every second. It first checks if the counter is already running to avoid starting multiple timers simultaneously.
  • stopCounter function stops the timer by calling clearInterval. It also checks if the counter is running before attempting to stop it.
  • The buttons call startCounter and stopCounter when clicked, respectively.
  • This example demonstrates how useRef can be used to store mutable values (in this case, the interval ID) across re-renders without causing unnecessary re-renders.

Unveiling the Significance of JS ES6 features

· 14 min read
  • In the fast-paced world of web development, staying ahead of the curve is not just an advantage — it’s a necessity. Enter ECMAScript 6, or ES6 for short, a game-changer that has redefined the landscape of JavaScript programming.

  • What if you could write JavaScript code that is not only more concise but also more powerful? How would it feel to have a set of features that streamline your workflow, enhance code readability, and unlock new possibilities in your projects? The answer lies in understanding the transformative impact of ES6 on the world’s most widely-used programming language.

  • As we dive into the significance of ES6, we’ll discover how this evolution has not only simplified the developer experience but has also laid the foundation for more robust, expressive, and maintainable code.

  • ES6 introduces a plethora of features that elevate JavaScript development to new heights.

1. let and const

  • In ECMAScript 2015 (ES6), the let and const keywords were introduced to declare variables, offering improvements over the traditional var keyword.
  • Scope: Variables declared with ‘let’ and ‘const’ have block-level scope, meaning they are limited to the block, statement, or expression in which they are declared, whereas variables declared with ‘var’ have function-level (or global) scope.
if (true) {
let x = 10;
const y = 20;
var z = 30;
console.log(x); // Outputs: 10
console.log(y); // Outputs: 20
console.log(z); // Outputs: 30
}


console.log(x); // Error: x is not defined
console.log(y); // Error: y is not defined
console.log(z); // Outputs: 30
  • Hoisting: Unlike variables declared with var, variables declared with let and const cannot be accessed before their declaration. They are hoisted but left uninitialized, remaining in the temporal dead zone until the point of declaration.
console.log(a); // Outputs: undefined (the declaration is hoisted, the assignment is not)
var a = 20;

console.log(b); // Error: Cannot access 'b' before initialization
let b = 20;

console.log(c); // Error: Cannot access 'c' before initialization
const c = 20;
  • Reassignment: Variables declared with let can be reassigned, allowing for flexibility in updating values, whereas variables declared with const cannot be reassigned once a value is assigned.
let p = 30;
p = 40; // Valid

const pi = 3.14;
pi = 3.145; // Error: Assignment to constant variable
  • However, this does not make objects or arrays declared with const immutable; it means the reference to the object or array cannot be changed.
const colors = ['red', 'green', 'blue'];
colors.push('yellow'); // Valid
colors = ['purple']; // Error: Assignment to constant variable
  • Declaration: Variables declared with const must be assigned a value at the time of declaration.
var x; // valid
let y; // valid
const z; // Error: Missing initializer in const declaration

2. Arrow functions

  • Arrow functions, introduced in ECMAScript 2015 (ES6), provide a concise and more readable syntax for writing functions in JavaScript.
// In ES5
var add = function(x, y) {
return x + y;
};
// ES6 (Arrow Function)
const add = (x, y) => {
return x + y;
}
// If the function body is a single expression,
// you can omit the braces {} and the return keyword.
const add = (x, y) => x + y;
  • Arrow functions are more concise compared to traditional function expressions, especially when the function has a simple body.

3. Template literals

  • Template literals, introduced in ECMAScript 2015 (ES6), provide a more flexible and concise way to create strings in JavaScript. They use backticks (`) instead of single or double quotes and allow for embedded expressions and multiline strings.
  • Embedded expressions: Template literals support the embedding of expressions, including variables, functions, and operations, directly within the string.
// In ES5
var a = 5;
var b = 10;
var result = 'The sum of ' + a + ' and ' + b + ' is ' + (a + b) + '.';

// In ES6 (Template Literal with Embedded Expression)
const a = 5;
const b = 10;
const result = `The sum of ${a} and ${b} is ${a + b}.`;
  • Multiline strings: One of the significant advantages of template literals is their ability to create multiline strings without the need for explicit line breaks or concatenation.
// In ES5
var multilineString = 'This is a long string\n' +
'that spans multiple lines\n' +
'using concatenation.';

// In ES6 (Template Literal)
const multilineString = `This is a long string
that spans multiple lines
using template literals.`;

4. Destructuring assignments

  • Destructuring is a powerful feature introduced in ECMAScript 2015 (ES6) that allows you to extract values from arrays or properties from objects and assign them to variables in a more concise and expressive way.
  • It simplifies the process of working with complex data structures.
  • Array destructuring
// In ES5
var numbers = [1, 2, 3];
var a = numbers[0];
var b = numbers[1];
var c = numbers[2];

// In ES6
const [a, b, c] = [1, 2, 3];
console.log(a, b, c); // Outputs: 1 2 3
  • Object destructuring - Alias assignment, Nested destructuring
// In ES5
var person = { name: 'John', marks: 85 };
var name = person.name;
var marks = person.marks;

// In ES6
const person = { name: 'John', marks: 85 };
const { name, marks } = person;
console.log(name, marks); // Outputs: John 85

// In ES6 - alias assignment
const person = { name: 'John', marks: 85 };
const { name: studentName, marks: finalMarks } = person;
console.log(studentName, finalMarks); // Outputs: John 85

//In ES6 - Nested destructuring
const user = {
name: 'John',
age: 30,
address: {
city: 'New York',
country: 'USA'
}
};

const { name, age, address: { city, country } } = user;
console.log(name, age, city, country); // Outputs: John 30 New York USA
  • Function Parameter Destructuring
// ES6
function printPerson({ firstName, lastName }) {
console.log(`${firstName} ${lastName}`);
}

const person = { firstName: 'John', lastName: 'Doe' };
printPerson(person); // Outputs: John Doe

5. Default parameters

  • Default parameters, introduced in ECMAScript 2015 (ES6), allow you to assign default values to function parameters in case the arguments are not provided or are explicitly set to undefined.
// without default values

function add(x, y) {
return x + y;
}
console.log(add()); // outputs NaN
console.log(add(1, 2)); // outputs 3

// let’s see how we handle this issue in ES5 and ES6.

// In ES5
function add(x, y) {
x = x || 0;
y = y || 0;
return x + y;
}

// ES6 (Default Parameters)
function add(x = 0, y = 0) {
return x + y;
}

console.log(add()); // Outputs: 0
console.log(add(1, 2)); // Outputs: 3

6. The spread and rest operator

  • The rest and spread operators are two powerful features introduced in ECMAScript 2015 (ES6) that enhance the way we work with arrays and function parameters. Despite having similar syntax (the ellipsis …), they serve different purposes.
  • As the name suggests, the spread operator “spreads” the values in an array or a string across one or more arguments. In cases where we require all the elements of an iterable or object to help us achieve a task, we use a spread operator.
// In ES6 - spread operator example 1 with array

const greeting = ['Welcome', 'back', 'John!'];

console.log(greeting); // ['Welcome', 'back', 'John!']
console.log(...greeting); // Welcome back John!

// Note: console.log(...greeting) is equivalent to console.log('Welcome', 'back', 'John!');
// In ES6 - spread operator example 1 with Object

const obj1 = { a : 1, b : 2 };

// add the members of obj1 to obj2, plus a new property c
const obj2 = { ...obj1, c: 3 };
console.log(obj2); // {a: 1, b: 2, c: 3}
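
The spread operator is also handy for passing an array's elements as individual arguments to a function:

// In ES6 - spread operator example with a function call
const numbers = [5, 1, 9, 3];

console.log(Math.max(...numbers)); // 9
// Equivalent to console.log(Math.max(5, 1, 9, 3));
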
  • The rest operator is the converse of the spread operator: while the spread operator expands the elements of an iterable, the rest operator collects several elements and compresses them into an array. When a function needs to accept arguments but we are not sure how many will be passed, the rest parameter makes this easier.
// In ES6 - rest operator example 1
let func = function(...args) {
console.log(args);
}

func(3); // [3]
func(4, 5, 6); // [4, 5, 6]

// In ES6 - rest operator example 2
function func(a, b, ...nums) {
console.log( a + ' ' + b ); // 1 2
// the rest go into the nums array
console.log(nums); // [3, 4, 5]
}

func(1, 2, 3, 4, 5);

// Note: There can be only one rest parameter in a JavaScript function, and
// it must be the last parameter in the list; otherwise a syntax error is thrown.

7. Promises

  • Promises were introduced in ECMAScript 2015 (ES6) to simplify asynchronous programming and provide a more structured way to handle asynchronous operations. They are especially useful for dealing with asynchronous operations like network requests, file reading, or timeouts.
  • Creating a promise — A Promise is created using the Promise constructor, which takes a function called the "executor." The executor function has two parameters, resolve and reject, which are functions provided by the Promise implementation.
  • A Promise can be in one of three states: Pending, Fulfilled, Rejected
  • Pending: The initial state; the promise is neither fulfilled nor rejected.
  • Fulfilled: The operation completed successfully, and the promise has a resulting value.
  • Rejected: The operation failed, and the promise has a reason for the failure.
  • Handling promises — To handle the result of a Promise, you can use the .then() method for success and .catch() method for failure. These methods are called on the Promise instance.
// creating a promise
const fetchData = () => {
return new Promise((resolve, reject) => {
// Simulate an asynchronous operation (e.g., fetching data from a server)
setTimeout(() => {
const success = Math.random() > 0.5; // Simulate success or failure randomly

if (success) {
const data = { message: 'Data successfully fetched!' };
resolve(data); // Resolve with the fetched data
} else {
reject(new Error('Failed to fetch data')); // Reject with an error
}
}, 1000); // Simulate a 1-second delay
});
};

// handling promise
fetchData()
.then((result) => {
console.log(result.message);
})
.catch((error) => {
console.error(error.message);
});
  • We define a function fetchData that returns a new Promise.

  • Inside the Promise constructor, we simulate an asynchronous operation using setTimeout. The operation randomly succeeds or fails.

  • If the operation is successful, we call resolve with an object representing the fetched data. If there is an error, we call reject with an Error object.

  • We use the then method to handle the successful result and the catch method to handle errors.

  • Promise.all() — A utility method that takes an array of Promises and returns a new Promise that is fulfilled with an array of the fulfilled values when all the promises in the array are fulfilled. If any promise in the array is rejected, the resulting Promise is rejected with the reason of the first rejected promise.

const promise1 = Promise.resolve('One');
const promise2 = Promise.resolve('Two');
const promise3 = new Promise((resolve, reject) => {
setTimeout(() => resolve('Three'), 1000);
});

Promise.all([promise1, promise2, promise3])
.then((results) => {
console.log(results); // Outputs: ['One', 'Two', 'Three']
})
.catch((error) => {
console.error(error);
});

  • Promise.race() — Similar to Promise.all(), but it settles as soon as any of the promises in the array settles, whether fulfilled or rejected.
const promise1 = Promise.resolve('Fast');
const promise2 = new Promise((resolve, reject) => {
setTimeout(() => resolve('Slow'), 2000);
});

Promise.race([promise1, promise2])
.then((result) => {
console.log(result); // Outputs: 'Fast'
})
.catch((error) => {
console.error(error);
});

8. Modules

  • In ECMAScript 2015 (ES6), the module system was introduced to allow developers to organize their code into reusable and maintainable pieces. Before ES6 modules, JavaScript relied on various patterns like immediately-invoked function expressions (IIFE) or the CommonJS pattern for modular development. ES6 modules provide a standardized and native way to work with modules in JavaScript.
  • In ES6, a file becomes a module when it contains at least one import or export statement.
  • export statement is used to specify what values are accessible from a module, and the import statement is used to bring those values into another module.
  • Individual export
// student.js
export const name = "Mary";
export const age = 17;

// main.js
import { name, age } from "./person.js";
console.log(name, age); // outputs: Mary 17


  • All at once export
// student.js
const name = "Jesse";
const age = 40;

export {name, age};

// main.js
import { name, age } from "./person.js";
console.log(name, age); // outputs: Mary 17
  • default export — A module can have a default export, which is the main export of the module. It is often used when a module represents a single value or function.
// myModule.js

// Default exporting a function
export default function() {
console.log('Default function executed!');
}

// main.js

// Importing the default export
import myDefaultFunction from './myModule';

myDefaultFunction(); // Outputs: Default function executed!

9. Classes

  • Classes in ECMAScript 2015 (ES6) introduced a more convenient and syntactic way to create constructor functions and work with prototype-based inheritance. JavaScript, being a prototype-based language, lacked a formal class structure prior to ES6.
  • Classes provide a cleaner and more familiar syntax for creating objects and organizing code in an object-oriented manner.
class Animal {
// Constructor method for initializing instances
constructor(name, sound) {
this.name = name;
this.sound = sound;
}

// Method for making the animal make its sound
makeSound() {
console.log(`${this.name} says ${this.sound}`);
}
}

// Creating instances of the class
const dog = new Animal('Dog', 'Woof');
const cat = new Animal('Cat', 'Meow');

// Using class methods
dog.makeSound(); // Outputs: Dog says Woof
cat.makeSound(); // Outputs: Cat says Meow

Classes support inheritance through the extends keyword. This allows a new class to inherit the properties and methods of an existing class.

class Cat extends Animal {
constructor(name, sound, color) {
super(name, sound); // Calls the constructor of the parent class
this.color = color;
}

// unique method for cats
purr() {
console.log(`${this.name} purrs softly.`);
}
}

const kitty = new Cat('Kitty', 'Meow', 'White');
kitty.makeSound(); // Outputs: Kitty says Meow
kitty.purr(); // Outputs: Kitty purrs softly.

10. Symbols

  • Symbols are a primitive data type introduced in ECMAScript 2015 (ES6) to provide a way to create unique identifiers.
  • Unlike strings or numbers, symbols are guaranteed to be unique, which makes them useful for scenarios where you need to create property keys that won’t clash with other properties.
// creating symbol
const mySymbol = Symbol();
console.log(typeof mySymbol); // Outputs: symbol

Symbols are guaranteed to be unique, even if they have the same description. The description is a human-readable string that can be used for debugging but does not affect the uniqueness of the symbol.

const symbol1 = Symbol('apple');
const symbol2 = Symbol('apple');

console.log(symbol1 === symbol2); // Outputs: false

// Symbol-keyed properties are skipped by for...in and Object.keys,
// helping prevent unintentional name collisions.

const myObject = {
[Symbol('key')]: 'value',
};

for (const key in myObject) {
console.log(key); // No output, as symbol keys are not visited by for...in
}

console.log(Object.keys(myObject)); // Outputs: []
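
Symbol keys are not completely hidden; they can still be listed explicitly when needed:

console.log(Object.getOwnPropertySymbols(myObject)); // Outputs: [ Symbol(key) ]
console.log(myObject[Object.getOwnPropertySymbols(myObject)[0]]); // Outputs: value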

Conclusion

From the simplicity of arrow functions to the modularity of ES6 modules, and the flexibility of template literals, ES6 has revolutionized the way developers write and structure their code. The introduction of let and const for variable declarations, destructuring for concise data extraction, and the powerful features of Promises for asynchronous operations have all contributed to a more robust and developer-friendly JavaScript.

ES6 not only addressed common pain points in JavaScript but also paved the way for a more modern and scalable approach to building applications. With advancements like the spread and rest operators, default parameters, and the introduction of classes for object-oriented programming, ES6 has empowered developers to create cleaner, more maintainable code.

In conclusion, ES6 has not only elevated the capabilities of JavaScript but has also redefined the developer experience, making it more enjoyable and productive.

Mastering Design Patterns in Java

· 19 min read
  • In the world of software engineering, turning ideas into actual code can be tricky.

  • As developers, our goal is not just to make things work, but also to make sure our code is maintainable, scalable, adaptable and reusable.

  • Enter design patterns — the time-tested blueprints that empower us to tackle recurring design problems with elegance and efficiency.

  • At its heart, a design pattern is like a ready-made solution for common problems we face when designing software. These solutions are like shortcuts, saving us time and effort by using proven strategies that experts have refined over many years.

  • In this article, we’ll delve into some of the most important design patterns that every developer should be familiar with. We’ll explore their principles, why they’re useful, and how you can use them in real projects. Whether you’re struggling with creating objects, organizing relationships between classes, or managing how objects behave, there’s a design pattern that can help.

  • Let’s begin.

1. Singleton pattern

  • The Singleton pattern is a creational design pattern that ensures a class has only one instance and provides a global point of access to that instance. In simpler terms, it’s like ensuring there’s only one unique copy of a particular object in your program, and you can access that object from anywhere in your code.

  • Let’s take a simple real-world example: the clipboard. Picture multiple applications or processes running on a computer, each attempting to access the clipboard concurrently. If each application were to create its own version of the clipboard to manage copy and paste operations, it could lead to conflicting data.

public class Clipboard {

private String value;

public void copy(String value) {
this.value = value;
}

public String paste() {
return value;
}
}
  • In the above example, we've defined a Clipboard class capable of copying and pasting values. However, if we were to create multiple instances of Clipboard, each instance would hold its own separate data.
public class Main {
public static void main(String[] args) {

Clipboard clipboard1 = new Clipboard();
Clipboard clipboard2 = new Clipboard();

clipboard1.copy("Java");
clipboard2.copy("Design patterns");

System.out.println(clipboard1.paste()); // output: Java
System.out.println(clipboard2.paste()); // output: Design patterns
}
}
  • Clearly, this isn’t ideal. We expect both clipboard instances to display the same value. This is precisely where the Singleton pattern proves its worth.
public class Clipboard {

private String value;

private static Clipboard clipboard = null;

// Private constructor to prevent instantiation from outside
private Clipboard() {}

// Method to provide access to the singleton instance
public static Clipboard getInstance() {
if (clipboard == null) {
clipboard = new Clipboard();
}
return clipboard;
}

public void copy(String value) {
this.value = value;
}

public String paste() {
return value;
}
}
  • By implementing the Singleton pattern, we ensure that only one instance of the Clipboard class exists throughout the program execution.
public class Main {
public static void main(String[] args) {

// Getting the singleton instances
Clipboard clipboard1 = Clipboard.getInstance();
Clipboard clipboard2 = Clipboard.getInstance();

clipboard1.copy("Java");
clipboard2.copy("Design patterns");

System.out.println(clipboard1.paste()); // output: Design patterns
System.out.println(clipboard2.paste()); // output: Design patterns
}
}
  • Now, both clipboard1 and clipboard2 reference the same instance of the Clipboard class, ensuring consistency across the application.
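
One caveat worth noting: the lazy getInstance() shown above is not thread-safe, because two threads could both see clipboard as null and create separate instances. A minimal sketch of a thread-safe variant using the initialization-on-demand holder idiom (the class name ThreadSafeClipboard is only for illustration):

public class ThreadSafeClipboard {

    private String value;

    // Private constructor to prevent instantiation from outside
    private ThreadSafeClipboard() {}

    // The nested holder class is loaded only on the first call to getInstance(),
    // and class initialization is guaranteed to be thread-safe by the JVM.
    private static class Holder {
        private static final ThreadSafeClipboard INSTANCE = new ThreadSafeClipboard();
    }

    public static ThreadSafeClipboard getInstance() {
        return Holder.INSTANCE;
    }

    public void copy(String value) {
        this.value = value;
    }

    public String paste() {
        return value;
    }
}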

2. Factory Design pattern

  • The Factory Design Pattern is a creational design pattern that provides an interface for creating objects in a super class but allows subclasses to decide which class to instantiate. In other words, it provides a way to delegate the instantiation logic to child classes.

  • Imagine you’re building a program that simulates a simple console based calculator. You have different types of operations like addition, subtraction, multiplication, division etc. Each operation has its own unique behavior. Now, you want to create these operation objects in your program based on customer choice.

  • The challenge is you need a way to create these operation objects without making your code too complex or tightly coupled. This means you don’t want your code to rely too heavily on the specific classes of operations directly. You also want to make it easy to add new types of operations later without changing a lot of code.

  • The Factory Design Pattern helps you solve this problem by providing a way to create objects without specifying their exact class. Instead, you delegate the creation process to a factory class.

  • Define the product interface. (Operation).

public interface Operation {
double calculate(double number1, double number2);
}
  • Implement concrete products for each operation.
// for addition
public class AddOperation implements Operation{
@Override
public double calculate(double number1, double number2) {
return number1 + number2;
}
}

// for subtraction
public class SubOperation implements Operation{
@Override
public double calculate(double number1, double number2) {
return number1 - number2;
}
}

// for multiplication
public class MulOperation implements Operation{
@Override
public double calculate(double number1, double number2) {
return number1 * number2;
}
}

// for division
public class DivOperation implements Operation{
@Override
public double calculate(double number1, double number2) {
if(number2 == 0)
throw new ArithmeticException("Cannot divide by zero!");
return number1 / number2;
}
}

// An exception thrown when the user inputs an invalid choice for the operation
public class InvalidOperationException extends Exception{
public InvalidOperationException(String message) {
super(message);
}

}
  • Create a factory class (OperationFactory) with a method (getInstance) to create objects based on some parameter.
public interface OperationFactory {
Operation getInstance(int choice) throws InvalidOperationException;
}

public class OperationFactoryImpl implements OperationFactory{
@Override
public Operation getInstance(int choice) throws InvalidOperationException {
if(choice==1)
return new AddOperation();
else if(choice==2)
return new SubOperation();
else if(choice==3)
return new MulOperation();
else if(choice==4)
return new DivOperation();
throw new InvalidOperationException("Invalid operation selected!");
}
}
  • Use the factory to create objects without knowing their specific classes.
public static void main(String[] args) {
Scanner scan = new Scanner(System.in);

try {

System.out.println("\n1. Addition(+)\n2. Subtraction(-)\n3. Multiplication(*)\n4. Division(/)");

// getting choice from user
System.out.println("\n\nSelect your operation (1-4): ");
int choice = scan.nextInt();

// getting 2 operands from user
System.out.println("Enter first operand: ");
double operand1 = scan.nextDouble();
System.out.println("Enter second operand: ");
double operand2 = scan.nextDouble();

// create operation instance based on user choice
OperationFactory operationFactory = new OperationFactoryImpl();
Operation operation = operationFactory.getInstance(choice);

// printing result
System.out.println("\nThis result is " + operation.calculate(operand1, operand2) + ".");
}
catch (InputMismatchException e) {
System.out.println("Invalid input type!\n");
}
catch (InvalidOperationException | ArithmeticException e) {
System.out.println(e.getMessage());
}

scan.close();
}
  • Here the Main class demonstrates the usage of the factory to create different operation objects without knowing their specific implementation classes (Loose coupling).
  • It only interacts with the factory interface. Not only that, but we can also easily add new types of operations without changing existing client code; we just need to create a new concrete product and update the factory, as the sketch below illustrates.
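
For instance, if we later want a modulo operation, only a new concrete product and one extra branch in the factory are needed; a rough sketch under the interfaces defined above (ModOperation and the menu choice 5 are assumptions for illustration):

// New concrete product: none of the existing operation classes change
public class ModOperation implements Operation {
    @Override
    public double calculate(double number1, double number2) {
        if (number2 == 0)
            throw new ArithmeticException("Cannot take modulo by zero!");
        return number1 % number2;
    }
}

// In OperationFactoryImpl.getInstance, a single extra branch is enough:
// else if (choice == 5)
//     return new ModOperation();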

3. Builder pattern

  • The Builder Pattern provides a way to construct an object by allowing you to set its various properties (or attributes) in a step-by-step manner.

  • Some of an object's parameters might be optional, but with a single constructor we are forced to pass all of them, sending NULL for the optional ones (see the sketch after this list). We can solve this issue by providing a constructor with only the required parameters and separate setter-style methods for the optional ones, which is exactly what a builder provides.

  • This pattern is particularly useful when dealing with objects that have many optional parameters or configurations.

  • Imagine we’re developing a user entity. Users have different properties like name, email, phone and city etc. Here name and email are required fields and phone and city are optional. Now, each person might have different combinations of these properties. Some might have city, others might not. Some might have phone, others might not. The Builder Design Pattern helps you create these users flexibly, step by step.
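
Before looking at the builder itself, here is a minimal sketch of the constructor-based approach the pattern avoids; the all-arguments constructor shown here is hypothetical and is not part of the User class defined below:

// Without a builder, a single all-arguments constructor forces every caller
// to supply something for every field, using null for the optional ones.
// (Hypothetical constructor, shown only to illustrate the problem.)
User user = new User("John", "john@abc.com", null, null); // phone and city not needed

The builder implementation below removes the need for these null placeholders.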

// Main product class
public class User {
private String name; // required field
private String email; // required field
private String phone; // optional field
private String city; // optional field

public User(UserBuilder userBuilder) {
this.name = userBuilder.getName();
this.email = userBuilder.getEmail();
this.phone = userBuilder.getPhone();
this.city = userBuilder.getCity();
}

public static UserBuilder builder(String name, String email) {
return new UserBuilder(name, email);
}

@Override
public String toString() {
return "User = " +
"{ name: '" + name + '\'' +
", email: '" + email + '\'' +
", phone: '" + phone + '\'' +
", city: '" + city + '\'' +
" }";
}

// builder class
public static class UserBuilder {
private String name; // required field
private String email; // required field
private String phone = "unknown"; // optional field
private String city = "unknown"; // optional field

public UserBuilder(String name, String email) {
this.name = name;
this.email = email;
}

// getters (getName(), getEmail(), getPhone(), getCity()) used by the User constructor are omitted for brevity

public UserBuilder name(String name) {
this.name = name;
return this;
}

public UserBuilder email(String email) {
this.email = email;
return this;
}

public UserBuilder phone(String phone) {
this.phone = phone;
return this;
}

public UserBuilder city(String city) {
this.city = city;
return this;
}

public User build() {
return new User(this);
}
}

}
  • UserBuilder class: The inner builder class responsible for constructing User objects. It holds fields for the user's properties (name, email, phone, city) and provides a setter-style method for each of them (name(), email(), phone(), city()), each returning the builder itself. This enables method chaining.
  • User class: The class that represents the product we want to build using the builder pattern. It has private fields for the user's properties (name, email, phone, city). Its constructor takes a UserBuilder object and initializes the fields based on the builder's settings, and the static builder() method returns a new UserBuilder instance, providing a convenient way to start building.
  • Here’s an example of how you can use this code to create a user with optional properties:
public class Main {
public static void main(String[] args) {

User user1 = User
.builder("John", "john@abc@gmail.com")
.build();

System.out.println(user1); // User = { name: 'John', email: 'john@abc.com', phone: 'unknown', city: 'unknown' }

User user2 = User
.builder("Mary", "mary@abc@gmail.com")
.city("Colombo")
.build();

System.out.println(user2); // User = { name: 'Mary', email: 'mary@abc.com', phone: 'unknown', city: 'Colombo' }
}

}
  • That's the Builder pattern in a nutshell. It is useful when you have complex objects with many optional parameters, and it helps keep your code clean and easy to understand. It allows you to construct different variations of objects with the same builder, adjusting parameters as needed.

4. Adapter pattern

  • The Adapter pattern is a structural design pattern that allows objects with incompatible interfaces to work together. It acts as a bridge between two incompatible interfaces.

  • Imagine a situation where two classes or components perform similar tasks but have different method names, parameter types, or structures. The Adapter pattern allows these incompatible interfaces to work together by providing a wrapper (the adapter) that translates the interface of one class into an interface that the client expects.

  • Target is the interface expected by the client.

  • Adaptee is the class that needs to be adapted.

  • Adapter is the class that implements the Target interface and wraps the Adaptee class.

  • Client class is the class that uses the adapter to interact with the Adaptee through the Target interface.

// Target interface
interface CellPhone {
void call();
}

// Adaptee (the class to be adapted)
class FriendCellPhone {
public void ring() {
System.out.println("Ringing");
}
}

// Adapter class implementing the Target interface
class CellPhoneAdapter implements CellPhone {
private FriendCellPhone friendCellPhone;

public CellPhoneAdapter(FriendCellPhone friendCellPhone) {
this.friendCellPhone = friendCellPhone;
}

@Override
public void call() {
friendCellPhone.ring();
}
}

// Client class
public class AdapterMain {
public static void main(String[] args) {
// Using the adapter to make Adaptee work with Target interface
FriendCellPhone adaptee = new FriendCellPhone();
CellPhone adapter = new CellPhoneAdapter(adaptee);
adapter.call();
}
}

In this example:

  • CellPhone is the target interface that your client code expects, and you do not have an implementation of it.
  • FriendCellPhone is the class you want to adapt and reuse (the Adaptee); it has a method named ring, and we want to reuse it rather than writing a new implementation of the CellPhone interface.
  • CellPhoneAdapter is the adapter class that implements the CellPhone interface and wraps an instance of FriendCellPhone. The call method in the adapter delegates the call to the ring method of the FriendCellPhone class.
  • AdapterMain class serves as the client that demonstrates the usage of the Adapter pattern in action.


Why adapter pattern?

  • The Adaptee might be a class from a third-party library or a legacy codebase that you can’t modify directly. By using an adapter, you can adapt its interface to match the interface expected by the client without modifying the original code.
  • The client might only require specific functionality from the Adaptee. By using an adapter, you can provide a tailored interface that exposes only the necessary functionality, rather than exposing the entire interface of the Adaptee.
  • While it might seem that you could achieve similar functionality by implementing the Target interface directly, using an adapter provides benefits in terms of code reusability, maintainability, and flexibility, especially when dealing with existing code or third-party libraries.

5. Decorator pattern

  • The Decorator Pattern is a design pattern in object-oriented programming that allows behavior to be added to individual objects, either statically or dynamically, without affecting the behavior of other objects from the same class.

  • In this pattern, there is a base class (or interface) that defines the common functionality, and one or more decorator classes that add additional behavior. These decorator classes wrap the original object, augmenting its behavior in a modular and flexible way.

  • Imagine, you are tasked with creating a drawing application that allows users to create and customize shapes with various decorations. It should be able to easily add new decorators for additional features without changing the existing code for shapes or other decorators.

  • Let’s see how we can achieve that using decorator pattern.

// Shape Interface
interface Shape {
void draw();
String getName();
}

// Concrete Shape: Circle
class Circle implements Shape {
private String name;

public Circle(String name) {
this.name = name;
}

public String getName() {
return name;
}

@Override
public void draw() {
System.out.println("Drawing circle, " + getName() + ".");
}
}
  • Shape Interface: Defines the basic operations that all shapes should support. In this case, it includes the draw() method to draw the shape and getName() to get the name of the shape.
  • Circle Class: Implements the Shape interface, representing a concrete shape (in this case, a circle). It has a name attribute and implements the draw() method to draw a circle.
// Abstract Decorator Class
abstract class ShapeDecorator implements Shape {
private Shape decoratedShape;

public ShapeDecorator(Shape decoratedShape) {
this.decoratedShape = decoratedShape;
}

@Override
public void draw() {
decoratedShape.draw();
}

@Override
public String getName() {
return decoratedShape.getName();
}
}
  • ShapeDecorator Abstract Class: An abstract class implementing the Shape interface. It contains a reference to a Shape object (the decorated shape) and delegates the draw() method to this object.
// Concrete Decorator: BorderDecorator
class BorderDecorator extends ShapeDecorator {
private String color;
private int widthInPxs;

public BorderDecorator(Shape decoratedShape, String color, int widthInPxs) {
super(decoratedShape);
this.color = color;
this.widthInPxs = widthInPxs;
}

@Override
public void draw() {
super.draw();
System.out.println("Adding " + widthInPxs + "px, " + color + " color border to " + getName() + ".");
}
}

// Concrete Decorator: ColorDecorator
class ColorDecorator extends ShapeDecorator {
private String color;

public ColorDecorator(Shape decoratedShape, String color) {
super(decoratedShape);
this.color = color;
}

@Override
public void draw() {
super.draw();
System.out.println("Filling with " + color + " color to " + getName() + ".");
}
}
  • BorderDecorator and ColorDecorator Classes: Concrete decorator classes that extend ShapeDecorator. They add additional features to the decorated shapes, such as borders and colors. They override the draw() method to add their specific functionality while also calling the draw() method of the decorated shape.
// Main Class
public class DecoratorMain {
public static void main(String[] args) {
// Create a circle
Shape circle1 = new Circle("circle1");

// Decorate the circle with a border
Shape circle1WithBorder = new BorderDecorator(circle1, "red", 2);

// Decorate the circle with a color
Shape circle1WithBorderAndColor = new ColorDecorator(circle1WithBorder, "blue");

// Draw the decorated circle
circle1WithBorderAndColor.draw();

// output
// Drawing circle, circle1.
// Adding 2px, red color border to circle1.
// Filling with blue color to circle1.
}
}
  • DecoratorMain Class: Contains the main() method where the decorator pattern is demonstrated. It creates a circle, decorates it with a border, and then further decorates it with a color. Finally, it calls the draw() method to visualize the decorated shape.
  • Now, with the implementation of the Decorator Pattern, our drawing application gains the remarkable ability to embellish not only circles but also a plethora of geometric shapes such as rectangles, triangles, and beyond. Moreover, the extensibility of this pattern enables us to seamlessly integrate additional decorators, offering features like transparency, diverse border styles (solid, dotted), and much more. This dynamic enhancement capability, achieved without altering the core structure of the shapes, underscores the pattern’s prowess in promoting code reusability, flexibility, and scalability.
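
As a rough illustration of that extensibility, a new decorator can be added without touching Shape, Circle, or the existing decorators; the TransparencyDecorator below is an assumed addition, not part of the original example:

// Concrete Decorator: TransparencyDecorator (hypothetical new feature)
class TransparencyDecorator extends ShapeDecorator {
    private int opacityPercent;

    public TransparencyDecorator(Shape decoratedShape, int opacityPercent) {
        super(decoratedShape);
        this.opacityPercent = opacityPercent;
    }

    @Override
    public void draw() {
        super.draw();
        System.out.println("Applying " + opacityPercent + "% opacity to " + getName() + ".");
    }
}

// Usage: new TransparencyDecorator(circle1WithBorderAndColor, 50).draw();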

6. Observer pattern

  • The Observer Pattern is a behavioral design pattern commonly used in object-oriented programming to establish a one-to-many dependency between objects. In this pattern, one object (called the subject or observable) maintains a list of its dependents (observers) and notifies them of any state changes, usually by calling one of their methods.

Here’s how it works:

  • Subject: This is the object that holds the state and manages the list of observers. It provides methods to attach, detach, and notify observers.

  • Observer: This is the interface that defines the method(s) that the subject calls to notify the observer of any state changes. Typically, observers implement this interface.

  • Concrete Subject: This is the concrete implementation of the subject interface. It maintains the state and sends notifications to observers when the state changes.

  • Concrete Observer: This is the concrete implementation of the observer interface. It registers itself with a subject to receive notifications and implements the update method to respond to state changes.

  • In the context of a YouTube channel subscriber scenario, the YouTube channel is the subject, and the subscribers are the observers. When an event happens in a YouTube channel, it notifies all its subscribers about the new video so they can watch it.

  • Let’s implement this example in code,

public enum EventType {
NEW_VIDEO,
LIVE_STREAM
}

public class YoutubeEvent {
private EventType eventType;
private String topic;

public YoutubeEvent(EventType eventType, String topic) {
this.eventType = eventType;
this.topic = topic;
}

// getters and setters

@Override
public String toString() {
return eventType.name() + " on " + topic;
}
}
  • EventType: The EventType enum defines the types of events that can occur, such as NEW_VIDEO, LIVE_STREAM, and more.
  • Event: The YoutubeEvent class represents the events that occur in the system. It contains information such as the type of event and the topic.
public interface Subject {

void addSubscriber(Observer observer);
void removeSubscriber(Observer observer);
void notifyAllSubscribers(YoutubeEvent event);

}

public interface Observer {
void notifyMe(String youtubeChannelName, YoutubeEvent event);
}

  • Subject: The Subject interface declares methods to manage subscribers (addSubscriber and removeSubscriber) and to notify them (notifyAllSubscribers) when an event occurs.
  • Observer: The Observer interface declares a method (notifyMe) that subjects call to notify observers of any change in state.
public class YoutubeChannel implements Subject{

private String name;
private List<Observer> subscribers = new ArrayList<>();

public YoutubeChannel(String name) {
this.name = name;
}

public String getName() {
return name;
}

@Override
public void addSubscriber(Observer observer) {
subscribers.add(observer);
}

@Override
public void removeSubscriber(Observer observer) {
subscribers.remove(observer);
}

@Override
public void notifyAllSubscribers(YoutubeEvent event) {
for(Observer observer: subscribers) {
observer.notifyMe(getName(), event);
}
}
}
  • Concrete Subject: The YoutubeChannel class implements the Subject interface. It maintains a list of subscribers and notifies them when a new event occurs.
public class YoutubeSubscriber implements Observer{
private String name;

public YoutubeSubscriber(String name) {
this.name = name;
}

public String getName() {
return name;
}

public void setName(String name) {
this.name = name;
}

@Override
public void notifyMe(String youtubeChannelName, YoutubeEvent event) {
System.out.println("Dear " + getName() + ", Notification from " + youtubeChannelName + ": " + event);
}
}
  • Concrete Observer: The YoutubeSubscriber class implements the Observer interface. It defines the behavior to be performed when notified by a subject.
public class ObserverMain {
public static void main(String[] args) throws InterruptedException {
YoutubeChannel myChannel = new YoutubeChannel("MyChannel");

Observer john = new YoutubeSubscriber("John");
Observer bob = new YoutubeSubscriber("Bob");
Observer tom = new YoutubeSubscriber("Tom");

myChannel.addSubscriber(john);
myChannel.addSubscriber(bob);
myChannel.addSubscriber(tom);

myChannel.notifyAllSubscribers(new YoutubeEvent(EventType.NEW_VIDEO, "Design patterns"));
myChannel.removeSubscriber(tom);
System.out.println();
Thread.sleep(5000);
myChannel.notifyAllSubscribers(new YoutubeEvent(EventType.LIVE_STREAM, "JAVA for beginners"));

}
}
  • Main Class: The ObserverMain class contains the main method where we test our implementation. It creates a YoutubeChannel instance, adds subscribers to it, notifies them of a new-video event, removes one of the subscribers, and then notifies the remaining subscribers of a live-stream event.
// output
Dear John, Notification from MyChannel: NEW_VIDEO on Design patterns
Dear Bob, Notification from MyChannel: NEW_VIDEO on Design patterns
Dear Tom, Notification from MyChannel: NEW_VIDEO on Design patterns

Dear John, Notification from MyChannel: LIVE_STREAM on JAVA for beginners
Dear Bob, Notification from MyChannel: LIVE_STREAM on JAVA for beginners
  • By using the Observer design pattern, the YouTube channel can easily notify all its subscribers whenever a new video is uploaded without tightly coupling the channel and its subscribers. This promotes a more flexible and maintainable design.

Conclusion

In conclusion, design patterns are indispensable tools for Java developers, offering proven solutions to recurring design problems and promoting code reusability, maintainability, and scalability. By understanding and implementing these patterns effectively, developers can craft robust, flexible, and easily maintainable software solutions. While mastering design patterns requires practice and experience, the benefits they bring to software development are invaluable. Whether you’re working on a small project or a large-scale enterprise application, leveraging design patterns empowers you to write cleaner, more efficient code and ultimately become a more proficient Java developer.

Mastering OOP concepts in JAVA

· 23 min read

Programming paradigms are approaches to writing code, each with its own principles, concepts and guidelines. These paradigms guide how developers structure and organize their programs, as well as how they think about problem-solving. Here are some common programming paradigms.

  • Imperative Programming: Imperative programming is based on the idea of giving the computer a sequence of instructions to perform. Eg., C, Assembly
  • Declarative Programming: Declarative programming emphasizes expressing what should be accomplished rather than how to achieve it. Eg., SQL
  • Functional Programming: Functional programming treats computation as the evaluation of mathematical functions and avoids changing state and mutable data. It emphasizes the use of pure functions, higher-order functions, and immutable data structures. Eg., Haskell, Lisp, and Clojure
  • Object-Oriented Programming (OOP): Object-oriented programming organizes code around objects, representing real-world entities or abstract concepts. Eg., Java, Python, C++
  • Procedural Programming: Procedural programming emphasizes the use of procedures (functions or routines) to structure code. It focuses on breaking down a problem into a set of procedures that perform specific tasks, with an emphasis on modularity and reusability. Eg., C, Fortran
  • Event-Driven Programming: Event-driven programming focuses on responding to events or user actions, such as mouse clicks or keyboard inputs. It typically involves event listeners or handlers that execute in response to specific events. GUI (Graphical User Interface) programming often follows this paradigm.

In this article, I am going to discuss one of the most important programming paradigms, Object-Oriented Programming (OOP), in Java.

As I mentioned earlier,

OOP is a programming paradigm or methodology used in software development whose fundamental building blocks are objects.

  • OOP promotes the organization of code into modular and reusable components, making it easier to manage and maintain complex software systems. OOP is widely used in software development for its ability to model real-world entities and their relationships effectively.

  • OOP is based on many key principles. Let's go through these principles one by one, with explanations and examples.

Object

  • In Object-Oriented Programming (OOP), objects are the fundamental building blocks and the key concept.
  • Objects represent real-world entities or concepts in the context of a software program.
  • Examples: Student, Book, Hospital, Cart and so on…

Class

  • An object can have 2 things to describe itself. They are properties and behaviors.
  • Imagine you’re talking about a cat. One might be sleek and black, while another is fluffy and brown. Each cat has its own unique combination of traits.
  • So, how do we capture the essence of a cat in a way that fits all these variations? Enter the concept of a ‘class.’
  • A class is like a blueprint for creating objects. It's a plan that defines the properties (like color, size, and breed) and behaviors (such as meowing, sleeping, and chasing mice) that all cats share.
  • Think of class as a template that provides a common idea of what a particular object is, allowing us to create individual instances of that object with their own distinct characteristics.
  • Classes in object-oriented programming (OOP) represent properties and behaviors of objects through attributes and methods, respectively.
  • Properties (Attributes): Properties are the characteristics or data associated with an object. In classes, properties are defined as variables. Each instance of a class (object) has its own set of properties. For example, in a class representing a "Student", properties could include "name", "age", "major", and "GPA".
  • Behaviors (Methods): Behaviors are the actions or operations that an object can perform. In classes, behaviors are defined as methods or functions. Methods operate on the data stored in the object’s properties. For example, in the “Student” class, methods could include “study”, “attend class”, “take exam”, and “submit assignment”.

Let’s see a simple example in Java to illustrate how a class represents properties and behaviors.

// Define a class representing a Student
public class Student {
// Properties (attributes)
private String name;
private int age;
private String major;

// Method to study
public void study() {
System.out.println(name + " is studying " + major + ".");
}

// Method to attend class
public void attendClass() {
System.out.println(name + " is attending class.");
}

// Method to take an exam
public void takeExam() {
System.out.println(name + " is taking an exam.");
}

// Method to submit assignment
public void submitAssignment() {
System.out.println(name + " is submitting an assignment.");
}

}
  • In this example, the “Student” class represents a student with properties (name, age, major) and behaviors (study, attendClass, takeExam, submitAssignment). Each instance of the “Student” class will have its own set of properties and can perform the defined behaviors.

  • Now that we’ve defined our class, let’s dive into how we can bring it to life by creating instances of it — essentially, the objects themselves. This process introduces us to a vital concept in OOP: the constructor.

Constructors

  • A constructor is a special method within a class responsible for initializing new objects.
  • Think of it as the gateway through which we breathe life into our class, providing initial values for its properties.
  • When we create a new instance of a class, we call upon its constructor to set up the object’s initial state.
  • Constructors have the same name as the class and do not have a return type, not even void.
  • To create a new student object, we use the ‘new’ keyword followed by the class name, along with any required arguments for the constructor.
  • There are different types of constructors:

Default Constructor:

  • If a class does not explicitly define any constructors, Java provides a default constructor with no arguments, which initializes the object's attributes to default values (e.g., numeric types to 0, object references to null). You can also define your own no-argument constructor, as below, to choose those default values yourself.
// Define a class representing a Student
public class Student {
// Properties (attributes)
private String name;
private int age;
private String major;

// Default Constructor
// (will discuss the `this` keyword later in this article)
public Student() {
this.name = "Unknown";
this.age = 0;
this.major = "Undeclared";
}

// Other methods...

}
// Creating a student object using the default constructor
Student john = new Student();

Parameterized Constructors:

  • Constructors can accept parameters to initialize the object with specific values. You can define multiple constructors with different parameter lists, allowing for constructor overloading.
public class Student {
private String name;
private int age;
private String major;

// Parameterized Constructor
public Student(String name, int age, String major) {
this.name = name;
this.age = age;
this.major = major;
}

// Other methods...
}
// Creating a student object using the parameterized constructor
Student alice = new Student("Alice", 20, "Computer Science");

Copy constructor:

  • A copy constructor creates a new object by copying the values of another object. It’s used to create a new object that is a copy of an existing one.
public class Student {
private String name;
private int age;
private String major;

// Copy Constructor
public Student(Student otherStudent) {
this.name = otherStudent.name;
this.age = otherStudent.age;
this.major = otherStudent.major;
}

// Other methods...
}

// Creating a student object using another student object (copy constructor)
Student bob = new Student(alice);

Now that we have created objects, how can we access the properties of a particular object? Let's check that out next.

Accessing attributes and methods of an object

  • To access the properties and methods of a particular object in Java, we can use dot notation (.) followed by the attribute or method name. (Note that direct field access like the example below compiles only when the fields are accessible from the calling code, e.g., not declared private; access modifiers and encapsulation are covered later.)
// Creating a student object using the parameterized constructor
Student alice = new Student("Alice", 20, "Computer Science");

// Accessing properties of the 'alice' object
String aliceName = alice.name;
int aliceAge = alice.age;
String aliceMajor = alice.major;

// modifying properties of the 'alice' object
alice.name = "Alice Mark";
alice.age = 22;
alice.major = "Software Engineering";

// Accessing methods of the 'alice' object
alice.study();
alice.attendClass();
alice.takeExam();
alice.submitAssignment();

'this' keyword in JAVA

  • The this keyword in Java is a reference to the current object within a method or constructor. It's a special reference that allows you to access the current object's properties, methods, or constructors from within its own class.
  • Accessing Instance Variables: You can use this to refer to instance variables of the current object when there's a naming conflict with method parameters or local variables.
public class Student {
private String name;

public void setName(String name) {
this.name = name; // changed the name of the object which called this method
}
}
  • Calling Another Constructor: In a constructor, you can use this() to call another constructor in the same class. This is often used to reduce code duplication and initialize common properties.
public class Student {
private String name;
private int age;
private String major;

// modified default constructor
public Student() {
this("Unknown", 0, "Undeclared"); // Calls the parameterized constructor with default values
}

public Student(String name, int age, String major) {
this.name = name;
this.age = age;
this.major = major;
}
}
  • Passing Current Object as a Parameter: You can use this to pass the current object as a parameter to other methods or constructors.
public class Student {
private String name;

public void printName() {
printFullName(this); // Passing the current object as a parameter
}

private void printFullName(Student student) {
System.out.println(student.name);
}
}

Access Modifiers

  • Access modifiers are keywords used to control the visibility or accessibility of classes, methods, and variables within a Java program.

  • They determine which parts of your code can be accessed or modified from other parts of your program, as well as from external code.

  • Java has four main access modifiers:

    1. public: The public access modifier makes the class, method, or variable accessible from any other class.

    2. protected: The protected access modifier allows access to the member within the same package or by subclasses (will discuss later about subclasses), even if they are in a different package. It restricts access to classes outside the package unless they are subclasses of the class containing the protected member.

    3. default (no modifier): If no access modifier is specified, the default access level is package-private. Members with default access are accessible only within the same package. They cannot be accessed from outside the package, even by subclasses.

    4. private: The private access modifier restricts access to the member only within the same class. It is the most restrictive access level and prevents access from outside the class, including subclasses.

Let's demonstrate the use of access modifiers with the Student class we used earlier.

public class Student {
// Public access modifier
public String name;

// Protected access modifier
protected int age;

// Default (package-private) access modifier
String major;

// Private access modifier
private double gpa;

// Constructor
public Student(String name, int age, String major, double gpa) {
this.name = name;
this.age = age;
this.major = major;
this.gpa = gpa;
}

// Public method
public void displayInfo() {
System.out.println("Name: " + name);
System.out.println("Age: " + age);
System.out.println("Major: " + major);
System.out.println("GPA: " + gpa);
}

// Protected method
protected void study() {
System.out.println(name + " is studying.");
}

// Default method (package-private)
void attendClass() {
System.out.println(name + " is attending class.");
}

// Private method
private void calculateGPA() {
// GPA calculation logic
}
}
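
To see how these modifiers behave from outside the class, here is a minimal sketch of a caller, assuming Student lives in a package called school and the caller in a different package called app (both package names are assumptions); the comments mark which accesses compile:

// package app;
// import school.Student;

public class AccessDemo {
    public static void main(String[] args) {
        Student s = new Student("Alice", 20, "CS", 3.5);

        System.out.println(s.name);     // compiles: public is visible everywhere
        // System.out.println(s.age);   // error: protected, and AccessDemo is not a subclass of Student
        // System.out.println(s.major); // error: default (package-private) access, different package
        // System.out.println(s.gpa);   // error: private is visible only inside Student

        s.displayInfo();                // compiles: public method
        // s.study();                   // error: protected method, different package, not a subclass
    }
}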

Encapsulation

  • Encapsulation refers to the bundling of data (attributes) and methods (functions) that operate on an object into a single unit, often referred to as a class.
  • Encapsulation hides the internal state and implementation details of an object from the outside world, providing controlled access to the object’s properties and behaviors.
  • The primary goal of encapsulation is to restrict access to some of the object’s components, while exposing only what is necessary and safe for the outside world.
  • In Java, encapsulation is achieved using access modifiers (public, private, protected, and default) to control the visibility and accessibility of class members (attributes and methods).
  • key principles and practices related to encapsulation in Java:
    1. Declare the attributes (fields) of a class as private.
    2. Provide public methods (getters and setters) to access and manipulate the private fields. Getters allow read-only access, and setters allow modification, ensuring controlled access to the data.
    3. Let's achieve encapsulation in the previously discussed Student class by declaring its attributes as private and providing getters and setters.
public class Student {
// Private data members (attributes)
// hence cannot be accessed or modified directly using the dot (.) operator
private String name;
private int age;
private String major;
private double gpa;

// Constructor
public Student(String name, int age, String major) {
this.name = name;
this.age = age;
this.major = major;
}

// Overloaded constructor that also sets the GPA
public Student(String name, int age, String major, double gpa) {
this(name, age, major);
this.gpa = gpa;
}

// Getter methods (accessors)
public String getName() {
return name;
}

public int getAge() {
return age;
}

public String getMajor() {
return major;
}

public double getGpa() {
return gpa;
}

// Setter methods (mutators)
public void setName(String name) {
this.name = name;
}

public void setAge(int age) {
this.age = age;
}

public void setMajor(String major) {
this.major = major;
}

public void setGpa(double gpa) {
this.gpa = gpa;
}

// Display student information
public void displayInfo() {
System.out.println("Name: " + name);
System.out.println("Age: " + age);
System.out.println("Major: " + major);
System.out.println("GPA: " + gpa);
}

}
  • The Student class encapsulates its data members (name, age, major, gpa) by declaring them as private.
  • Getter methods (getName(), getAge(), getMajor(), getGpa()) provide controlled access to retrieve the values of the private attributes.
  • Setter methods (setName(), setAge(), setMajor(), setGpa()) allow controlled modification of the private attributes, ensuring data integrity.
public class Main {
public static void main(String[] args) {
// Creating a new student object
Student alice = new Student("Alice", 20, "Computer Science", 3.5);

// Displaying Alice's information using getter methods
System.out.println("Student Information:");
System.out.println("Name: " + alice.getName());
System.out.println("Age: " + alice.getAge());
System.out.println("Major: " + alice.getMajor());
System.out.println("GPA: " + alice.getGpa());

// Updating Alice's information using setter methods
alice.setAge(21);
alice.setGpa(3.7);

// Displaying Alice's updated information
System.out.println("\nUpdated Student Information:");
alice.displayInfo();
}
}
  • Encapsulation is a key principle in Java and other object-oriented languages that promotes data integrity, code maintainability, and code security by controlling access to the internal state of objects. It is an essential practice for creating well-structured and robust Java programs.

Inheritance

  • Inheritance is a key concept in object-oriented programming (OOP) that allows a new class (called a subclass or derived class) to inherit attributes and methods from an existing class (called a superclass or base class).
  • The subclass can then extend or modify the behavior of the superclass while also inheriting its properties.
  • Superclass (Base Class): The class whose members (attributes and methods) are inherited by another class is known as the superclass or base class.
  • Subclass (Derived Class): The class that inherits the members from a superclass is called the subclass or derived class. A subclass can have its own additional members and can also override or extend the members inherited from the superclass.
  • IS-A Relationship: Inheritance establishes an “is-a” relationship between the subclass and the superclass, indicating that the subclass is a specialized version of the superclass. For example, if Dog is a subclass of Animal, then it can be said that "a dog is an animal."
  • “extends” Keyword: In Java, you specify inheritance using the “extends” keyword when defining a class. A subclass is created as a specialization of the superclass.
  • “super” keyword: In Java, the super keyword is used to refer to the superclass (parent class) of the current subclass (child class). It allows you to access and call members (attributes or methods) of the superclass, as well as explicitly call the superclass’s constructor.

Let’s create a subclass of the Student class called UndergraduateStudent.

public class UndergraduateStudent extends Student {
// Additional attributes specific to undergraduate students
private int yearLevel;

// Constructor for UndergraduateStudent
public UndergraduateStudent(String name, int age, String major, int yearLevel) {
// Call the constructor of the superclass (Student)
super(name, age, major);
this.yearLevel = yearLevel;
}

// Getter method for yearLevel
public int getYearLevel() {
return yearLevel;
}

// Setter method for yearLevel
public void setYearLevel(int yearLevel) {
this.yearLevel = yearLevel;
}

// Method specific to undergraduate students
public void enrollCourse(String courseName) {
System.out.println(getName() + " is enrolled in " + courseName);
}
}

let’s create another subclass of the Student class called GraduateStudent.

public class GraduateStudent extends Student {
// Additional attributes specific to graduate students
private String advisor;
private String researchTopic;

// Constructor for GraduateStudent
public GraduateStudent(String name, int age, String major, String advisor, String researchTopic) {
// Call the constructor of the superclass (Student)
super(name, age, major);
this.advisor = advisor;
this.researchTopic = researchTopic;
}

// Getter method for advisor
public String getAdvisor() {
return advisor;
}

// Setter method for advisor
public void setAdvisor(String advisor) {
this.advisor = advisor;
}

// Getter method for researchTopic
public String getResearchTopic() {
return researchTopic;
}

// Setter method for researchTopic
public void setResearchTopic(String researchTopic) {
this.researchTopic = researchTopic;
}

// Method specific to graduate students
public void conductResearch() {
System.out.println(getName() + " is conducting research on " + researchTopic);
}
}

Example for accessing properties of both the superclass (Student) and the subclass (GraduateStudent).

public class Main {
public static void main(String[] args) {
// Creating a GraduateStudent object
GraduateStudent gradStudent = new GraduateStudent("John", 25, "Computer Science", "Dr. Smith", "Machine Learning");

// Accessing properties of the superclass (Student)
System.out.println("Student Information:");
System.out.println("Name: " + gradStudent.getName());
System.out.println("Age: " + gradStudent.getAge());
System.out.println("Major: " + gradStudent.getMajor());

// Accessing properties of the subclass (GraduateStudent)
System.out.println("\nGraduate Student Information:");
System.out.println("Advisor: " + gradStudent.getAdvisor());
System.out.println("Research Topic: " + gradStudent.getResearchTopic());

// Calling methods of the superclass
System.out.println("\nDisplaying student information:");
gradStudent.displayInfo(); // Calling superclass method

// Calling methods of the subclass
System.out.println("\nConducting research:");
gradStudent.conductResearch(); // Calling subclass method
}
}

Types of inheritance

  1. Single Inheritance
  • In single inheritance, a subclass inherits from only one superclass.
  • Java supports single inheritance, where a class can have only one direct superclass.
  • Example: Class Dog inherits from class Animal.
  2. Multilevel Inheritance
  • In multilevel inheritance, a subclass inherits from another subclass, forming a chain of inheritance.
  • Each subclass in the chain inherits properties and behaviors from its immediate superclass.
  • Example: Class GrandChild inherits from class Child, which in turn inherits from class Parent.
  3. Hierarchical Inheritance
  • In hierarchical inheritance, multiple subclasses inherit from a single superclass.
  • Each subclass shares common properties and behaviors inherited from the same superclass.
  • Example: Classes Cat, Dog, and Rabbit all inherit from class Animal (see the sketch after this list).
  4. Multiple Inheritance (Not Supported in Java)
  • Multiple inheritance allows a subclass to inherit from multiple superclasses.
  • While Java doesn't support multiple inheritance of classes, it supports multiple inheritance of interfaces through interface implementation.
  • Example: Class Student inherits from both class Person and class Scholar.
  5. Hybrid Inheritance (Not Supported in Java)
  • Hybrid inheritance is a combination of two or more types of inheritance.
  • It can involve single, multilevel, and hierarchical inheritance, along with multiple inheritance if supported by the programming language.
  • Java doesn't directly support hybrid inheritance due to the absence of multiple inheritance of classes.
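
A minimal sketch of the three types Java supports directly, using the example class names from the list above (the empty class bodies are placeholders):

class Animal { }

// Single inheritance: Dog has exactly one direct superclass
class Dog extends Animal { }

// Hierarchical inheritance: several subclasses share the same superclass
class Cat extends Animal { }
class Rabbit extends Animal { }

// Multilevel inheritance: Parent -> Child -> GrandChild forms a chain
class Parent { }
class Child extends Parent { }
class GrandChild extends Child { }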

Polymorphism

  • The word "poly" means many and "morphs" means forms, so polymorphism means many forms. We can define Java polymorphism as the ability of a message to be displayed in more than one form; it allows us to perform a single action in different ways.
  • In Java, polymorphism is primarily achieved through method overriding and method overloading.

Compile-Time Polymorphism (Static Binding or Early Binding) — Method overloading.

  • Compile-time polymorphism occurs when the compiler determines which method to execute at compile time, based on the method signature (method overloading). Method overloading allows multiple methods with the same name but different parameter lists within the same class.
public class Box {

// Overload 1: volume of a cube from a single side length
public double calculateVolume(double sideLength) {
return sideLength * sideLength * sideLength;
}

// Overload 2: same method name, different parameter list
public double calculateVolume(double length, double width, double height) {
return length * width * height;
}
}
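
A short usage sketch: the compiler picks the overload purely from the argument list at compile time.

Box box = new Box();
System.out.println(box.calculateVolume(2.0));           // one-argument overload: 8.0
System.out.println(box.calculateVolume(2.0, 3.0, 4.0)); // three-argument overload: 24.0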

Run-Time Polymorphism (Dynamic Binding or Late Binding) — Method overriding.

  • Run-time polymorphism occurs when the JVM determines which method or operation to execute at runtime based on the actual object type (method overriding). Method overriding allows a subclass to provide a specific implementation of a method that is already defined in its superclass.
class Shape {
// Method to calculate the area of a generic shape
public double calculateArea() {
return 0; // Default implementation for a generic shape
}
}

class Circle extends Shape {
private double radius;

public Circle(double radius) {
this.radius = radius;
}

@Override
public double calculateArea() {
return Math.PI * radius * radius; // Override to calculate the area of a circle
}
}

class Triangle extends Shape {
private double base;
private double height;

public Triangle(double base, double height) {
this.base = base;
this.height = height;
}

@Override
public double calculateArea() {
return 0.5 * base * height; // Override to calculate the area of a triangle
}
}
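
A short sketch of dynamic dispatch with these classes: the declared type is Shape, but the JVM picks the override based on the actual object at runtime (the class name PolymorphismDemo is only for illustration).

public class PolymorphismDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Triangle(3.0, 4.0) };
        for (Shape shape : shapes) {
            // Calls Circle.calculateArea() or Triangle.calculateArea(), depending on the object
            System.out.println(shape.calculateArea()); // ~3.14159, then 6.0
        }
    }
}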

Abstraction

  • Abstraction involves hiding the unnecessary details while exposing only what is relevant and important.
  • In Java, abstraction is primarily achieved through abstract classes and interfaces.

Abstract classes

  • An abstract class is a class that cannot be instantiated on its own and is meant to be extended by other classes.
  • It may contain abstract methods (methods without a body) that are meant to be implemented by its subclasses.
  • Abstract classes can also have concrete (implemented) methods.
  • Abstract methods are declared in abstract classes and are meant to be implemented by concrete (non-abstract) subclasses. These methods define a contract that must be fulfilled by the subclasses.
abstract class Shape {
// Abstract method with no body (must be implemented by concrete subclasses)
public abstract double calculateArea();

// Concrete method with an implementation
void setColor(String color) {
System.out.println("Setting color to " + color);
}
}

class Circle extends Shape {
private double radius;

public Circle(double radius) {
this.radius = radius;
}

@Override
public double calculateArea() {
return Math.PI * radius * radius; // Override to calculate the area of a circle
}
}

class Triangle extends Shape {
private double base;
private double height;

public Triangle(double base, double height) {
this.base = base;
this.height = height;
}

@Override
public double calculateArea() {
return 0.5 * base * height; // Override to calculate the area of a triangle
}
}
public class Main {
public static void main(String[] args) {
// Trying to create a Shape object directly fails:
// Shape shape = new Shape(); // Compilation error: cannot instantiate the abstract class Shape

// Create a Circle object
Shape circle = new Circle(5.0);

// Calculate and print the area of the circle
double circleArea = circle.calculateArea();
System.out.println("Area of the circle: " + circleArea);

// Set the color of the circle
circle.setColor("Red");
}
}

Interfaces

  • An interface is a pure form (100%) of abstraction in Java.
  • It defines a contract by specifying a set of abstract methods that implementing classes must provide.
  • Classes can implement multiple interfaces, allowing for a high level of abstraction and flexibility in code design.
  • Before Java 8, interfaces could not contain concrete (implemented) methods; since Java 8, they may also declare default and static methods with implementations, but their primary role is still to define abstract behavior.
  • To implement an interface, a class uses the implements keyword.
  • In Java, you cannot directly create an object of an interface because interfaces cannot be instantiated.
// Shape interface
public interface Shape {
// Abstract method to calculate the area
double calculateArea();

}

// Circle class implementing the Shape interface
public class Circle implements Shape {
private double radius;

public Circle(double radius) {
this.radius = radius;
}

@Override
public double calculateArea() {
return Math.PI * radius * radius;
}
}

// Triangle class implementing the Shape interface
public class Triangle implements Shape {
private double base;
private double height;

public Triangle(double base, double height) {
this.base = base;
this.height = height;
}

@Override
public double calculateArea() {
return 0.5 * base * height;
}
}
public class Main {
public static void main(String[] args) {
// Create a Circle object
Shape circle = new Circle(5.0);

// Calculate and print the area of the circle
double circleArea = circle.calculateArea();
System.out.println("Area of the circle: " + circleArea);

// Create a Triangle object
Shape triangle = new Triangle(4.0, 3.0);

// Calculate and print the area of the triangle
double triangleArea = triangle.calculateArea();
System.out.println("Area of the triangle: " + triangleArea);
}
}
  • Abstract classes and Interfaces are used to define a generic template for other classes to follow. They define a set of rules and guidelines that their subclasses must follow. By providing an abstract class, we can ensure that the classes that extend it have a consistent structure and behavior. This makes the code more organized and easier to maintain.

Up casting and Down casting in Abstraction

  • Upcasting is casting a child object reference to a parent type. Upcasting can be done implicitly, and through such a reference we can only access the methods and fields defined in the parent class or interface.
  • Downcasting is casting a parent reference back to a child type. Downcasting cannot be implicit; it requires an explicit cast. After downcasting, we have direct access to all methods and fields defined in the subclass, in addition to anything inherited from its superclass or implemented interfaces.
// Shape interface
public interface Shape {
// Abstract method to calculate the area
double calculateArea();

}

// Circle class implementing the Shape interface
public class Circle implements Shape {
private double radius;

public Circle(double radius) {
this.radius = radius;
}

// getter method for radius
public double getRadius() {
return this.radius;
}

@Override
public double calculateArea() {
return Math.PI * radius * radius;
}
}

public class Main {
public static void main(String[] args) {
// Implicit Upcasting
Shape circle = new Circle(5.0);

// We have access to calculate area, since it is overridden from the Shape interface (Parent)
double circleArea = circle.calculateArea();
System.out.println("Area of the circle: " + circleArea);

// Compilation error, because getRadius() is not part of the Shape interface (parent)
// System.out.println(circle.getRadius());

// Implicit downcasting: compilation error (incompatible types)
// Circle circle2 = circle;

// Explicit downcasting
Circle circle2 = (Circle) circle;

// We have access, because the reference variable circle2 is of the subclass type, which declares this method
System.out.println(circle2.getRadius());

}
}
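
Because an explicit downcast fails at runtime with a ClassCastException when the object is not actually a Circle, it is common to guard the cast with instanceof; a minimal sketch using the circle reference from the example above:

if (circle instanceof Circle) {
    Circle c = (Circle) circle;   // safe: the runtime type has been checked first
    System.out.println(c.getRadius());
} else {
    System.out.println("Not a Circle, skipping the cast.");
}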

Conclusion

  • In conclusion, Object-Oriented Programming (OOP) is a powerful paradigm that promotes code organization, reusability, and maintainability by modeling real-world entities as objects with properties and behaviors. Throughout this article, we’ve explored the four main concepts of OOP: encapsulation, inheritance, polymorphism, and abstraction, and how they can be applied in various scenarios to improve software design and development.

  • Encapsulation allows us to hide the internal details of an object and expose only the necessary functionalities through well-defined interfaces, enhancing security and modularity.

  • Inheritance facilitates code reuse by allowing classes to inherit properties and behaviors from parent classes, promoting a hierarchical structure and facilitating the creation of more specialized subclasses.

  • Polymorphism allows us to perform a single action in different ways.

  • Abstraction simplifies complex systems by focusing on essential features and hiding implementation details, promoting clarity and maintainability.

Mastering SOLID principles in Java

· 10 min read

SOLID principles are one of the object-oriented approaches used in software development, intended to create quality software. The broad goal of the SOLID principles is to reduce dependencies, so that developers can change one area of the software without affecting others. Furthermore, they are intended to make designs easier to understand, maintain, reuse, and extend.

1. Single responsibility principle (SRP)

  • SRP states that a class should have only one reason to change, meaning it should have a single responsibility.
  • This principle encourages you to create classes that do one thing and do it well.
  • Lots of responsibilities make the class highly coupled, harder to maintain and harder to understand.
  • For example, consider the BankAccount class below:
public class BankAccount {
    private double balance;
    private String accountNo;
    private String accountType;

    //constructor
    public BankAccount(double balance, String accountNo, String accountType) {
        this.balance = balance;
        this.accountNo = accountNo;
        this.accountType = accountType;
    }

    public void deposit() {
        //code to deposit amount
    }

    public void withdraw(double amount) {
        //code to withdraw amount
    }

    public double calculateInterest() {
        //code to calculate interest
    }

    public void saveBankAccountDetails() {
        //save account information to database
    }

    public void sendSmsNotification() {
        //code to send SMS notification to customer
    }
}
  • In the context of the ‘BankAccount’ class, managing deposits, withdrawals, and interest are responsibilities that reasonably belong to account management. But the saveBankAccountDetails and sendSmsNotification methods are not related to account-management behavior, so this class violates SRP. The easiest way to fix the problem is to create separate classes for managing bank accounts, saving information to the database, and sending SMS notifications, so that each class has only one responsibility.
// BankAccount class will handle account related responsibilities
public class BankAccount {
    private double balance;
    private String accountNo;
    private String accountType;
    private SQLBankAccountRepository sqlBankAccountRepository;
    private NotificationService notificationService;

    //constructor
    public BankAccount(double balance, String accountNo, String accountType, SQLBankAccountRepository sqlBankAccountRepository, NotificationService notificationService) {
        this.balance = balance;
        this.accountNo = accountNo;
        this.accountType = accountType;
        this.sqlBankAccountRepository = sqlBankAccountRepository;
        this.notificationService = notificationService;
    }

    public void deposit() {
        //code to deposit amount
    }

    public void withdraw(double amount) {
        //code to withdraw amount
    }

    public double calculateInterest() {
        //code to calculate interest
    }
}

// SQLBankAccountRepository class will handle database related responsibilities
public class SQLBankAccountRepository {
    public void saveBankAccountDetails(BankAccount bankAccount) {
        //save account information to database
    }
}

// NotificationService class will handle notification related responsibilities
public class NotificationService {
    public void sendSmsNotification(BankAccount bankAccount) {
        //code to send SMS notification to customer
    }
}
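A brief usage sketch of the refactored classes follows; the 1000.0 balance, "ACC-001" account number, and "Savings" type are made-up values for illustration.

// Each collaborator keeps a single responsibility and is composed into the account
SQLBankAccountRepository repository = new SQLBankAccountRepository();
NotificationService notifications = new NotificationService();

BankAccount account = new BankAccount(1000.0, "ACC-001", "Savings", repository, notifications);
account.deposit();                           // account-related behavior stays in BankAccount
repository.saveBankAccountDetails(account);  // persistence stays in the repository
notifications.sendSmsNotification(account);  // messaging stays in the notification service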

2. Open closed principle (OCP)

  • OCP states that software entities (such as classes, modules, and functions) should be open for extension but closed for modification.
  • In other words, you should be able to add new functionality or behavior to a system without altering existing code.
  • Adding a new feature by modifying existing code can introduce new bugs, hurt readability, and make the code harder to maintain.
  • For example, consider the calculateInterest method of the BankAccount class.
public class BankAccount {
    private double balance;
    private String accountNo;
    private String accountType;
    private SQLBankAccountRepository sqlBankAccountRepository;
    private NotificationService notificationService;

    //constructor
    public BankAccount(double balance, String accountNo, String accountType, SQLBankAccountRepository sqlBankAccountRepository, NotificationService notificationService) {
        this.balance = balance;
        this.accountNo = accountNo;
        this.accountType = accountType;
        this.sqlBankAccountRepository = sqlBankAccountRepository;
        this.notificationService = notificationService;
    }

    public void deposit() {
        //code to deposit amount
    }

    public void withdraw(double amount) {
        //code to withdraw amount
    }

    public double calculateInterest() {
        if (this.accountType.equals("Savings"))
            return this.balance * 0.03;
        else if (this.accountType.equals("Checking"))
            return this.balance * 0.01;
        else if (this.accountType.equals("FixedDeposit"))
            return this.balance * 0.05;
        else
            return 0; // unknown account type
    }
}
  • There is a problem with the calculateInterest method: if a new account type with a new interest requirement is introduced, we have to add another if condition to calculateInterest. That violates OCP. The easiest way to fix the problem is to create a common interface for all account types and implement it for each account type.
public interface BankAccount {
    public void deposit();
    public void withdraw(double amount);
    public double calculateInterest();
}

public class SavingsBankAccount implements BankAccount {
    // attributes and constructor
    // deposit and withdraw method declarations

    @Override
    public double calculateInterest() {
        return this.balance * 0.03;
    }
}

public class CheckingBankAccount implements BankAccount {
    // attributes and constructor
    // deposit and withdraw method declarations

    @Override
    public double calculateInterest() {
        return this.balance * 0.01;
    }
}

public class FixedDepositBankAccount implements BankAccount {
    // attributes and constructor
    // deposit and withdraw method declarations

    @Override
    public double calculateInterest() {
        return this.balance * 0.05;
    }
}
  • With the new implementation, we can calculate interest for any account type through the BankAccount interface without modifying existing logic. The design is open for extension (new account classes can be added) but closed for modification (the existing calculateInterest implementations remain untouched).
BankAccount savingsBankAccount = new SavingsBankAccount();
double savingsBankAccountInterest = savingsBankAccount.calculateInterest();

BankAccount checkingBankAccount = new CheckingBankAccount();
double checkingBankAccountInterest = checkingBankAccount.calculateInterest();

BankAccount fixedDepositBankAccount = new FixedDepositBankAccount();
double fixedDepositBankAccountInterest = fixedDepositBankAccount.calculateInterest();
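To see the "open for extension" side in practice, a new account type can be added purely by creating a new implementation. The StudentBankAccount name and its 2% rate below are assumptions for illustration, following the same sketch style as the classes above.

// Hypothetical new account type: added by extension, no existing class is modified
public class StudentBankAccount implements BankAccount {
    // attributes and constructor
    // deposit and withdraw method declarations

    @Override
    public double calculateInterest() {
        return this.balance * 0.02; // assumed rate for illustration
    }
}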

3. Liskov Substitution Principle (LSP)

  • LSP states that objects of a derived class should be able to replace objects of the base class without affecting the correctness of the program.
  • In other words, if a class is a subclass of another class, it should be able to substitute its parent class without causing problems.
  • This principle ensures that inheritance relationships are well-designed and that the derived class adheres to the contract of the base class.
  • For example, assume we have a superclass A and three subclasses B, C, and D.
A obj1 = new B();

A obj2 = new C();

A obj3 = new D();
  • For LSP to hold, all three of the above statements should run without breaking the program flow.

  • Let’s take another example:

class Bird {
    public void eat() {
        System.out.println("This bird can eat.");
    }

    public void fly() {
        System.out.println("This bird can fly.");
    }
}

class Parrot extends Bird {
}

class Penguin extends Bird {
    @Override
    public void fly() {
        throw new UnsupportedOperationException("Penguins cannot fly");
    }
}
  • The Penguin class overrides the fly() method from the base class, but the behavior is fundamentally different from what’s expected by the base class. This is an LSP violation because when we try to substitute an instance of Penguin for a generic Bird, it will not behave as a typical bird in terms of flying. This could lead to unexpected behavior in the code.

  • To resolve this LSP violation, you should restructure the class hierarchy and ensure that derived classes conform to the contract of the base class. One way to fix the issue is to move behavior that does not fit the base class’s contract into a more specific type, or to handle it through composition or interfaces.

class Bird {
    public void eat() {
        System.out.println("This bird can eat.");
    }
}

class FlyingBird extends Bird {
    public void fly() {
        System.out.println("This bird can fly.");
    }
}

class Parrot extends FlyingBird {
}

class Penguin extends Bird {
}
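With this hierarchy, code that needs flight can depend on FlyingBird, and the compiler rejects a Penguin wherever a flying bird is required. The BirdDemo class and letItFly method below are illustrative names, not part of the original example.

public class BirdDemo {
    // Accepts only birds that are guaranteed to fly
    static void letItFly(FlyingBird bird) {
        bird.fly();
    }

    public static void main(String[] args) {
        letItFly(new Parrot());       // works: a Parrot is a FlyingBird
        // letItFly(new Penguin());   // compilation error: Penguin is not a FlyingBird
    }
}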

4. Interface segregation principle (ISP)

  • ISP states that clients should not be forced to depend on interfaces they do not use. This principle encourages you to create specific, fine-grained interfaces rather than large, monolithic ones, so clients are not forced to implement methods they don’t need.
  • For example, consider the withdraw method of a LoanBankAccount class that implements the previously discussed BankAccount interface.
public interface BankAccount {
    public void deposit();
    public void withdraw(double amount);
    public double calculateInterest();
}

public class SavingsBankAccount implements BankAccount {
    // attributes and constructor
    // deposit and calculateInterest method declarations

    public void withdraw(double amount) {
        if (this.balance >= amount)
            this.balance -= amount;
    }
}

public class CheckingBankAccount implements BankAccount {
    // attributes and constructor
    // deposit and calculateInterest method declarations

    public void withdraw(double amount) {
        if (this.balance >= amount)
            this.balance -= amount;
    }
}

public class LoanBankAccount implements BankAccount {
    // attributes and constructor
    // deposit and calculateInterest method declarations

    public void withdraw(double amount) {
        //empty method – cannot withdraw from loan accounts
    }
}
  • Here, the withdraw method in the SavingsBankAccount and CheckingBankAccount classes works fine. But LoanBankAccount is left with an empty withdraw method, because withdrawing from a loan account is not allowed. Implementation classes should only have to provide the methods they actually need; we should not force clients to depend on methods they do not use. That is why the principle says larger interfaces should be split into smaller ones.
public interface BankAccount {
    public void deposit();
    public double calculateInterest();
}

public interface Withdrawable {
    public void withdraw(double amount);
}

public class SavingsBankAccount implements BankAccount, Withdrawable {
    // deposit, calculateInterest, withdraw method definitions
}

public class CheckingBankAccount implements BankAccount, Withdrawable {
    // deposit, calculateInterest, withdraw method definitions
}

public class LoanBankAccount implements BankAccount {
    // deposit, calculateInterest method definitions
}
  • Here, we created the BankAccount interface for deposit and calculateInterest, and the Withdrawable interface for withdraw, so that each implementation class implements only the interfaces it needs, as the client sketch below shows.
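For instance, a client that only dispenses cash can depend on Withdrawable alone and never sees deposit or calculateInterest. The Atm class below is a hypothetical example, not part of the original design.

// Hypothetical client that needs withdrawals only, so it depends on the Withdrawable interface alone
public class Atm {
    public void dispenseCash(Withdrawable account, double amount) {
        account.withdraw(amount);
    }
}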

5. Dependency inversion principle (DIP)

  • The principle tells us to depend on abstractions (abstract classes and interfaces) instead of concrete implementations. More specifically, DIP states that:

  • High-level modules should not depend on low-level modules. Both should depend on abstractions.

  • Abstractions should not depend on details. Details should depend on abstractions.

  • In simpler terms, the principle encourages you to rely on interfaces or abstract classes to decouple your code and make it easier to extend, maintain, and test.

  • Let’s understand the principle through an example.

class PDFReportGenerator {
    public void generatePDFReport() {
        // PDF generation logic
    }
}

class HTMLReportGenerator {
    public void generateHTMLReport() {
        // HTML generation logic
    }
}

class ReportService {
    private PDFReportGenerator pdfGenerator;
    private HTMLReportGenerator htmlGenerator;

    public ReportService() {
        pdfGenerator = new PDFReportGenerator();
        htmlGenerator = new HTMLReportGenerator();
    }

    public void generatePDFReport() {
        pdfGenerator.generatePDFReport();
    }

    public void generateHTMLReport() {
        htmlGenerator.generateHTMLReport();
    }
}
  • In the above code, ReportService directly depends on concrete implementations of report generators (PDFReportGenerator and HTMLReportGenerator). This leads to high coupling between the high-level and low-level modules. To adhere to the Dependency Inversion Principle, you should introduce abstractions (interfaces or abstract classes) and rely on those abstractions instead.
interface ReportGenerator {
    void generateReport();
}

class PDFReportGenerator implements ReportGenerator {
    public void generateReport() {
        // PDF generation logic
    }
}

class HTMLReportGenerator implements ReportGenerator {
    public void generateReport() {
        // HTML generation logic
    }
}

class ReportService {
    private ReportGenerator reportGenerator;

    public ReportService(ReportGenerator generator) {
        this.reportGenerator = generator;
    }

    public void generateReport() {
        reportGenerator.generateReport();
    }
}
  • In this updated code, we introduced the ReportGenerator interface, and the ReportService now depends on this abstraction rather than concrete implementations. This decouples the high-level module from low-level modules, and you can easily swap out different report generators without modifying the ReportService class.
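As a short usage sketch, the concrete generator is chosen by the caller and injected through the constructor, so switching report formats never touches ReportService.

// Inject whichever generator is needed; ReportService only knows the ReportGenerator abstraction
ReportService pdfService = new ReportService(new PDFReportGenerator());
pdfService.generateReport();

ReportService htmlService = new ReportService(new HTMLReportGenerator());
htmlService.generateReport();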

Conclusion

  • In conclusion, the SOLID principles are a set of fundamental guidelines for designing clean, maintainable, and extensible object-oriented software. Each principle addresses a specific aspect of software design and aims to promote good design practices and robust code.

  • Single Responsibility Principle (SRP): You ensure that each class or module has only one reason to change, making your code more focused and easier to understand and maintain.

  • Open-Closed Principle (OCP): You design software components that are open for extension but closed for modification, allowing you to add new features or behaviors without changing existing code.

  • Liskov Substitution Principle (LSP): You create inheritance hierarchies where derived classes can seamlessly replace their base classes, guaranteeing that code that depends on the base class continues to work as expected.

  • Interface Segregation Principle (ISP): You define fine-grained interfaces, avoiding clients’ dependency on methods they don’t use and keeping interfaces specific to their respective contexts.

  • Dependency Inversion Principle (DIP): You rely on abstractions and decouple high-level modules from low-level modules, promoting flexibility and testability in your code.

These principles provide a strong foundation for writing high-quality, adaptable, and scalable software systems.