Free Solved Question Paper of MCS021 - Data and File Structure for June 2023


1. (a) Write an algorithm for multiplication of two n × n matrices. Calculate both time and space complexity for this algorithm. [10 marks]

Answer :

Algorithm for Multiplication of Two n × n Matrices:
Step 1: Initialize an empty matrix C of size n × n to store the result.
Step 2: For each element C[i][j] in matrix C, do the following:

→ Step 2.1: Set C[i][j] to 0.
→ Step 2.2: For each k from 0 to n−1, add the product of the corresponding elements of matrices A and B to C[i][j]:


// Multiply two n x n matrices A and B, storing the result in C
for (i = 0; i < n; i++) {              // row index of C
    for (j = 0; j < n; j++) {          // column index of C
        C[i][j] = 0;
        for (k = 0; k < n; k++) {      // dot product of row i of A and column j of B
            C[i][j] += A[i][k] * B[k][j];
        }
    }
}

Explanation:
The above algorithm uses three nested loops to perform the matrix multiplication:
ā†’ The outer loop iterates over the rows of the resultant matrix C.
ā†’ The middle loop iterates over the columns of the resultant matrix C.
ā†’ The innermost loop calculates the dot product of the corresponding row from matrix A and the column from matrix B.

For each element C[i][j] in the resultant matrix C, the innermost loop computes the sum of products:
C[i][j] = Σ (A[i][k] × B[k][j]) for k = 0 to n−1.

Time Complexity:
The time complexity of this algorithm is O(n³). This is because the algorithm contains three nested loops, each iterating n times. Thus, the total number of operations is proportional to n × n × n, i.e. O(n³).

Space Complexity:
The space complexity of this algorithm is O(n²). This is due to the storage required for the resultant matrix C, which is of size n × n. The input matrices A and B are also of size n × n, but since they are provided as input, they do not contribute to the additional space complexity.

1. (b) What is a sparse matrix? Write an algorithm that accepts a 6 × 5 sparse matrix and outputs the 3-tuple representation of the matrix. [10 marks]

Answer :

Sparse Matrix:
A sparse matrix is a matrix in which most of the elements are zero. A matrix is typically treated as sparse when only a small fraction of its elements (often quoted as under about 10%) are non-zero, so that storing only the non-zero values saves a significant amount of space.

Algorithm for 3-tuple representation of a 6 × 5 sparse matrix:

ā†’ Step 1: Initialize a counter count to 0 for non-zero elements.
ā†’ Step 2: Create a 2D array tuple to store the 3-tuple representation.
ā†’ Step 3: Traverse the sparse matrix and for each non-zero element:
ā†’ ā†’ Step 3.1: Increment count.
ā†’ ā†’ Step 3.2: Add a new row to tuple with [row_index, column_index, value].
ā†’ Step 4: Set the first row of tuple as [row_count, column_count, count].
ā†’ Step 5: Output the tuple array.

Implementation in C-like pseudocode:

void sparse_to_3tuple(int matrix[6][5], int tuple[][3]) {
    int count = 0;
    
    // Count non-zero elements
    for (int i = 0; i < 6; i++) {
        for (int j = 0; j < 5; j++) {
            if (matrix[i][j] != 0) {
                count++;
            }
        }
    }
    
    // Set first row of tuple
    tuple[0][0] = 6;  // Number of rows
    tuple[0][1] = 5;  // Number of columns
    tuple[0][2] = count;  // Number of non-zero elements
    
    // Fill the tuple array
    int k = 1;
    for (int i = 0; i < 6; i++) {
        for (int j = 0; j < 5; j++) {
            if (matrix[i][j] != 0) {
                tuple[k][0] = i;
                tuple[k][1] = j;
                tuple[k][2] = matrix[i][j];
                k++;
            }
        }
    }
}
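A small driver (the 6 × 5 matrix below is just an assumed example, with the function above in the same file) shows how it might be called and its output printed:

#include <stdio.h>

int main(void) {
    int matrix[6][5] = {
        {0, 0, 3, 0, 4},
        {0, 0, 5, 7, 0},
        {0, 0, 0, 0, 0},
        {0, 2, 6, 0, 0},
        {0, 0, 0, 0, 0},
        {1, 0, 0, 0, 0}
    };
    int tuple[31][3];   /* header row + at most 30 non-zero entries */

    sparse_to_3tuple(matrix, tuple);

    /* Row 0 is the header; rows 1..count describe the non-zero elements. */
    for (int k = 0; k <= tuple[0][2]; k++)
        printf("%d %d %d\n", tuple[k][0], tuple[k][1], tuple[k][2]);

    return 0;
}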

Explanation:
The algorithm first counts the number of non-zero elements in the matrix. It then creates a 3-tuple representation where:
ā†’ The first row contains: [number of rows, number of columns, number of non-zero elements]
ā†’ Each subsequent row represents a non-zero element as: [row index, column index, value]

Time Complexity: O(m × n), where m is the number of rows (6) and n is the number of columns (5).
Space Complexity: O(k), where k is the number of non-zero elements plus one (for the first row).

This 3-tuple representation is an efficient way to store sparse matrices, as it only stores information about non-zero elements, significantly reducing memory usage for matrices with many zero elements.

1. (c) Write an algorithm for array implementation of linked list. [10 marks]

Answer :

Array Implementation of Linked List:
An array implementation of a linked list uses an array to store the elements and another array to store the "next" pointers. This approach is also known as a "static linked list".

Algorithm for Array Implementation of Linked List:

1. Initialization:
ā†’ Create an array data[] to store the elements.
ā†’ Create an array next[] to store the indices of the next elements.
ā†’ Initialize a variable head to -1 (indicating an empty list).
ā†’ Initialize a variable availablePos to 0 (first available position in the array).

2. Insertion Algorithm:

ā†’ Check if availablePos is greater than or equal to the maximum size of the array.
ā€ƒā†’ If yes, return "List is full".
ā†’ Store the element at data[availablePos].
ā†’ Update the next[availablePos] to head.
ā†’ Update head to availablePos.
ā†’ Increment availablePos by 1.

3. Deletion Algorithm (deleting the first element):

ā†’ Check if head is -1.
ā€ƒā†’ If yes, return "List is empty".
ā†’ Store the element to be deleted from data[head].
ā†’ Move head to the next element indicated by next[head].
ā†’ Return the deleted element.

4. Traversal Algorithm:

ā†’ Initialize current to head.
ā†’ While current is not -1, do the following:
ā€ƒā†’ Print the element at data[current].
ā€ƒā†’ Move current to the next element indicated by next[current].
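A minimal C sketch of these operations (the array names and MAX_SIZE are illustrative; freed slots are not reused, matching the simple scheme described above):

#include <stdio.h>

#define MAX_SIZE 100

int data[MAX_SIZE];        /* element storage */
int next[MAX_SIZE];        /* index of the next element; -1 marks the end of the list */
int head = -1;             /* -1 means the list is empty */
int availablePos = 0;      /* first free slot in the arrays */

/* Insert a new element at the beginning of the list. */
int insertAtBeginning(int value) {
    if (availablePos >= MAX_SIZE)
        return -1;                    /* list is full */
    data[availablePos] = value;
    next[availablePos] = head;        /* new node points to the old head */
    head = availablePos;
    availablePos++;
    return 0;
}

/* Delete the first element and return it through *value. */
int deleteFromBeginning(int *value) {
    if (head == -1)
        return -1;                    /* list is empty */
    *value = data[head];
    head = next[head];                /* second node becomes the new head */
    return 0;
}

/* Print the elements in list order. */
void traverse(void) {
    for (int current = head; current != -1; current = next[current])
        printf("%d ", data[current]);
    printf("\n");
}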

Advantages:
ā†’ Constant time insertion at the beginning of the list.
ā†’ No dynamic memory allocation required.
ā†’ Better cache performance due to contiguous memory allocation.

Disadvantages:
ā†’ Fixed size, cannot grow beyond the initial array size.
ā†’ Inefficient for large lists with many insertions and deletions.
ā†’ Does not support efficient insertion at arbitrary positions.

Time Complexity:
ā†’ Insertion at the beginning: O(1)
ā†’ Deletion from the beginning: O(1)
ā†’ Traversal: O(n), where n is the number of elements

Space Complexity: O(n), where n is the maximum number of elements that can be stored.

1. (d) What is a binary search? Write an algorithm for binary search and find its complexity. [10 marks]

Answer :

Binary Search:
Binary search is an efficient algorithm for searching an element in a sorted array by repeatedly dividing the search interval in half. It compares the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated, and the search continues on the remaining half until the target is found or it is clear the target is not in the array.

Algorithm for Binary Search:

binarySearch(arr, target, low, high):
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid  // Target found
        elif arr[mid] < target:
            low = mid + 1  // Target is in the upper half
        else:
            high = mid - 1  // Target is in the lower half
    return -1  // Target not found

Explanation of the Algorithm:
ā†’ Initialize two pointers: low at the start of the array and high at the end.
ā†’ While low is less than or equal to high:
ā†’ ā†’ Calculate the middle index mid.
ā†’ ā†’ If the middle element is the target, return its index.
ā†’ ā†’ If the target is greater than the middle element, search the upper half.
ā†’ ā†’ If the target is less than the middle element, search the lower half.
ā†’ If the loop ends without finding the target, return -1.

Complexity Analysis:
Time Complexity: O(log n)
ā†’ In each step, the algorithm divides the search interval in half.
ā†’ The number of steps required is logarithmic in the size of the input array.
ā†’ Worst and average case: O(log n)
ā†’ Best case (when the middle element is the target): O(1)

Space Complexity: O(1)
ā†’ The algorithm uses a constant amount of extra space regardless of the input size.
ā†’ It only requires a few variables (low, high, mid) to keep track of the search interval.

Advantages of Binary Search:
ā†’ Very efficient for large sorted datasets.
ā†’ Significantly faster than linear search for large arrays.
ā†’ Useful in many algorithms and data structures (e.g., binary search trees).

Limitations:
ā†’ Requires the array to be sorted beforehand.
ā†’ Not efficient for small arrays or frequently changing datasets where sorting overhead is significant.

2. (a) What is a Splay Tree? Explain how it is different from a binary tree. [10 marks]

Answer :

Splay Tree:
A splay tree is a self-adjusting binary search tree with the additional property that recently accessed elements are quick to access again. It performs basic operations such as insertion, search and deletion in O(log n) amortized time.

Key Features of Splay Trees:
ā†’ Self-adjusting: After an access, the tree is restructured using a splay operation.
ā†’ Splay operation: Moves the accessed node to the root through a series of rotations.
ā†’ Amortized efficiency: While individual operations can be O(n), the average over a sequence of operations is O(log n).

Differences from a Regular Binary Tree:

1. Structure Modification:
ā†’ Binary Tree: Structure remains static after insertion unless explicitly balanced.
ā†’ Splay Tree: Structure changes with each access, bringing the accessed node to the root.

2. Balance:
ā†’ Binary Tree: Can become highly unbalanced, leading to O(n) operations in worst case.
ā†’ Splay Tree: Maintains a rough balance through splaying, ensuring amortized O(log n) operations.

3. Access Patterns:
ā†’ Binary Tree: Performance doesn't adapt to access patterns.
ā†’ Splay Tree: Automatically brings frequently accessed items closer to the root, improving future access times.

4. Complexity:
ā†’ Binary Tree: Operations have O(h) worst-case time, where h is the height of the tree.
ā†’ Splay Tree: Operations have O(log n) amortized time, regardless of the tree's current shape.

5. Implementation:
ā†’ Binary Tree: Simpler implementation with straightforward insert, delete, and search operations.
ā†’ Splay Tree: More complex implementation due to the splay operation and various rotation cases.

6. Memory Usage:
ā†’ Binary Tree: Consistent memory usage.
ā†’ Splay Tree: May require more memory operations due to frequent restructuring.

7. Use Cases:
ā†’ Binary Tree: General-purpose tree structure, good for stable datasets.
ā†’ Splay Tree: Excellent for applications with locality of reference or where recent items are likely to be accessed again.

Conclusion:
While both are binary search trees, splay trees offer a unique self-adjusting property that can provide significant performance benefits in certain scenarios, especially when access patterns exhibit temporal locality. However, this comes at the cost of more complex implementation and potentially more frequent tree restructuring operations.

2. (b) Traverse the following binary tree in Pre-order and In-order: [10 marks]

Answer :

The binary tree given in the question:

        A
       / \
      B   D
     /   / \
    C   E   F
       /
      G

Pre-order Traversal:

In pre-order traversal, we visit the root node first, then the left subtree, and finally the right subtree. The process is as follows:

A ā†’ B ā†’ C ā†’ D ā†’ E ā†’ G ā†’ F

Explanation of Pre-order traversal:

1. Start at root A and visit it
2. Move to left child B and visit it
3. Move to B's left child C and visit it
4. Return to A and move to right child D, visit it
5. Move to D's left child E and visit it
6. Move to E's left child G and visit it
7. Return to D and visit its right child F

In-order Traversal:

In in-order traversal, we visit the left subtree first, then the root node, and finally the right subtree. The process is as follows:

C ā†’ B ā†’ A ā†’ G ā†’ E ā†’ D ā†’ F

Explanation of In-order traversal:

1. Start at the leftmost node C and visit it
2. Move up to parent B and visit it
3. Move up to root A and visit it
4. Move to A's right subtree, then to the leftmost node G and visit it
5. Move up to parent E and visit it
6. Move up to D and visit it
7. Finally, visit D's right child F

Key Points:

ā†’ Pre-order traversal is useful for creating a copy of the tree or prefix expression of an expression tree.
ā†’ In-order traversal of a binary search tree gives nodes in non-decreasing order.
ā†’ These traversals are fundamental in many tree-based algorithms and operations.

3. (a) Explain Quick sort algorithm. Sort the following set of data using this algorithm. Show intermediate steps of sorting: 20, 6, 8, 19, 36, 4, 28, 50 [10 marks]

Answer :

Quick Sort Algorithm:
Quick sort is a divide-and-conquer algorithm that works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. The sub-arrays are then sorted recursively.

Steps of Quick Sort:
1. Choose a pivot element from the array.
2. Partition the array around the pivot, such that:
ā†’ Elements smaller than the pivot are on the left.
ā†’ Elements greater than the pivot are on the right.
3. Recursively apply steps 1-2 to the sub-array of elements with smaller values and the sub-array of elements with greater values.
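A C sketch of the algorithm (illustrative; it uses the first element as pivot, as in the trace below, though the exact intermediate arrangements depend on the partition scheme):

void quicksort(int a[], int low, int high) {
    if (low < high) {
        int pivot = a[low];              /* first element as pivot */
        int i = low, j = high;
        while (i < j) {
            while (i < high && a[i] <= pivot) i++;   /* scan right for an element > pivot */
            while (a[j] > pivot) j--;                /* scan left for an element <= pivot */
            if (i < j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
        }
        /* place the pivot in its final position */
        int t = a[low]; a[low] = a[j]; a[j] = t;

        quicksort(a, low, j - 1);        /* sort elements smaller than the pivot */
        quicksort(a, j + 1, high);       /* sort elements greater than the pivot */
    }
}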

Sorting the given data: 20, 6, 8, 19, 36, 4, 28, 50

Step 1 (First partition):
Choose 20 as pivot; smaller elements go to its left, larger ones to its right:
[6, 8, 19, 4] 20 [36, 28, 50]

Step 2 (Recursion on left sub-array):
[6, 8, 19, 4] - choose 6 as pivot:
[4] 6 [8, 19]; recursing on [8, 19] leaves it in order, giving [4, 6, 8, 19] - left side sorted

Step 3 (Recursion on right sub-array):
[36, 28, 50] - choose 36 as pivot:
[28] 36 [50]; both sub-arrays are single elements, giving [28, 36, 50] - right side sorted

Final sorted array:
[4, 6, 8, 19, 20, 28, 36, 50]

Time Complexity:
ā†’ Average case: O(n log n)
ā†’ Worst case: O(nĀ²) - occurs when the pivot is always the smallest or largest element
ā†’ Best case: O(n log n) - occurs when the pivot always divides the array in half

Space Complexity: O(log n) due to the recursive call stack

Advantages of Quick Sort:
ā†’ In-place sorting (requires little additional memory)
ā†’ Very efficient for large datasets
ā†’ Good cache performance

Disadvantages:
ā†’ Unstable sort (doesn't preserve the relative order of equal elements)
ā†’ Worst-case time complexity of O(nĀ²)

3. (b) What is an Indexed Sequential File Organization? How is it different from direct file organization? Explain. [10 marks]

Answer :

Indexed Sequential File Organization:
Indexed Sequential File Organization is a method of organizing and accessing data in a file that combines features of both sequential and indexed file organizations. It allows both sequential access and direct access to records.

Key Features of Indexed Sequential File Organization:
ā†’ Records are stored in sequential order based on a key field.
ā†’ An index is maintained to allow direct access to records.
ā†’ Supports both sequential and random access to records.

Differences from Direct File Organization:

1. Record Organization:
ā†’ Indexed Sequential: Records are stored in a sorted order based on a key field.
ā†’ Direct: Records are stored at locations determined by a hashing function applied to the key.

2. Access Method:
ā†’ Indexed Sequential: Supports both sequential and direct access.
ā†’ Direct: Primarily supports direct access to records.

3. Index Structure:
ā†’ Indexed Sequential: Uses a multi-level index structure (cylinder index, track index, etc.).
ā†’ Direct: Typically doesn't use an index structure; relies on the hashing function.

4. Search Efficiency:
ā†’ Indexed Sequential: Efficient for both range queries and individual record retrieval.
ā†’ Direct: Very efficient for individual record retrieval, less so for range queries.

5. Space Utilization:
ā†’ Indexed Sequential: May have some unused space due to the need for sequential ordering.
ā†’ Direct: Can have better space utilization but may suffer from collisions.

6. Insertion and Deletion:
ā†’ Indexed Sequential: Insertions and deletions can be complex, often requiring reorganization.
ā†’ Direct: Insertions and deletions are generally simpler and don't require reorganization.

7. Suitability:
ā†’ Indexed Sequential: Suited for applications requiring both sequential processing and random access.
ā†’ Direct: Best for applications requiring frequent, rapid access to individual records.

Conclusion:
Indexed Sequential File Organization offers a balance between the efficiency of direct access and the flexibility of sequential access. It's particularly useful in scenarios where both types of access are required, such as in database management systems. Direct File Organization, on the other hand, excels in situations where rapid access to individual records is the primary requirement, but it lacks the sequential access capabilities of Indexed Sequential organization.

4. (a) What is a spanning tree? What are its applications? Write Kruskal's algorithm to find minimum cost spanning tree and explain it in terms of its complexity. [10 marks]

Answer :

Spanning Tree:
A spanning tree of an undirected graph is a subgraph that includes all the vertices of the graph and is a tree (i.e., it has no cycles). A minimum spanning tree (MST) is a spanning tree with weight less than or equal to the weight of every other spanning tree.

Applications of Spanning Trees:
ā†’ Network design (e.g., computer networks, telecommunications)
ā†’ Circuit design in electrical engineering
ā†’ Transportation networks and civil engineering
ā†’ Clustering and classification in data analysis
ā†’ Image processing and computer vision

Kruskal's Algorithm for Minimum Spanning Tree:

KruskalMST(Graph G):
    A = ∅ (A will contain the edges of the MST)
    Sort all edges of G in non-decreasing order of weight
    Create a disjoint set for each vertex
    For each edge (u,v) in the sorted edge list:
        If Find(u) ≠ Find(v):
            Add edge (u,v) to A
            Union(u,v)
    Return A

Explanation of Kruskal's Algorithm:
1. Start with an empty set A to store MST edges.
2. Sort all edges in non-decreasing order of weight.
3. For each edge, check if it forms a cycle with the spanning tree formed so far:
ā†’ If no cycle is formed, add the edge to A.
ā†’ If a cycle is formed, discard the edge.
4. Repeat step 3 until (n-1) edges are added to A, where n is the number of vertices.
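The Find and Union operations used above come from a disjoint-set (union-find) structure. A minimal array-based sketch in C (the names and MAXV are illustrative) is:

#define MAXV 100

int parent[MAXV], rnk[MAXV];

void makeSets(int n) {                 /* one singleton set per vertex */
    for (int v = 0; v < n; v++) { parent[v] = v; rnk[v] = 0; }
}

int find(int v) {                      /* find the set root, with path compression */
    if (parent[v] != v)
        parent[v] = find(parent[v]);
    return parent[v];
}

void unionSets(int u, int v) {         /* union by rank */
    int ru = find(u), rv = find(v);
    if (ru == rv) return;
    if (rnk[ru] < rnk[rv])      parent[ru] = rv;
    else if (rnk[ru] > rnk[rv]) parent[rv] = ru;
    else { parent[rv] = ru; rnk[ru]++; }
}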

Complexity Analysis:
Time Complexity: O(E log E) or O(E log V)
ā†’ Sorting edges: O(E log E)
ā†’ Find and Union operations: O(log V) each
ā†’ Total: O(E log E) + O(E log V) = O(E log E) (since E ā‰¤ VĀ²)

Space Complexity: O(E + V)
ā†’ O(E) for sorting edges
ā†’ O(V) for disjoint set data structure

Optimizations:
ā†’ Using efficient disjoint set data structures (e.g., union by rank, path compression) can improve the time complexity to nearly O(E).
ā†’ For dense graphs, Prim's algorithm might be more efficient.

Advantages of Kruskal's Algorithm:
ā†’ Simple to implement
ā†’ Works well for sparse graphs
ā†’ Can be easily adapted for distributed systems

Limitations:
ā†’ Not as efficient for dense graphs compared to Prim's algorithm
ā†’ Requires sorting of all edges, which can be memory-intensive for large graphs

4. (b) Define AVL tree. Write any two applications of AVL tree. [10 marks]

Answer :

AVL Tree:
An AVL tree is a self-balancing binary search tree where the heights of the left and right subtrees of any node differ by at most one. This balance is maintained through rotation operations performed after insertions and deletions.

Key Properties:
ā†’ Balance Factor: For each node, the height difference between left and right subtrees is at most 1.
ā†’ Self-Balancing: Automatically rebalances after insertions and deletions.
ā†’ Height: The height of an AVL tree is always O(log n), where n is the number of nodes.

Two Applications of AVL Trees:

1. Database Indexing:
ā†’ AVL trees are used in database systems to create and maintain indices.
ā†’ Ensures fast data retrieval, insertion, and deletion operations.
ā†’ Maintains balance even with frequent updates, ensuring consistent performance.

2. In-memory Sorting and Searching:
ā†’ Used in memory management systems for efficient allocation and deallocation.
ā†’ Provides fast in-memory sorting and searching capabilities.
ā†’ Useful in applications requiring frequent lookups and modifications, such as spell checkers or symbol tables in compilers.

Advantages of AVL Trees:
ā†’ Guaranteed O(log n) time complexity for search, insert, and delete operations.
ā†’ Automatic balancing ensures consistent performance regardless of input order.
ā†’ Efficient for applications with frequent lookups and less frequent modifications.

Limitations:
ā†’ More complex implementation compared to simple binary search trees.
ā†’ Higher memory overhead due to balance factor storage and more frequent rotations.
ā†’ May be overkill for small datasets or applications with infrequent updates.

5. (a) Write algorithms for the following:
(i) To create doubly linked list.
(ii) To delete an element from a doubly linked list. [10 marks]

Answer :

(i) Algorithm to Create a Doubly Linked List:

CreateDoublyLinkedList():
    head = null
    tail = null

InsertNode(data):
    newNode = new Node(data)
    if head is null:
        head = newNode
        tail = newNode
    else:
        tail.next = newNode
        newNode.prev = tail
        tail = newNode

Explanation:
ā†’ Initialize empty list with head and tail as null.
ā†’ For each insertion, create a new node.
ā†’ If list is empty, set both head and tail to the new node.
ā†’ Otherwise, add the new node at the end and update pointers.

(ii) Algorithm to Delete an Element from a Doubly Linked List:

DeleteNode(key):
    if head is null:
        return  // List is empty

    current = head
    while current is not null and current.data != key:
        current = current.next

    if current is null:
        return  // Key not found

    if current == head:
        head = current.next
    else:
        current.prev.next = current.next

    if current == tail:
        tail = current.prev
    else:
        current.next.prev = current.prev

    delete current

Explanation:
ā†’ Traverse the list to find the node with the given key.
ā†’ If found, update the pointers of adjacent nodes.
ā†’ Handle special cases: deleting head, tail, or the only node.
ā†’ Free the memory of the deleted node.

Time Complexity:
ā†’ Creation (inserting n elements): O(n)
ā†’ Deletion: O(n) in worst case (searching for the element)

Space Complexity:
ā†’ O(1) for both operations (excluding the space for the list itself)

5. (b) What is a stack? Explain PUSH and POP operations of stack with the help of algorithms for each operation. [10 marks]

Answer :

Stack:
A stack is a linear data structure that follows the Last In First Out (LIFO) principle. Elements can be added or removed only from one end, called the top of the stack.

Key Characteristics:
ā†’ LIFO (Last In First Out) ordering
ā†’ Operations are performed at one end only (top)
ā†’ Main operations: PUSH (insert) and POP (remove)

1. PUSH Operation:
PUSH adds an element to the top of the stack.

PUSH(stack, max_size, element):
    if top >= max_size - 1:
        return "Stack Overflow"
    top = top + 1
    stack[top] = element

Explanation of PUSH:
ā†’ Check if stack is full (top == max_size - 1).
ā†’ If not full, increment top.
ā†’ Add the new element at the top position.

2. POP Operation:
POP removes and returns the top element from the stack.

POP(stack):
    if top < 0:
        return "Stack Underflow"
    element = stack[top]
    top = top - 1
    return element

Explanation of POP:
ā†’ Check if stack is empty (top < 0).
ā†’ If not empty, store the top element.
ā†’ Decrement top.
ā†’ Return the stored element.

Time Complexity:
ā†’ PUSH: O(1)
ā†’ POP: O(1)

Space Complexity:
ā†’ O(1) for both operations

Applications of Stack:
ā†’ Function call management (Call Stack)
ā†’ Expression evaluation and syntax parsing
ā†’ Undo mechanisms in text editors
ā†’ Backtracking algorithms
ā†’ Browser history (back button functionality)

Advantages of Stack:
ā†’ Simple and efficient implementation
ā†’ Fast access to the most recently added element
ā†’ Useful for algorithms that need to backtrack or undo operations

Limitations:
ā†’ Limited access (only top element is directly accessible)
ā†’ Fixed size in array implementation (can lead to stack overflow)
ā†’ Not suitable for problems requiring random access to elements
