Recent Updates

  • loctv 8:32 pm on December 16, 2019

    Topological Sort and Tasks Scheduling (Kahn’s algorithm) 

    Topological Sort of a directed graph (a graph with directed, one-way edges) is a linear ordering of its vertices such that for every directed edge (U, V) from vertex U to vertex V, U comes before V in the ordering.

    Given a directed graph, find the topological ordering of its vertices.

    Example 1:

    Input: Vertices=4, Edges=[3, 2], [3, 0], [2, 0], [2, 1]
    Output: Following are the two valid topological sorts for the given graph:
    1) 3, 2, 0, 1
    2) 3, 2, 1, 0


    Example 2:

    Input: Vertices=5, Edges=[4, 2], [4, 3], [2, 0], [2, 1], [3, 1]
    Output: Following are all valid topological sorts for the given graph:
    1) 4, 2, 3, 0, 1
    2) 4, 3, 2, 0, 1
    3) 4, 3, 2, 1, 0
    4) 4, 2, 3, 1, 0
    5) 4, 2, 0, 3, 1


    Here are a few fundamental concepts related to topological sort:

    1. Source: Any node that has no incoming edge and has only outgoing edges is called a source.
    2. Sink: Any node that has only incoming edges and no outgoing edge is called a sink.
    3. So, we can say that a topological ordering starts with one of the sources and ends at one of the sinks.
    4. A topological ordering is possible only when the graph has no directed cycles, i.e. if the graph is a Directed Acyclic Graph (DAG). If the graph has a cycle, some vertices will have cyclic dependencies which makes it impossible to find a linear ordering among vertices.

    To find the topological sort of a graph we can traverse the graph in a Breadth First Search (BFS) manner using Kahn’s algorithm, or use Depth First Search (DFS). With Kahn’s algorithm, we start with all the sources and, in a stepwise fashion, save all sources to a sorted list. We then remove all sources and their edges from the graph. After removing the edges, we will have new sources, so we repeat the above process until all vertices are visited.

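    Here is a minimal Java sketch of Kahn’s algorithm following the steps described above (the class and method names are illustrative):

    import java.util.*;

    public class TopologicalSort {
        public static List<Integer> sort(int vertices, int[][] edges) {
            List<Integer> sortedOrder = new ArrayList<>();
            if (vertices <= 0)
                return sortedOrder;

            // a. initialize the graph: in-degree counts and adjacency lists
            Map<Integer, Integer> inDegree = new HashMap<>();
            Map<Integer, List<Integer>> graph = new HashMap<>();
            for (int i = 0; i < vertices; i++) {
                inDegree.put(i, 0);
                graph.put(i, new ArrayList<>());
            }

            // b. build the graph from the edge list
            for (int[] edge : edges) {
                int parent = edge[0], child = edge[1];
                graph.get(parent).add(child);
                inDegree.put(child, inDegree.get(child) + 1);
            }

            // c. find all sources, i.e. vertices with in-degree '0'
            Queue<Integer> sources = new LinkedList<>();
            for (Map.Entry<Integer, Integer> entry : inDegree.entrySet())
                if (entry.getValue() == 0)
                    sources.add(entry.getKey());

            // d. add each source to the sorted list and decrement the in-degree of its
            //    children; a child whose in-degree drops to '0' becomes a new source
            while (!sources.isEmpty()) {
                int vertex = sources.poll();
                sortedOrder.add(vertex);
                for (int child : graph.get(vertex)) {
                    inDegree.put(child, inDegree.get(child) - 1);
                    if (inDegree.get(child) == 0)
                        sources.add(child);
                }
            }

            // if the sorted order doesn't contain all vertices, the graph has a cycle
            if (sortedOrder.size() != vertices)
                return new ArrayList<>();

            return sortedOrder;
        }
    }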

    Time Complexity

    In step ‘d’, each vertex will become a source only once and each edge will be accessed and removed once. Therefore, the time complexity of the above algorithm will be O(V+E), where ‘V’ is the total number of vertices and ‘E’ is the total number of edges in the graph.

    Space Complexity

    The space complexity will be O(V+E), since we are storing all of the edges for each vertex in an adjacency list.

    Tasks Scheduling

    There are ‘N’ tasks, labeled from ‘0’ to ‘N-1’. Each task can have some prerequisite tasks which need to be completed before it can be scheduled. Given the number of tasks and a list of prerequisite pairs, find out if it is possible to schedule all the tasks.

    Example 1:

    Input: Tasks=3, Prerequisites=[0, 1], [1, 2]
    Output: true
    Explanation: To execute task '1', task '0' needs to finish first. Similarly, task '1' needs to finish 
    before '2' can be scheduled. A possible scheduling of tasks is: [0, 1, 2] 
    

    Example 2:

    Input: Tasks=3, Prerequisites=[0, 1], [1, 2], [2, 0]
    Output: false
    Explanation: The tasks have cyclic dependency, therefore they cannot be scheduled.

    As you can see, we can represent each task and its dependency as an edge in a directed graph. Only a small change to the code above is needed to solve this problem, so I’ll leave it as an exercise for you: instead of returning a list, you just need to return a boolean.

    Another application of Topological Sort is to check if a directed graph contains a cycle.
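
    If you want to check your answer, here is one possible sketch (the method name is mine): run the same Kahn’s-style counting as above, and if fewer vertices get processed than there are tasks, the graph has a cycle and scheduling is impossible.

    import java.util.*;

    public class TaskScheduling {
        public static boolean isSchedulingPossible(int tasks, int[][] prerequisites) {
            // build in-degree counts and adjacency lists, exactly as in the topological sort above
            Map<Integer, Integer> inDegree = new HashMap<>();
            Map<Integer, List<Integer>> graph = new HashMap<>();
            for (int i = 0; i < tasks; i++) {
                inDegree.put(i, 0);
                graph.put(i, new ArrayList<>());
            }
            for (int[] prerequisite : prerequisites) {
                int parent = prerequisite[0], child = prerequisite[1];
                graph.get(parent).add(child);
                inDegree.put(child, inDegree.get(child) + 1);
            }

            Queue<Integer> sources = new LinkedList<>();
            for (Map.Entry<Integer, Integer> entry : inDegree.entrySet())
                if (entry.getValue() == 0)
                    sources.add(entry.getKey());

            int scheduled = 0;
            while (!sources.isEmpty()) {
                int task = sources.poll();
                scheduled++;
                for (int child : graph.get(task)) {
                    inDegree.put(child, inDegree.get(child) - 1);
                    if (inDegree.get(child) == 0)
                        sources.add(child);
                }
            }

            // if we couldn't schedule every task, there is a cyclic dependency
            return scheduled == tasks;
        }
    }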

     

     

     

     
  • loctv 5:34 pm on December 15, 2019

    Understanding how to generate permutations in Java

    Given a set of distinct numbers, find all of its permutations.

    Permutation is defined as the re-arranging of the elements of the set. For example, {1, 2, 3} has the following six permutations:

    1. {1, 2, 3}
    2. {1, 3, 2}
    3. {2, 1, 3}
    4. {2, 3, 1}
    5. {3, 1, 2}
    6. {3, 2, 1}

    If a set has ‘n’ distinct elements it will have n! permutations.

    Example 1:

    Input: [1,3,5]
    Output: [1,3,5], [1,5,3], [3,1,5], [3,5,1], [5,1,3], [5,3,1]

    Let’s take the example-1 mentioned above to generate all the permutations. Following a BFS approach, we will consider one number at a time:

    1. If the given set is empty then we have only an empty permutation set: []
    2. Let’s add the first element (1), the permutations will be: [1]
    3. Let’s add the second element (3), the permutations will be: [3,1], [1,3]
    4. Let’s add the third element (5), the permutations will be: [5,3,1], [3,5,1], [3,1,5], [5,1,3], [1,5,3], [1,3,5]

    Let’s analyze the permutations in the 3rd and 4th step. How can we generate permutations in the 4th step from the permutations of the 3rd step?

    If we look closely, we will realize that when we add a new number (5), we take each permutation of the previous step and insert the new number in every position to generate the new permutations. For example, inserting ‘5’ in different positions of [3,1] will give us the following permutations:

    1. Inserting ‘5’ before ‘3’: [5,3,1]
    2. Inserting ‘5’ between ‘3’ and ‘1’: [3,5,1]
    3. Inserting ‘5’ after ‘1’: [3,1,5]


    Here’s the non-recursive version:

      public static List<List<Integer>> findPermutations(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        Queue<List<Integer>> permutations = new LinkedList<>();
        permutations.add(new ArrayList<>());
        for (int currentNumber : nums) {
          // we will take all existing permutations and add the current number to create new permutations
          int n = permutations.size();
          for (int i = 0; i < n; i++) {
            List<Integer> oldPermutation = permutations.poll();
            // create a new permutation by adding the current number at every position
            for (int j = 0; j <= oldPermutation.size(); j++) {
              List<Integer> newPermutation = new ArrayList<>(oldPermutation);
              newPermutation.add(j, currentNumber);
              if (newPermutation.size() == nums.length)
                result.add(newPermutation);
              else
                permutations.add(newPermutation);
            }
          }
        }
        return result;
      }
    

    and its recursive version:

      public static List<List<Integer>> generatePermutations(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        generatePermutationsRecursive(nums, 0, new ArrayList<>(), result);
        return result;
      }

      private static void generatePermutationsRecursive(int[] nums, int index, List<Integer> currentPermutation,
          List<List<Integer>> result) {
        if (index == nums.length) {
          result.add(currentPermutation);
        } else {
          // create a new permutation by adding the current number at every position
          for (int i = 0; i <= currentPermutation.size(); i++) {
            List<Integer> newPermutation = new ArrayList<>(currentPermutation);
            newPermutation.add(i, nums[index]);
            generatePermutationsRecursive(nums, index + 1, newPermutation, result);
          }
        }
      }
    
     
  • loctv 1:29 pm on December 14, 2019

    Knapsack: Top-down, Memoization & Bottom-up 

    Introduction

    Given the weights and profits of ‘N’ items, we are asked to put these items in a knapsack which has a capacity ‘C’. The goal is to get the maximum profit out of the items in the knapsack. Each item can only be selected once, as we don’t have multiple quantities of any item.

    Let’s take the example of Merry, who wants to carry some fruits in the knapsack to get maximum profit. Here are the weights and profits of the fruits:

    Items: { Apple, Orange, Banana, Melon }
    Weights: { 2, 3, 1, 4 }
    Profits: { 4, 5, 3, 7 }
    Knapsack capacity: 5

    Let’s try to put various combinations of fruits in the knapsack, such that their total weight is not more than 5:

    Apple + Orange (total weight 5) => 9 profit
    Apple + Banana (total weight 3) => 7 profit
    Orange + Banana (total weight 4) => 8 profit
    Banana + Melon (total weight 5) => 10 profit

    This shows that Banana + Melon is the best combination as it gives us the maximum profit and the total weight does not exceed the capacity.

    Problem Statement

    Given two integer arrays to represent weights and profits of ‘N’ items, we need to find a subset of these items which will give us maximum profit such that their cumulative weight is not more than a given number ‘C’. Each item can only be selected once, which means either we put an item in the knapsack or we skip it.

    Basic Solution

    A basic brute-force solution could be to try all combinations of the given items (as we did above), allowing us to choose the one with maximum profit and a weight that doesn’t exceed ‘C’. To try all the combinations, our algorithm will look like:

      public int solveKnapsack(int[] profits, int[] weights, int capacity) {
        return knapsackRecursive(profits, weights, capacity, 0);
      }
    
      private int knapsackRecursive(int[] profits, int[] weights, int capacity, int currentIndex) {
        // base checks
        if (capacity <= 0 || currentIndex >= profits.length)
          return 0;
    
        // recursive call after choosing the element at the currentIndex
        // if the weight of the element at currentIndex exceeds the capacity, we shouldn't process this
        int profit1 = 0;
        if( weights[currentIndex] <= capacity )
            profit1 = profits[currentIndex] + knapsackRecursive(profits, weights,
                    capacity - weights[currentIndex], currentIndex + 1);
    
        // recursive call after excluding the element at the currentIndex
        int profit2 = knapsackRecursive(profits, weights, capacity, currentIndex + 1);
    
        return Math.max(profit1, profit2);
      }
    

    Complexity:

    For a recursive function that makes two calls to itself, the time complexity is O(2^n). Since the recursive calls behave like a depth-first search, the space complexity (for the call stack) is O(n).

    Top-down Dynamic Programming with Memoization

    Memoization is when we store the results of all the previously solved sub-problems and return the results from memory if we encounter a problem that has already been solved.

    Since we have two changing values (capacity and currentIndex) in our recursive function knapsackRecursive(), we can use a two-dimensional array to store the results of all the solved sub-problems. As mentioned above, we need to store results for every sub-array (i.e. for every possible index ‘i’) and for every possible capacity ‘c’.

      public int solveKnapsack(int[] profits, int[] weights, int capacity) {
        Integer[][] dp = new Integer[profits.length][capacity + 1];
        return knapsackRecursive(dp, profits, weights, capacity, 0);
      }
    
      private int knapsackRecursive(Integer[][] dp, int[] profits, int[] weights, int capacity,
          int currentIndex) {
    
        // base checks
        if (capacity <= 0 || currentIndex >= profits.length)
          return 0;
    
        // if we have already solved a similar problem, return the result from memory
        if(dp[currentIndex][capacity] != null)
          return dp[currentIndex][capacity];
    
        // recursive call after choosing the element at the currentIndex
        // if the weight of the element at currentIndex exceeds the capacity, we shouldn't process this
        int profit1 = 0;
        if( weights[currentIndex] <= capacity )
            profit1 = profits[currentIndex] + knapsackRecursive(dp, profits, weights,
                    capacity - weights[currentIndex], currentIndex + 1);
    
        // recursive call after excluding the element at the currentIndex
        int profit2 = knapsackRecursive(dp, profits, weights, capacity, currentIndex + 1);
    
        dp[currentIndex][capacity] = Math.max(profit1, profit2);
        return dp[currentIndex][capacity];
      }
    

    Time and Space complexity

    Since our memoization array dp[profits.length][capacity+1] stores the results for all subproblems, we can conclude that we will not have more than  N*C subproblems (where ‘N’ is the number of items and ‘C’ is the knapsack capacity). This means that our time complexity will be O(N*C).

    The above algorithm will use O(N*C) space for the memoization array. Other than that, we will use O(N) space for the recursion call-stack. So the total space complexity will be O(N*C + N), which is asymptotically equivalent to O(N*C).

    Bottom-up Dynamic Programming

    Let’s try to populate our dp[][] array from the above solution by working in a bottom-up fashion. Essentially, we want to find the maximum profit for every sub-array and for every possible capacity. This means that dp[i][c] will represent the maximum knapsack profit for capacity ‘c’ calculated for the first ‘i’ items.

    So, for each item at index ‘i’ (0 <= i < items.length) and capacity ‘c’ (0 <= c <= capacity), we have two options:

    1. Exclude the item at index ‘i’. In this case, we will take whatever profit we get from the sub-array excluding this item => dp[i-1][c]
    2. Include the item at index ‘i’ if its weight is not more than the capacity. In this case, we include its profit plus whatever profit we get from the remaining capacity and from previous items => profit[i] + dp[i-1][c-weight[i]]

    Finally, our optimal solution will be maximum of the above two values:

        dp[i][c] = max (dp[i-1][c], profit[i] + dp[i-1][c-weight[i]])

      public int solveKnapsack(int[] profits, int[] weights, int capacity) {
        // basic checks
        if (capacity <= 0 || profits.length == 0 || weights.length != profits.length)
          return 0;

        int n = profits.length;
        int[][] dp = new int[n][capacity + 1];

        // populate the capacity=0 columns, with '0' capacity we have '0' profit
        for(int i=0; i < n; i++)
          dp[i][0] = 0;

        // if we have only one weight, we will take it if it is not more than the capacity
        for(int c=0; c <= capacity; c++) {
          if(weights[0] <= c)
            dp[0][c] = profits[0];
        }

        // process all sub-arrays for all the capacities
        for(int i=1; i < n; i++) {
          for(int c=1; c <= capacity; c++) {
            int profit1 = 0, profit2 = 0;
            // include the item, if it is not more than the capacity
            if(weights[i] <= c)
              profit1 = profits[i] + dp[i-1][c-weights[i]];
            // exclude the item
            profit2 = dp[i-1][c];
            // take maximum
            dp[i][c] = Math.max(profit1, profit2);
          }
        }
    
        // maximum profit will be at the bottom-right corner.
        return dp[n-1][capacity];
      }
    

     

    Time and Space complexity

    The above solution has the time and space complexity of O(N*C), where ‘N’ represents total items and ‘C’ is the maximum capacity.

     
  • loctv 3:12 pm on December 7, 2019

    Fork/Join Framework & Merge Sort (Java)

    We all know that Merge Sort is one of the fastest sorting algorithms out there (on average); it’s based on the divide & conquer technique. In today’s world of computing, computers usually have more than one CPU, and to take advantage of all of those resources, we need to implement Merge Sort in a way that leverages all of those juicy CPUs.

    Today I am going to show you how to implement Merge Sort using Java’s Fork/Join framework. However, before we start, let’s get some background on the Fork/Join framework in Java.

    So what is the Fork/Join framework?

    This framework is based on three elements: the ForkJoinPool class in Java, which is a special kind of executor; two operations, the fork() and join() methods (and their variants); and an internal scheduling strategy known as the work-stealing algorithm, which decides which tasks get executed. When a task is waiting for a child task to finish via the join() method, the thread executing it steals another task from the pool of waiting tasks and starts executing that one instead. There are disadvantages, though: for example, work stealing doesn’t help if the child task is doing some I/O operations.

    From Java 8 onward, with the introduction of the mighty Stream API, parallel streams are implemented behind the scenes using the Fork/Join framework.
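
    For example, a parallel stream like the following toy example of mine has its work executed on the common ForkJoinPool behind the scenes:

    import java.util.stream.IntStream;

    public class ParallelStreamExample {
        public static void main(String[] args) {
            // the sum is computed in parallel on ForkJoinPool.commonPool()
            long sum = IntStream.rangeClosed(1, 1_000_000)
                    .parallel()
                    .asLongStream()
                    .sum();
            System.out.println(sum);
        }
    }

    The general divide-and-conquer shape of a Fork/Join task looks something like this: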

    if (problem.size() > DEFAULT_SIZE) {
        // the problem is still too big: split it into two child tasks
        childTask1 = new Task();
        childTask2 = new Task();
        childTask1.fork();                      // submit both children asynchronously
        childTask2.fork();
        childTaskResults1 = childTask1.join();  // wait for the children to finish
        childTaskResults2 = childTask2.join();
        taskResults = makeResults(childTaskResults1, childTaskResults2);
        return taskResults;
    } else {
        // base case: the problem is small enough to solve directly
        taskResults = solveBasicProblem();
        return taskResults;
    }
    

    It’s very similar to a normal recursive function: we have a base case, where the data is small enough to process directly; otherwise we split the data into two roughly equal parts and continue processing each part recursively.

    Similarly, the Merge Sort algorithm stops when the data to be processed is small enough (usually just 1 element).

    Here’s one of the ways you can implement a normal (serial) Merge Sort.

    public class SerialMergeSort {
    
    	public void mergeSort (Comparable data[], int start, int end) { 
    		if (end-start > 1) { 
    			int middle = (end + start) >>> 1; 
    			mergeSort(data,start,middle); 
    			mergeSort(data,middle,end); 
    			merge(data,start,middle,end); 
    		} 
    	}
    
    	private void merge(Comparable[] data, int start, int middle, int end) {
    		int length=end-start;
    		Comparable[] tmp=new Comparable[length];
    
    		int i, j, index;
    		i=start;
    		j=middle;
    		index=0;
    
    		while ((i<middle) && (j<end)) {
    			if (data[i].compareTo(data[j])<=0) {
    				tmp[index]=data[i++];
    			} else {
    				tmp[index]=data[j++];
    			}
    			index++;
    		}
    
    		while (i<middle) {
    			tmp[index++]=data[i++];
    		}
    
    		while (j<end) {
    			tmp[index++]=data[j++];
    		}
    
    		for (index=0; index < length; index++) {
    			data[index+start]=tmp[index];
    		}
    	}
    }
    

    With the Fork/Join framework, most of the time you will choose between RecursiveAction, which does not return anything, and RecursiveTask, a generic abstract class that returns something when it’s done. With Merge Sort, we need to do an extra processing step once the two child tasks are done (merging the two sorted halves). In this case our class will extend the CountedCompleter class. It has some important methods that we’re going to use:

    • compute(): this is where the body of the Merge Sort algorithm is implemented.
    • addToPendingCount(1): increases by 1 the number of child tasks that must complete before onCompletion() can be executed.
    • onCompletion(CountedCompleter caller): this method is called as soon as all of the child tasks have finished.
    • tryComplete(): if there are still pending child tasks, decrements the pending count; otherwise it runs onCompletion() and then propagates completion to the parent task.

    Here is the most important part in the compute() method:

    @Override
    	public void compute() {
    		if (end - start >= 1024) {
    			middle = (end + start) >>> 1;
    			MergeSortTask task1 = new MergeSortTask(data, start, middle, this);
    			MergeSortTask task2 = new MergeSortTask(data, middle, end, this);
    			addToPendingCount(1);
    			task1.fork();
    			task2.fork();
    		} else {
    			new SerialMergeSort().mergeSort(data, start, end);
    			tryComplete();
    		}
    	}
    

    Let’s explain the code snippet above.

    As you can see, it’s very similar to the serial Merge Sort. We use the SerialMergeSort for the base case, and we keep a reference to the parent task in case we need it. Internally, tryComplete() calls onCompletion() once there are no pending child tasks left; when onCompletion() finishes, it calls tryComplete() on the parent to try to complete that task as well.

    If the data is still too big (more than 1024 elements), we split it into two child MergeSortTasks and call addToPendingCount(1) so that the current task waits for both children before its onCompletion() runs. Then we call fork() to asynchronously submit task1 and task2.

    Here’s the onCompletion() part that you’re looking for:

    	@Override
    	public void onCompletion(CountedCompleter caller) {
    
    		if (middle == 0) {
    			return;
    		}
    		int length = end - start ;
    		Comparable tmp[] = new Comparable[length];
    
    		int i, j, index;
    		i = start;
    		j = middle;
    		index = 0;
    
    		while ((i < middle) && (j < end)) {
    			if (data[i].compareTo(data[j]) <= 0) {
    				tmp[index] = data[i++];
    			} else {
    				tmp[index] = data[j++];
    			}
    			index++;
    		}
    
    		while (i < middle) {
    			tmp[index++] = data[i++];
    		}
    
    		while (j < end) {
    			tmp[index++] = data[j++];
    		}
    
    		for (index = 0; index < length; index++) {
    			data[index + start] = tmp[index];
    		}
    
    	}
    

    It's the "merge" part of the SerialMergeSort.

    You can download the full source code here.
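
    For reference, a driver for the task might look like the following sketch. It assumes the MergeSortTask class described above, with the constructor used in the compute() snippet, where the root task has no parent completer:

    import java.util.concurrent.ForkJoinPool;

    public class ParallelMergeSortDemo {
        public static void main(String[] args) {
            Integer[] data = new Integer[1_000_000];
            for (int i = 0; i < data.length; i++) {
                data[i] = (int) (Math.random() * data.length);
            }
            // the root task covers the whole array and has no parent completer
            MergeSortTask task = new MergeSortTask(data, 0, data.length, null);
            ForkJoinPool.commonPool().invoke(task);  // blocks until the whole sort is complete
        }
    }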

    This post is already getting long for a daily blog, and I’m not a fan of posts that drag on. After running a benchmark on my laptop (4 cores / 8 threads), the improvement in terms of time is usually more than 2x on average.

     
  • loctv 2:02 pm on March 3, 2019

    Hackerrank: Waiter 

    You are a waiter at a party. There are N stacked plates on pile A0. Each plate has a number written on it. Then there will be Q iterations. In the i-th iteration, you start picking up the plates in A(i-1) from the top one by one and check whether the number written on the plate is divisible by the i-th prime. If the number is divisible, you stack that plate on pile Bi; otherwise, you stack it on pile Ai. After Q iterations, plates can only be on piles B1, B2, ..., BQ, AQ. Output the numbers on these plates from top to bottom of each pile, in the order B1, B2, ..., BQ, AQ.

    Input Format

    The first line contains two space separated integers, N and Q.
    The next line contains N space separated integers representing the initial pile of plates, i.e., A0. The leftmost value represents the bottom plate of the pile.

    Constraints

    • 1 <= N <=5 * 10^4
    • 2 <= number_ith <= 10^4
    • 1 <= Q <= 1200

    Output Format

    Output N lines. Each line contains a number written on the plate. Printing should be done in the order defined above.

    Sample Input 0

    5 1
    3 4 7 6 5
    

    Sample Output 0

    4
    6
    3
    7
    5
    

    Explanation 0

    Initially:

     A0 = [3, 4, 7, 6, 5]<-TOP

    After 1 iteration:

     A0 = []<-TOP

     B1 = [6, 4]<-TOP

     A1 = [5, 7, 3]<-TOP

    We should output numbers in B1  first from top to bottom, and then output numbers in A1 from top to bottom.

    Sample Input 1

    5 2
    3 3 4 4 9
    

    Sample Output 1

    4
    4
    9
    3
    3
    

    Explanation 1

    Initially:

    A0 = [3, 3, 4, 4, 9]<-TOP

    After 1 iteration:

    A0 = []<-TOP

    B1 = [4, 4]<-TOP

    B2 = [9, 3, 3]<-TOP

    After 2 iterations:

    A1 = []<-TOP

    B1 = [4, 4]<- TOP

    B2 = [3, 3, 9]<-TOP

    We should output numbers in B1  first from top to bottom, and then output numbers in B2 from top to bottom.


    You can tell that Explanation 1 is wrong, but it doesn’t matter. The problem description is quite confusing, but the problem itself is really simple.

    You’re given a pile of numbered plates and you have to repeatedly split it into two parts: the plates whose numbers are divisible by the current prime, and the plates whose numbers aren’t. The piles are simply stacks. In the i-th iteration, you go through stack A; if an element is divisible by the i-th prime, you push it onto stack B_i, otherwise you push it onto a new A stack, which reverses the original stack.

    Later on, either when the new stack A is empty or when we run out of iterations, we go through the B stacks, popping each element from each stack and printing it, and then we go through the final stack A, which may or may not be empty.
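
    Here’s a minimal sketch of that idea in Java (the names are mine, and reading input/printing is left out):

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    public class WaiterSketch {
        // the first q primes via trial division
        static List<Integer> firstPrimes(int q) {
            List<Integer> primes = new ArrayList<>();
            int candidate = 2;
            while (primes.size() < q) {
                boolean isPrime = true;
                for (int p : primes) {
                    if (p * p > candidate) break;
                    if (candidate % p == 0) { isPrime = false; break; }
                }
                if (isPrime) primes.add(candidate);
                candidate++;
            }
            return primes;
        }

        // 'numbers' holds A0 from bottom to top; returns the plates of B1..Bq and then Aq, top to bottom
        static List<Integer> waiter(int[] numbers, int q) {
            Deque<Integer> a = new ArrayDeque<>();
            for (int n : numbers) a.push(n);                // the leftmost value is the bottom plate
            List<Integer> output = new ArrayList<>();
            for (int prime : firstPrimes(q)) {
                Deque<Integer> b = new ArrayDeque<>();
                Deque<Integer> nextA = new ArrayDeque<>();
                while (!a.isEmpty()) {
                    int plate = a.pop();                    // take plates from the top of A(i-1)
                    if (plate % prime == 0) b.push(plate); else nextA.push(plate);
                }
                while (!b.isEmpty()) output.add(b.pop());   // B_i is emptied from top to bottom
                a = nextA;                                  // A_i becomes the pile for the next round
            }
            while (!a.isEmpty()) output.add(a.pop());       // finally, whatever is left on A_q
            return output;
        }
    }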


    This is my solution in Java:

    https://github.com/MrNocTV/Algorithms/blob/master/SolvedProblems/HackerRank/src/Waiter.java

     
  • loctv 10:20 am on March 3, 2019

    Hackerrank: Game of Two Stacks 

    Alexa has two stacks of non-negative integers, stack A = [a0, a1, …, an-1]  and stack  B = [b0, b1, …, bm-1] where index 0 denotes the top of the stack. Alexa challenges Nick to play the following game:

    • In each move, Nick can remove one integer from the top of either stack  A or stack B.
    • Nick keeps a running sum of the integers he removes from the two stacks.
    • Nick is disqualified from the game if, at any point, his running sum becomes greater than some integer x given at the beginning of the game.
    • Nick’s final score is the total number of integers he has removed from the two stacks.

    Given A, B, and x for g games, find the maximum possible score Nick can achieve (i.e., the maximum number of integers he can remove without being disqualified) during each game and print it on a new line.

    Input Format

    The first line contains an integer, g (the number of games). The 3*g subsequent lines describe each game in the following format:

    1. The first line contains three space-separated integers describing the respective values of n (the number of integers in stack A), m  (the number of integers in stack B), and x (the number that the sum of the integers removed from the two stacks cannot exceed).
    2. The second line contains n  space-separated integers describing the respective values of a0, a1, …, an-1.
    3. The third line contains m space-separated integers describing the respective values of b0, b1, …, bm-1.

    Constraints

    • 1 <= g <= 50
    • 1 <= n,m <= 10^5
    • 0 <= ai, bj <= 10^6
    • 1 <= x <= 10^9

    Subtasks

    • 1 <= n,m <= 100 for 50% of the maximum score.

    Output Format

    For each of the g  games, print an integer on a new line denoting the maximum possible score Nick can achieve without being disqualified.

    Sample Input 0

    1
    5 4 10
    4 2 4 6 1
    2 1 8 5
    

    Sample Output 0

    4
    

    Explanation 0

    We print 4 as our answer, because that is the maximum number of integers that can be removed from the two stacks without the sum exceeding 10. (There can be multiple ways to remove the integers from the stacks; the figures in the original problem show just one of them.)


    To be honest, I spent an hour on this problem and couldn’t find the optimal solution. Obviously there is a brute-force solution, but it takes O(2^k), where k is the longest sequence of moves that can be achieved, because at each step we can take either the top of A or the top of B. Hence we need two recursive calls per step, or two “branches” per step. According to Cracking the Coding Interview, the time complexity of such recursions is O(branches^depth); replace depth with k and branches with 2.

    I was thinking: to get the longest sequence, can we just choose the number from A or B that lets us go the furthest from that point? Say we can choose either 5 from A or 6 from B; if we choose 5, we can chop off 5 more elements of A, but if we choose 6 we can chop off 6 more elements of B, hence we choose B. This is sort of a greedy solution, and greedy doesn’t yield the optimal result all the time.

    So I clicked on the Editorial. The editorial doesn’t really explain the idea behind the solution; it just says “here is how you solve it”.

    The idea is that any longest sequence we can achieve is formed by chopping elements off the top of A and off the top of B until we run out of budget. The shape of the final sequence can be, for example:

    + 5 top elements of A, 4 top elements of B
    + 4 top elements of A, 0 top elements of B

    The final form is:
    + k1 top elements of A, k2 top elements of B

    I think at this point you can probably come up with a solution yourself, but if you can’t, let’s keep reading.

    To get:
    + 5 top of A, 4 top of B.
    We can go from
    + 9 top of A, 0 top of B.
    + 8 top of A, 1 top of B.
    ….
    + 5 top of A, 4 top of B.

    You got the idea, right?
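
    In other words: take the longest prefix of A that fits under the limit, then repeatedly add one element from B and give back elements from A until the sum fits again, keeping track of the best count. A minimal sketch of this (the method name is mine):

    public class GameOfTwoStacksSketch {
        // a[] and b[] hold the stacks from top to bottom; maxSum is the limit x
        static int maxScore(int maxSum, int[] a, int[] b) {
            long sum = 0;
            int countA = 0;
            // take as many elements from the top of A as the limit allows
            while (countA < a.length && sum + a[countA] <= maxSum) {
                sum += a[countA];
                countA++;
            }
            int best = countA;
            int countB = 0;
            // then trade elements of A for elements of B, keeping the sum under the limit
            while (countB < b.length) {
                sum += b[countB];
                countB++;
                while (sum > maxSum && countA > 0) {
                    countA--;
                    sum -= a[countA];
                }
                if (sum > maxSum)
                    break;  // even with nothing taken from A, the limit is exceeded
                best = Math.max(best, countA + countB);
            }
            return best;
        }
    }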


    Here is my full solution in Java; it might not be the most elegant, but it gets the job done:

    https://github.com/MrNocTV/Algorithms/blob/master/SolvedProblems/HackerRank/src/GameOfTwoStacks.java

     
  • loctv 5:22 pm on February 27, 2019

    Hackerrank: Maximum Element

    You have an empty sequence, and you will be given N queries. Each query is one of these three types:

    1 x  -Push the element x into the stack.
    2    -Delete the element present at the top of the stack.
    3    -Print the maximum element in the stack.
    

    Input Format

    The first line of input contains an integer, N. The next N lines each contain an above mentioned query. (It is guaranteed that each query is valid.)

    Constraints
    1 <= N <= 10^5
    1 <= x <= 10^9
    1 <= type <= 3

    Output Format

    For each type 3 query, print the maximum element in the stack on a new line.

    Sample Input

    10
    1 97
    2
    1 20
    2
    1 26
    1 20
    2
    3
    1 91
    3
    

    Sample Output

    26
    91

    First thing you should notice is the constraint:
    1 <= N <= 10^5
    Most of the time, if you see the constraint is 10^5 (ten to the power of 5), it means your solution must be at most O(n log n) or better.

    Okay, since it’s marked as “Easy”, there will be no hidden tricks that you need to figure out. Basically you need to build a stack and get the biggest element in the stack as quickly as possible, so this is all about choosing a data structure.

    Obviously we need a stack, to push and pop items as required. But how can we tell which item in the stack is the biggest?

    The easiest solution is to go through the stack and find the max item every time. It works, but it’s too slow: the time complexity is roughly O(n^2) over all queries.

    Method 1:
    We can use an additional collection, like a Heap, more specifically a MaxHeap, to keep the elements ordered. Inserting an item into a heap is O(log n), and removing an item is also O(log n) once we know where it is. That gives us exactly what we want, the biggest element as fast as possible: for a MaxHeap, the top is always the biggest element. Overall this works and the total time complexity is roughly O(n log n). The hardest part is implementing the MaxHeap yourself.

    Method 2:
    This one is more “language-oriented”; it removes the “re-invent the wheel” part, so there is no need to implement a MaxHeap yourself.
    In Java we have SortedSet (for example TreeSet), which is really fast for adding items as well as for looking up the smallest/largest item. The only problem is duplicate items: a set only stores distinct values, so we have to watch out for duplicates ourselves. We can use a Map to store each item and its frequency; when we pop the last remaining copy of an item, we remove it from the set as well as from the frequency map. This is relatively fast, but it consumes extra memory: the space complexity is O(n).
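
    A minimal sketch of Method 2 (the class and method names are mine):

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeSet;

    class MaxStackWithTreeSet {
        private final Deque<Integer> stack = new ArrayDeque<>();
        private final TreeSet<Integer> sorted = new TreeSet<>();          // distinct values, ordered
        private final Map<Integer, Integer> frequency = new HashMap<>();  // value -> copies currently on the stack

        void push(int x) {
            stack.push(x);
            frequency.merge(x, 1, Integer::sum);
            sorted.add(x);
        }

        void pop() {
            int x = stack.pop();
            int count = frequency.get(x);
            if (count == 1) {            // last copy: drop it from both structures
                frequency.remove(x);
                sorted.remove(x);
            } else {
                frequency.put(x, count - 1);
            }
        }

        int max() {
            return sorted.last();        // the largest element currently on the stack
        }
    }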

    Method 3:
    This is where a bit of “algorithm” comes in handy. Say we need to push an item and also need to know the maximum value after pushing it. Instead of saving just the item, we also save the maximum value so far along with it, and we keep a global max variable holding the current maximum. The pair on top of the stack is guaranteed to carry the biggest element so far. When the stack becomes empty we need to reset the maximum value, and when popping an item off the stack we need to update the global max (if needed).
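
    A minimal sketch of Method 3 (again, the names are mine). Because each entry stores the maximum so far, the current maximum can be read straight off the new top after a pop, which takes care of the resetting/updating described above:

    import java.util.ArrayDeque;
    import java.util.Deque;

    class MaxStack {
        // each entry is {value, maximum of the stack up to and including this entry}
        private final Deque<long[]> stack = new ArrayDeque<>();

        void push(long x) {
            long max = stack.isEmpty() ? x : Math.max(x, stack.peek()[1]);
            stack.push(new long[]{x, max});
        }

        void pop() {
            stack.pop();
        }

        long max() {
            return stack.peek()[1];  // the problem guarantees queries are valid, so the stack is non-empty here
        }
    }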

    There may be more solutions for this problem but these are what I’ve come up with.

    Here’s some sample code, in case you have trouble implementing the solutions mentioned above.


    Method2:
    https://github.com/MrNocTV/Algorithms/blob/master/SolvedProblems/HackerRank/src/MaximumElement1.java

    Method3:
    https://github.com/MrNocTV/Algorithms/blob/master/SolvedProblems/HackerRank/src/MaximumElement2.java

     
  • loctv 3:58 pm on February 27, 2019

    A little talk for my comeback! 

    Hi everyone, long time no see. It’s been a long time since I last wrote any blog posts. I’ve recently quit my job and have been looking for a new one, so it’s about time to get back to problem solving.

    The best way to prepare for coding tests is to solve problems. The more problems you solve, the better and quicker you will perform. This time, though, even if I get a new job, I’ll keep practicing and writing more blog posts.

    There are also some changes. For the “Problem solving” section, from now on I will try to come up with the most elegant, succinct and easy-to-understand write-up that I can possibly think of.

    I consider myself to be an average problem solver, but no one is good without learning and practicing, right?

    I hope the content on this site can help you get ideas for solving some problems. I will also showcase some of the projects I’ve done; feel free to ping me for the source code. To be honest, the current projects on this site are quite “dumb”: they are my university projects, most of them use outdated technology, and back then I didn’t even know how to use git and GitHub, so I saved my code on Google Drive :V

    Please consider donating if you feel it’s worth it:
    https://www.paypal.me/TruongLoc

     

     

     
  • loctv 8:54 pm on January 3, 2018

    The Power Sum 

    Find the number of ways that a given integer, X, can be expressed as the sum of the Nth powers of unique, natural numbers.

    Input format:
    The first line contains an integer X
    The second line contains an integer N

    Constraints:
    1 <= X <= 1000
    2 <= N <= 10

    Output format:
    Output a single integer, the answer to the problem explained above.

    Sample Input 0

    10
    2
    

    Sample Output 0

    1
    

    Explanation 0

    If X = 10 and N = 2, we need to find the number of ways that 10 can be represented as the sum of squares of unique numbers.
    10 = 1^2 + 3^2
    This is the only way in which 10 can be expressed as the sum of unique squares.

    Sample Input 1

    100
    2

    Sample Output 1

    3

    Explanation 1

    100 = 10^2 = 6^2 + 8^2 = 1^2 + 3^2 + 4^2 + 5^2 + 7^2

    Sample Input 2

    100
    3

    Sample Output 2

    1

    Explanation 2

    100 can be expressed as the sum of the cubes of 1,2,3,4.
    (1 + 8 + 27 + 64) = 100. There is no other way to express 100 as the sum of cubes.

    Approach 1:
    With X = 10, and N = 2, this approach will generate sequences like:
    [1]
    [1, 4]
    [1, 4, 9]
    [1, 9]
    [4]
    [4, 9]
    [9]
    [9, 4]
    and you can easily spot the problem with this approach: it produces duplicated sequences. Here is the code:
    import math

    def powerSum1(X, N):
        max_base = int(math.sqrt(X))
        sequence = list()
        ways = set()
        base = 1
        _recursive_sum1(X, N, base, max_base, sequence, ways)
        return len(ways)
        
    def _recursive_sum1(X, N, base, max_base, sequence, ways):
        if sum(sequence) == X:
            # if set(sequence) not in ways:
            ways.add(tuple(sorted(sequence)))
            return False
        elif sum(sequence) > X:
            return False
        else:
            for i in range(base, max_base+1):
                tmp_val = i**N
                if tmp_val not in sequence:
                    sequence.append(tmp_val)
                    print(sequence)
                    if _recursive_sum1(X, N, base+1, max_base, sequence, ways):
                        return True
                    sequence.pop()
            return False
    
    If you uncomment that if statement (and indent the ways.add(...) line one level to the right), it still works correctly, but only for small enough inputs, because it generates all possible orderings of a sequence instead of only the increasing ones.
    Approach 2:
    This recursion generates non-repeating sequences whose elements are strictly increasing:
    [1]
    [1, 2]
    [1, 2, 3]
    [1, 2, 3, 4]
    [1, 2, 3, 4, 5]
    [1, 2, 3, 4, 5, 6]
    [1, 2, 3, 4, 6]
    [1, 2, 3, 4, 7]
    [1, 2, 3, 4, 8]
    [1, 2, 3, 5]
    [1, 2, 3, 5, 6]
    [1, 2, 3, 5, 7]
    [1, 2, 3, 6]
    [1, 2, 3, 6, 7]
    [1, 2, 3, 7]
    [1, 2, 3, 8]
    [1, 2, 3, 9]
    [1, 2, 4]

    Here is the code:

    def powerSum(X, N):
        sequence = list()
        base = 1
        look_up = [0]*(101)
        for i in range(1, 101):
            look_up[i] = i**N
        cur_sum = 0
        ways = _recursive_sum(X, N, base, look_up, cur_sum, sequence)
        return ways
    
    def _recursive_sum(X, N, base, look_up, cur_sum, seq):
        ways = 0
        tmp_val = look_up[base]
        
        while cur_sum + tmp_val < X:
            seq.append(base)
            print(seq)
            ways += _recursive_sum(X, N, base+1, look_up, cur_sum+tmp_val, seq)
            seq.pop()
            base += 1
            tmp_val = look_up[base]
    
        if cur_sum + tmp_val == X:
            ways += 1
            seq.append(base)
            seq.pop()
        return ways
    

     

     
  • loctv 4:21 pm on January 3, 2018

    Problem: Connected Cells in a Grid (Python3) 

    Consider a matrix with n rows and m columns, where each cell contains either a 0 or a 1 and any cell containing a 1 is called a filled cell. Two cells are said to be connected if they are adjacent to each other horizontally, vertically, or diagonally; in other words, cell [i][j] is connected to cells [i-1][j-1], [i-1][j], [i-1][j+1], [i][j+1], [i+1][j+1], [i+1][j], [i+1][j-1], and [i][j-1], provided that the location exists in the matrix for that [i][j].

    If one or more filled cells are also connected, they form a region. Note that each cell in a region is connected to zero or more cells in the region but is not necessarily directly connected to all the other cells in the region.

    Task
    Given an nxm matrix, find and print the number of cells in the largest region in the matrix. Note that there may be more than one region in the matrix.

    Input Format

    The first line contains an integer, n, denoting the number of rows in the matrix.
    The second line contains an integer, m, denoting the number of columns in the matrix.
    Each line i of the n subsequent lines contains m space-separated integers describing the respective values filling each row in the matrix.

    Constraints

     0 < n,m < 10

    Output Format

    Print the number of cells in the largest region in the given matrix.

    Sample Input

    4
    4
    1 1 0 0
    0 1 1 0
    0 0 1 0
    1 0 0 0
    

    Sample Output

    5
    

    Explanation

    The diagram below depicts two regions of the matrix; for each region, the component cells forming the region are marked with an X:

    X X 0 0     1 1 0 0
    0 X X 0     0 1 1 0
    0 0 X 0     0 0 1 0
    1 0 0 0     X 0 0 0
    

    The first region has five cells and the second region has one cell. Because we want to print the number of cells in the largest region of the matrix, we print 5.

    Approach:
    Since the constraints are small, we can use a brute-force algorithm. The whole algorithm can be described in three lines.

    With each cell in the grid:
    —- if it is 1 and not visited:
    ——– get all of it neighbors into a group (if any)

    Now we need to implement the hard part. We need a groups list to store the list of groups (this is optional; you could store just each group’s size), and a collection to store visited cells (this should be a set, because checking whether a value is in a set takes O(1) time in Python). From a given cell we explore all 8 directions, calling the method recursively until we can’t explore any further. The algorithm has time complexity O(n*m), since we never revisit a cell that has already been added to a group; each cell is guaranteed to be visited only once.

     

    EIGHT_DIRECTIONS = [
        (-1, -1),
        (-1, 0),
        (-1, 1),
        (0, 1),
        (1, 1),
        (1, 0),
        (1, -1),
        (0, -1)
    ]
    
    def explore(grid, x, y, visited, groups):
        groups[-1].append((x, y))
        visited.add((x, y))
        for i, j in EIGHT_DIRECTIONS:
            next_x = x + i
            next_y = y + j
            if in_bound(grid, next_x, next_y) and \
               grid[next_x][next_y] == 1 and (next_x, next_y) not in visited:
               explore(grid, next_x, next_y, visited, groups)
    
    def in_bound(grid, i, j):
        return i >= 0 and i < len(grid) and \
               j >= 0 and j < len(grid[0])
    
    def connected_cells(grid, visited, groups):
        for i in range(len(grid)):
            for j in range(len(grid[i])):
                if grid[i][j] == 1 and (i,j) not in visited:
                    groups.append(list())
                    explore(grid, i, j, visited, groups)
    
    def largest_region(groups):
        return max(len(group) for group in groups)
    
    if __name__ == '__main__':
        n = int(input())
        m = int(input())
        grid = list()
        for i in range(n):
            line = input()
            grid.append([int(x) for x in line.strip().split()])
        groups = list()
        visited = set()
        connected_cells(grid, visited, groups)
        # for group in groups:
        #     print(group)
        print(largest_region(groups))
    
     