Python SQLite - Creating a New Database - GeeksforGeeks
20 May, 2021

In this article, we will discuss how to create a database in SQLite using Python. You do not need any special permissions to create a database. The sqlite3 command used to create a database has the following basic syntax.

Syntax: $ sqlite3 <database_name_with_db_extension>

The database name must always be unique in the RDBMS.

Example: When we create an SQLite database from the shell, we can create the same database in Python using the sqlite3 module.

Python3

import sqlite3

# filename to form database
file = "Sqlite3.db"

try:
    conn = sqlite3.connect(file)
    print("Database Sqlite3.db formed.")
except sqlite3.Error:
    print("Database Sqlite3.db not formed.")

Output:

Database Sqlite3.db formed.

Creating a brand-new SQLite database is as easy as opening a connection using the sqlite3 module in the Python standard library. To establish a connection, all you have to do is pass the file path to the connect(...) method of the sqlite3 module. If the database represented by the file does not exist, it will be created under this path.

import sqlite3
connection = sqlite3.connect(<path_to_file_db.sqlite3>)

Let's look at the connect() method in more detail.

Syntax: sqlite3.connect(database [, timeout, other optional arguments])

This API opens a connection to the SQLite database file. Use ":memory:" to set up a connection to a database held in RAM instead of on disk. A Connection object is returned when the database is opened correctly. A database can be accessed through multiple connections; while one connection is modifying the database, other writers are blocked until that transaction is committed. The timeout parameter specifies how long a connection should wait for the lock to be released before raising an exception; it defaults to 5.0 (five seconds). If the specified database file does not exist, this call will create it.
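The connect() behaviour described above can be exercised end to end. Below is a minimal sketch (an addition, not part of the original article) that uses the ":memory:" path and an explicit 5-second timeout, so no file is left on disk:

```python
import sqlite3

# Open a connection to an in-memory database; nothing touches disk.
# The timeout argument (in seconds) controls how long the call waits
# for a database lock to be released before raising an exception.
connection = sqlite3.connect(":memory:", timeout=5.0)

# Confirm the connection works by asking SQLite for its version string.
version = connection.execute("SELECT sqlite_version()").fetchone()[0]
print("Connected, SQLite version:", version)

connection.close()
```

Replacing ":memory:" with a file path gives the on-disk behaviour discussed above: the file is created if it does not already exist.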
If we want to create the database file in a location other than the current directory, we can specify the desired path as part of the file name.

1. Create a connection between the sqlite3 database and the Python program:

sqliteConnection = sqlite3.connect('SQLite_Retrieving_data.db')

2. If sqlite3 makes a connection with the Python program, it will print "Connected to SQLite"; otherwise it will show an error:

print("Connected to SQLite")

3. If the connection is open, we need to close it. The closing code is placed inside the finally block. We use the close() method to close the connection object, and after closing it we print "the sqlite connection is closed":

if sqliteConnection:
    sqliteConnection.close()
    print("the sqlite connection is closed")

Python3

# Importing sqlite3 module
import sqlite3

try:
    # Making a connection between the sqlite3 database and the Python program
    sqliteConnection = sqlite3.connect('SQLite_Retrieving_data.db')

    # If sqlite3 makes a connection with the Python program,
    # it will print "Connected to SQLite"
    print("Connected to SQLite")
except sqlite3.Error as error:
    print("Failed to connect with sqlite3 database", error)
finally:
    # Inside the finally block: if the connection is open, close it
    if sqliteConnection:
        # Using the close() method, we close the connection
        sqliteConnection.close()
        # After closing the connection object, print a confirmation
        print("the sqlite connection is closed")

Output:

Connected to SQLite
the sqlite connection is closed
Valid Mountain Array in Python
Suppose we have an array A of integers; we have to check whether it is a valid mountain array or not. A is a mountain array if and only if it satisfies the following conditions:

- the size of A is >= 3
- there exists some index i in A such that A[0] < A[1] < ... < A[i-1] < A[i] and A[i] > A[i+1] > ... > A[len(A) - 1]

So, if the input is like [0,3,2,1], then the output will be True.

To solve this, we will follow these steps:

- if size of A < 3, then return False
- i := 1
- while i < size of A and A[i] > A[i-1], do i := i + 1
- if i is 1 or i is the size of A, then return False
- while i < size of A and A[i] < A[i-1], do i := i + 1
- return True when i equals the size of A

Let us see the following implementation to get a better understanding:

class Solution:
    def validMountainArray(self, A):
        if len(A) < 3:
            return False
        i = 1
        # climb while strictly increasing
        while i < len(A) and A[i] > A[i - 1]:
            i += 1
        # the peak cannot be the first or the last element
        if i == 1 or i == len(A):
            return False
        # descend while strictly decreasing
        while i < len(A) and A[i] < A[i - 1]:
            i += 1
        return i == len(A)

ob = Solution()
print(ob.validMountainArray([0, 3, 2, 1]))

Input:
[0,3,2,1]

Output:
True
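The boundary conditions above (size >= 3, a strict climb, a strict descent, no plateau) are easy to get wrong, so here is a standalone sketch exercising them; the function name valid_mountain is ours, not the article's:

```python
def valid_mountain(A):
    # Strictly climb, then strictly descend; plateaus break the mountain.
    i, n = 1, len(A)
    if n < 3:
        return False
    while i < n and A[i] > A[i - 1]:
        i += 1
    if i == 1 or i == n:          # never climbed, or never descended
        return False
    while i < n and A[i] < A[i - 1]:
        i += 1
    return i == n

# Edge cases implied by the conditions above:
print(valid_mountain([0, 3, 2, 1]))   # True: climbs to 3, then descends
print(valid_mountain([3, 2, 1]))      # False: no ascending part
print(valid_mountain([0, 1, 2]))      # False: no descending part
print(valid_mountain([1, 2, 2, 1]))   # False: plateau at the peak
```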
Find farthest node from each node in Tree - GeeksforGeeks
09 Aug, 2021

Given a tree, the task is to find, for each node, the node farthest from it in the tree.

Example:

Input: Given adjacency list of the tree below
Output:
Farthest node from node 1: 6
Farthest node from node 2: 6
Farthest node from node 3: 6
Farthest node from node 4: 6
Farthest node from node 5: 1
Farthest node from node 6: 1

Input:
Output:
Farthest node from node 1: 4
Farthest node from node 2: 7
Farthest node from node 3: 4
Farthest node from node 4: 7
Farthest node from node 5: 7
Farthest node from node 6: 4
Farthest node from node 7: 4

Approach: First, find the two end vertices of the diameter. To do that, pick an arbitrary vertex and find the node farthest from it; that node is one end of the diameter. Then root the tree at that node and find the node farthest from it, which is the other end of the diameter. Now, for each node, the farthest node is one of these two end vertices of the diameter of the tree.

Why does it work? Let x and y be the two end vertices of the diameter of the tree, and let u be an arbitrary vertex. Suppose the farthest vertex from u were some v other than x and y. Then a path ending at x, v or y, v would be longer than the diameter, contradicting the maximality of the diameter's length. Hence the farthest vertex from u must be x or y.
Below is the implementation of the above approach:

C++

// C++ implementation to find the
// farthest node from each vertex
// of the tree
#include <bits/stdc++.h>
using namespace std;

#define N 10000

// Adjacency list to store edges
vector<int> adj[N];

int lvl[N], dist1[N], dist2[N];

// Add edge between U and V in tree
void AddEdge(int u, int v)
{
    // Edge from U to V
    adj[u].push_back(v);

    // Edge from V to U
    adj[v].push_back(u);
}

int end1, end2, maxi;

// DFS to find the first end node of the diameter
void findFirstEnd(int u, int p)
{
    // Calculating level of nodes
    lvl[u] = 1 + lvl[p];
    if (lvl[u] > maxi) {
        maxi = lvl[u];
        end1 = u;
    }
    for (int i = 0; i < adj[u].size(); i++) {
        // Go in the opposite direction of the parent
        if (adj[u][i] != p)
            findFirstEnd(adj[u][i], u);
    }
}

// Function to clear the levels of the nodes
void clear(int n)
{
    // Set all values of lvl[] to 0 for the next DFS
    for (int i = 0; i <= n; i++)
        lvl[i] = 0;

    // Reset the maximum
    maxi = 0;
    dist1[0] = dist2[0] = -1;
}

// DFS to find the second end of the diameter
void findSecondEnd(int u, int p)
{
    lvl[u] = 1 + lvl[p];
    if (lvl[u] > maxi) {
        maxi = lvl[u];

        // Store the node with maximum depth from end1
        end2 = u;
    }
    for (int i = 0; i < adj[u].size(); i++) {
        if (adj[u][i] != p)
            findSecondEnd(adj[u][i], u);
    }
}

// Distance of every node from the first end of the diameter
void findDistancefromFirst(int u, int p)
{
    dist1[u] = 1 + dist1[p];
    for (int i = 0; i < adj[u].size(); i++) {
        if (adj[u][i] != p)
            findDistancefromFirst(adj[u][i], u);
    }
}

// Distance of every node from the second end of the diameter
void findDistancefromSecond(int u, int p)
{
    dist2[u] = 1 + dist2[p];
    for (int i = 0; i < adj[u].size(); i++) {
        if (adj[u][i] != p)
            findDistancefromSecond(adj[u][i], u);
    }
}

void findNodes()
{
    int n = 5;

    // Joining edges between nodes of the tree
    AddEdge(1, 2);
    AddEdge(1, 3);
    AddEdge(3, 4);
    AddEdge(3, 5);

    // Find one end of the diameter of the tree
    findFirstEnd(1, 0);
    clear(n);

    // Find the other end of the diameter
    findSecondEnd(end1, 0);

    // Distances of each node from the two ends
    findDistancefromFirst(end1, 0);
    findDistancefromSecond(end2, 0);

    for (int i = 1; i <= n; i++) {
        int x = dist1[i];
        int y = dist2[i];

        // Print the farther of the two diameter ends
        if (x >= y)
            cout << end1 << ' ';
        else
            cout << end2 << ' ';
    }
}

// Driver code
int main()
{
    findNodes();
    return 0;
}

Python3

# Python3 implementation to find the
# farthest node from each vertex of the tree

# Add edge between U and V in tree
def AddEdge(u, v):
    adj[u].append(v)
    adj[v].append(u)

# DFS to find the first end node of the diameter
def findFirstEnd(u, p):
    global end1, maxi
    # Calculating level of nodes
    lvl[u] = 1 + lvl[p]
    if lvl[u] > maxi:
        maxi = lvl[u]
        end1 = u
    for v in adj[u]:
        # Go in the opposite direction of the parent
        if v != p:
            findFirstEnd(v, u)

# Function to clear the levels of the nodes
def clear(n):
    global maxi
    for i in range(n + 1):
        lvl[i] = 0
    maxi = 0
    dist1[0] = dist2[0] = -1

# DFS to find the second end of the diameter
def findSecondEnd(u, p):
    global end2, maxi
    lvl[u] = 1 + lvl[p]
    if lvl[u] > maxi:
        maxi = lvl[u]
        # Store the node with maximum depth from end1
        end2 = u
    for v in adj[u]:
        if v != p:
            findSecondEnd(v, u)

# Distance of every node from the first end of the diameter
def findDistancefromFirst(u, p):
    dist1[u] = 1 + dist1[p]
    for v in adj[u]:
        if v != p:
            findDistancefromFirst(v, u)

# Distance of every node from the second end of the diameter
def findDistancefromSecond(u, p):
    dist2[u] = 1 + dist2[p]
    for v in adj[u]:
        if v != p:
            findDistancefromSecond(v, u)

def findNodes():
    n = 5

    # Joining edges between nodes of the tree
    AddEdge(1, 2)
    AddEdge(1, 3)
    AddEdge(3, 4)
    AddEdge(3, 5)

    # Find one end of the diameter of the tree
    findFirstEnd(1, 0)
    clear(n)

    # Find the other end of the diameter
    findSecondEnd(end1, 0)

    # Distances of each node from the two ends
    findDistancefromFirst(end1, 0)
    findDistancefromSecond(end2, 0)

    for i in range(1, n + 1):
        # Print the farther of the two diameter ends
        if dist1[i] >= dist2[i]:
            print(end1, end=" ")
        else:
            print(end2, end=" ")

# Driver code
if __name__ == '__main__':
    adj = [[] for i in range(10000)]
    lvl = [0 for i in range(10000)]
    dist1 = [-1 for i in range(10000)]
    dist2 = [-1 for i in range(10000)]
    end1, end2, maxi = 0, 0, 0

    # Function call
    findNodes()

Output:

4 4 2 2 2

Time Complexity: O(V + E), where V is the number of vertices and E is the number of edges.
Auxiliary Space: O(V + E)
Phone Number validation using Java Regular Expressions
The phone number can be validated using the java.util.regex.Pattern.matches() method. This method matches the given input phone number against the regular expression for phone numbers and returns true if they match and false otherwise.

Note: We are considering a demo number for our example, since we cannot use a real phone number publicly.

A program that demonstrates this is given as follows:

public class Demo {
   public static void main(String args[]) {
      String phoneNumber = "9999999998";
      String regex = "(0|91)?[7-9][0-9]{9}";
      System.out.println("The phone number is: " + phoneNumber);
      System.out.println("Is the above phone number valid? " + phoneNumber.matches(regex));
   }
}

Output:

The phone number is: 9999999998
Is the above phone number valid? true

Now let us understand the above program.

The phone number is printed. The regex "(0|91)?[7-9][0-9]{9}" accepts an optional "0" or "91" prefix, followed by a 10-digit number whose first digit is between 7 and 9. String.matches() (which delegates to Pattern.matches()) checks the input phone number against this regular expression and the result is printed. A code snippet which demonstrates this is as follows:

String phoneNumber = "9999999998";
String regex = "(0|91)?[7-9][0-9]{9}";
System.out.println("The phone number is: " + phoneNumber);
System.out.println("Is the above phone number valid? " + phoneNumber.matches(regex));
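For comparison, the same validation can be sketched with Python's re module. This is an addition, not part of the original article; the pattern mirrors the intent described above (an optional "0" or "91" prefix, then a 10-digit number starting with 7, 8, or 9), and the function name is ours:

```python
import re

# Optional "0" or "91" prefix, then a 10-digit number starting with 7, 8, or 9.
PATTERN = re.compile(r"(0|91)?[7-9][0-9]{9}")

def is_valid_phone(number: str) -> bool:
    # fullmatch requires the entire string to match,
    # just like String.matches() does in Java.
    return PATTERN.fullmatch(number) is not None

print(is_valid_phone("9999999998"))    # True
print(is_valid_phone("919999999998"))  # True: "91" prefix accepted
print(is_valid_phone("1234567890"))    # False: first digit not in 7-9
```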
Compare *ptr++, *++ptr and ++*ptr in C++
In this section, we will see the differences between *ptr++, *++ptr and ++*ptr in C++. The differences come down to the precedence of postfix ++ and prefix ++ in C and C++. Postfix ++ has higher precedence than the dereference operator '*', while prefix ++ and '*' have the same precedence and group right to left. So when ptr is a pointer, *ptr++ means *(ptr++), *++ptr means *(++ptr), and ++*ptr means ++(*ptr).

#include<iostream>
using namespace std;

int main() {
   char arr[] = "Hello World";
   char *ptr = arr;
   ++*ptr;
   cout << *ptr;
   return 0;
}

Output:

I

Here ptr initially points to 'H'. ++*ptr increments the character 'H' by 1, so the value becomes 'I'.

#include<iostream>
using namespace std;

int main() {
   char arr[] = "Hello World";
   char *ptr = arr;
   *ptr++;
   cout << *ptr;
   return 0;
}

Output:

e

Here ptr initially points to 'H'. *ptr++ increments the pointer (the dereference of the old value is discarded), so ptr now points to the next element and the result is 'e'.

#include<iostream>
using namespace std;

int main() {
   char arr[] = "Hello World";
   char *ptr = arr;
   *++ptr;
   cout << *ptr;
   return 0;
}

Output:

e

In this example we also increment ptr, but with prefix ++: the pointer is incremented first and then dereferenced with *, so it prints 'e'.
Count number of squares in a rectangle in C++
We are given a rectangle of length L and breadth B, such that L >= B. The goal is to find the number of squares that a rectangle of size LXB can accommodate.

The figure above shows a rectangle of size 3 X 2. It has 2 squares of size 2X2 and 6 squares of size 1X1. Total squares = 6 + 2 = 8.

- Every rectangle of size LXB has L*B squares of size 1X1.
- The biggest squares are of size BXB.
- For L=B=1, squares = 1.
- For L=B=2, squares = 1 + 4 = 5. ( 1 of 2X2, 4 of 1X1 )
- For L=B=3, squares = 1 + 4 + 9 = 14. ( 1 of 3X3, 4 of 2X2, 9 of 1X1 )
- For L=B=4, squares = 1 + 4 + 9 + 16 = 30. ( 1 of 4X4, 4 of 3X3, 9 of 2X2, 16 of 1X1 )
- .................
- For every BXB rectangle, the number of squares is: for ( i=1 to B ) squares += i*i.
- When L>B, more squares are added. For L=B+1 ( 1 extra column ), the squares added are B + (B-1) + ... + 3 + 2 + 1 = B(B+1)/2. So for the extra L-B columns, the squares added are (L-B) * B(B+1)/2.
- Total squares = squares in BXB + (L-B) * B(B+1)/2.
- You can also use the formula B(B+1)(2B+1)/6 for the series (1 + 4 + 9 + ... + BXB) in the step above.

Let's understand with examples.

Input − L=4, B=2
Output − Count of squares in rectangle − 11
Explanation − 8 squares of 1X1 and 3 of 2X2.

Input − L=3, B=3
Output − Count of squares in rectangle − 14
Explanation − 9 squares of 1X1, 4 of 2X2 and 1 of 3X3.

We take integers length and breadth for the dimensions of the rectangle.

The function numofSquares(int l, int b) takes the dimensions and returns the number of squares in the rectangle of size lXb.

For the biggest square bXb, use a for loop from 1 to b and add each i*i to squares.

If l>b, the newly added squares are (l-b)*b*(b+1)/2. Add this to squares.

Return squares as the desired result.

Note − keep length >= breadth.

#include<iostream>
using namespace std;

int numofSquares(int l, int b){
   int squares = 0;
   // squares inside the biggest square of size breadth X breadth
   for(int i = 1; i <= b; i++){
      squares += i*i;
   }
   // squares contributed by the extra (l-b) columns
   squares += (l-b)*b*(b+1)/2;
   return squares;
}

int main(){
   int length = 5, breadth = 4;   // keep length always >= breadth
   cout << "Count of squares is :" << numofSquares(length, breadth);
}

If we run the above code it will generate the following output −

Count of squares is :40
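The closed-form count can be cross-checked by brute force: a k x k square's top-left corner can sit in (l-k+1)*(b-k+1) positions, so summing over k counts every square. This verification sketch is an addition, not part of the original article, and the function names are ours:

```python
def count_squares(l, b):
    # Closed form: sum of i*i for the b x b part,
    # plus (l - b) * b * (b + 1) / 2 for the extra columns.
    squares = sum(i * i for i in range(1, b + 1))
    squares += (l - b) * b * (b + 1) // 2
    return squares

def count_squares_brute(l, b):
    # A k x k square's top-left corner has (l - k + 1) * (b - k + 1) positions.
    return sum((l - k + 1) * (b - k + 1) for k in range(1, min(l, b) + 1))

# Cross-check the formula against enumeration for the article's examples:
for l, b in [(4, 2), (3, 3), (5, 4)]:
    assert count_squares(l, b) == count_squares_brute(l, b)

print(count_squares(5, 4))  # 40, matching the program output above
```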
How to remove duplicate values from a MySQL table using LEFT JOIN?
Let us first create a table −

mysql> create table DemoTable
(
   Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
   FirstName varchar(100)
);
Query OK, 0 rows affected (0.46 sec)

Insert some records in the table using the insert command −

mysql> insert into DemoTable(FirstName) values('Chris');
Query OK, 1 row affected (0.18 sec)
mysql> insert into DemoTable(FirstName) values('Robert');
Query OK, 1 row affected (0.12 sec)
mysql> insert into DemoTable(FirstName) values('Robert');
Query OK, 1 row affected (0.07 sec)
mysql> insert into DemoTable(FirstName) values('John');
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable(FirstName) values('John');
Query OK, 1 row affected (0.11 sec)
mysql> insert into DemoTable(FirstName) values('Mike');
Query OK, 1 row affected (0.09 sec)

Display all records from the table using the select statement −

mysql> select *from DemoTable;

This will produce the following output −

+----+-----------+
| Id | FirstName |
+----+-----------+
| 1  | Chris     |
| 2  | Robert    |
| 3  | Robert    |
| 4  | John      |
| 5  | John      |
| 6  | Mike      |
+----+-----------+
6 rows in set (0.00 sec)

Following is the query to remove duplicate values from the MySQL table −

mysql> delete tbl from DemoTable tbl
   left join
   (
      select min(Id) as Id, FirstName
      from DemoTable
      group by FirstName
   ) tbl1
   ON tbl.Id = tbl1.Id AND tbl.FirstName = tbl1.FirstName
   where tbl1.Id IS NULL;
Query OK, 2 rows affected (0.16 sec)

Let us check the table records once again −

mysql> select *from DemoTable;

This will produce the following output −

+----+-----------+
| Id | FirstName |
+----+-----------+
| 1  | Chris     |
| 2  | Robert    |
| 4  | John      |
| 6  | Mike      |
+----+-----------+
4 rows in set (0.00 sec)
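The multi-table DELETE ... LEFT JOIN form above is MySQL-specific (SQLite, for one, does not support joins in DELETE). The same "keep the lowest Id per name" rule can be sketched portably with a NOT IN subquery; the following is an illustrative stand-in using Python's built-in sqlite3 module, not the article's MySQL session:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE DemoTable ("
    " Id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " FirstName TEXT)"
)
conn.executemany(
    "INSERT INTO DemoTable(FirstName) VALUES (?)",
    [("Chris",), ("Robert",), ("Robert",), ("John",), ("John",), ("Mike",)],
)

# Keep only the row with the smallest Id for each FirstName.
conn.execute(
    "DELETE FROM DemoTable"
    " WHERE Id NOT IN (SELECT MIN(Id) FROM DemoTable GROUP BY FirstName)"
)

rows = conn.execute("SELECT Id, FirstName FROM DemoTable ORDER BY Id").fetchall()
print(rows)  # [(1, 'Chris'), (2, 'Robert'), (4, 'John'), (6, 'Mike')]
conn.close()
```

The subquery picks one surviving Id per name, exactly what the LEFT JOIN derived table does in the MySQL version.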
Bitwise Operators in C/C++ - GeeksforGeeks
27 Apr, 2022

In C, the following 6 operators are bitwise operators (they work at the bit level):

- The & (bitwise AND) in C or C++ takes two numbers as operands and does AND on every bit of the two numbers. The result of AND is 1 only if both bits are 1.
- The | (bitwise OR) in C or C++ takes two numbers as operands and does OR on every bit of the two numbers. The result of OR is 1 if either of the two bits is 1.
- The ^ (bitwise XOR) in C or C++ takes two numbers as operands and does XOR on every bit of the two numbers. The result of XOR is 1 if the two bits are different.
- The << (left shift) in C or C++ takes two numbers and left shifts the bits of the first operand; the second operand decides the number of places to shift.
- The >> (right shift) in C or C++ takes two numbers and right shifts the bits of the first operand; the second operand decides the number of places to shift.
- The ~ (bitwise NOT) in C or C++ takes one number and inverts all of its bits.
Example:

C++

#include <iostream>
using namespace std;

int main()
{
    // a = 5(00000101), b = 9(00001001)
    int a = 5, b = 9;

    cout << "a = " << a << ", b = " << b << endl;

    // The result is 00000001
    cout << "a & b = " << (a & b) << endl;

    // The result is 00001101
    cout << "a | b = " << (a | b) << endl;

    // The result is 00001100
    cout << "a ^ b = " << (a ^ b) << endl;

    // The result is 11111010
    cout << "~a = " << (~a) << endl;

    // The result is 00010010
    cout << "b << 1 = " << (b << 1) << endl;

    // The result is 00000100
    cout << "b >> 1 = " << (b >> 1) << endl;
    return 0;
}

C

// C Program to demonstrate use of bitwise operators
#include <stdio.h>

int main()
{
    // a = 5(00000101), b = 9(00001001)
    unsigned char a = 5, b = 9;

    printf("a = %d, b = %d\n", a, b);

    // The result is 00000001
    printf("a&b = %d\n", a & b);

    // The result is 00001101
    printf("a|b = %d\n", a | b);

    // The result is 00001100
    printf("a^b = %d\n", a ^ b);

    // The result is 11111010
    printf("~a = %d\n", ~a);

    // The result is 00010010
    printf("b<<1 = %d\n", b << 1);

    // The result is 00000100
    printf("b>>1 = %d\n", b >> 1);
    return 0;
}

Output:

a = 5, b = 9
a & b = 1
a | b = 13
a ^ b = 12
~a = -6
b << 1 = 18
b >> 1 = 4

Interesting facts about bitwise operators

The left shift and right shift operators should not be used for negative numbers. If the second operand (which decides the number of shifts) is a negative number, it results in undefined behaviour in C. For example, the results of both 1 << -1 and 1 >> -1 are undefined. Also, if the number is shifted more than the size of the integer, the behaviour is undefined. For example, 1 << 33 is undefined if integers are stored using 32 bits. Another thing is, NO shift operation is performed if the additive-expression (the operand that decides the number of shifts) is 0. Note: In C++, this behavior is well-defined.

The bitwise XOR operator is the most useful operator from a technical interview perspective. It is used in many problems. A simple example could be "Given a set of numbers where all elements occur an even number of times except one number, find the odd occurring number". This problem can be efficiently solved by just doing XOR of all numbers.
C++

#include <iostream>
using namespace std;

// Function to return the only odd
// occurring element
int findOdd(int arr[], int n)
{
    int res = 0, i;
    for (i = 0; i < n; i++)
        res ^= arr[i];
    return res;
}

// Driver Method
int main(void)
{
    int arr[] = { 12, 12, 14, 90, 14, 14, 14 };
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << "The odd occurring element is " << findOdd(arr, n);
    return 0;
}

C

#include <stdio.h>

// Function to return the only odd
// occurring element
int findOdd(int arr[], int n)
{
    int res = 0, i;
    for (i = 0; i < n; i++)
        res ^= arr[i];
    return res;
}

// Driver Method
int main(void)
{
    int arr[] = { 12, 12, 14, 90, 14, 14, 14 };
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("The odd occurring element is %d ", findOdd(arr, n));
    return 0;
}

Output:

The odd occurring element is 90

The following are many other interesting problems using the XOR operator:

Find the Missing Number
Swap two numbers without using a temporary variable
A Memory Efficient Doubly Linked List
Find the two non-repeating elements
Find the two numbers with odd occurrences in an unsorted array
Add two numbers without using arithmetic operators
Swap bits in a given number
Count number of bits to be flipped to convert a to b
Find the element that appears once
Detect if two integers have opposite signs

The bitwise operators should not be used in place of logical operators. The result of logical operators (&&, || and !) is either 0 or 1, but bitwise operators return an integer value. Also, the logical operators consider any non-zero operand as 1.
For example, consider the following program; the results of & and && are different for the same operands.

C++

#include <iostream>
using namespace std;

int main()
{
    int x = 2, y = 5;
    (x & y) ? cout << "True " : cout << "False ";
    (x && y) ? cout << "True " : cout << "False ";
    return 0;
}

C

#include <stdio.h>

int main()
{
    int x = 2, y = 5;
    (x & y) ? printf("True ") : printf("False ");
    (x && y) ? printf("True ") : printf("False ");
    return 0;
}

Output:

False True

The left-shift and right-shift operators are equivalent to multiplication and division by 2 respectively. As mentioned in point 1, this works only if the numbers are positive.

C++

#include <iostream>
using namespace std;

int main()
{
    int x = 19;
    cout << "x << 1 = " << (x << 1) << endl;
    cout << "x >> 1 = " << (x >> 1) << endl;
    return 0;
}

C

#include <stdio.h>

int main()
{
    int x = 19;
    printf("x << 1 = %d\n", x << 1);
    printf("x >> 1 = %d\n", x >> 1);
    return 0;
}

Output:

x << 1 = 38
x >> 1 = 9

The & operator can be used to quickly check if a number is odd or even. The value of the expression (x & 1) would be non-zero only if x is odd, otherwise the value would be zero.

C++

#include <iostream>
using namespace std;

int main()
{
    int x = 19;
    (x & 1) ? cout << "Odd" : cout << "Even";
    return 0;
}

C

#include <stdio.h>

int main()
{
    int x = 19;
    (x & 1) ? printf("Odd") : printf("Even");
    return 0;
}

Output:

Odd

The ~ operator should be used carefully. The result of the ~ operator on a small number can be a big number if the result is stored in an unsigned variable.
And the result may be a negative number if the result is stored in a signed variable (assuming that the negative numbers are stored in 2's complement form, where the leftmost bit is the sign bit).

C++

#include <iostream>
using namespace std;

int main()
{
    unsigned int x = 1;
    signed int a = 1;
    cout << "Signed Result " << ~a << endl;
    cout << "Unsigned Result " << ~x;
    return 0;
}

C

// Note that the output of the following
// program is compiler dependent
#include <stdio.h>

int main()
{
    unsigned int x = 1;
    printf("Signed Result %d \n", ~x);
    printf("Unsigned Result %u \n", ~x);
    return 0;
}

Output:

Signed Result -2
Unsigned Result 4294967294

Bits manipulation (Important tactics)
Bitwise Hacks for Competitive Programming
Bit Tricks for Competitive Programming
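Two of the XOR-based tricks listed above — swapping two numbers without a temporary variable, and detecting whether two integers have opposite signs — are small enough to sketch directly. The sketch below uses Python for brevity (the same expressions work bit-for-bit in C/C++); the variable names and the helper `opposite_signs` are illustrative, not from the article.

```python
# Swap two numbers without a temporary variable using XOR.
a, b = 5, 9
a = a ^ b   # a now holds a XOR b
b = a ^ b   # (a ^ b) ^ b == a, so b now holds the original a
a = a ^ b   # (a ^ b) ^ a == b, so a now holds the original b
print(a, b)  # 9 5

# Detect if two integers have opposite signs: x ^ y is negative
# exactly when the sign bits of x and y differ.
def opposite_signs(x, y):
    return (x ^ y) < 0

print(opposite_signs(4, -7))  # True
print(opposite_signs(4, 7))   # False
```

Both tricks rely only on the XOR identities x ^ x == 0 and x ^ 0 == x, so no extra storage is needed.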
[ { "code": null, "e": 25080, "s": 25052, "text": "\n27 Apr, 2022" }, { "code": null, "e": 25156, "s": 25080, "text": "In C, the following 6 operators are bitwise operators (work at bit-level) " }, { "code": null, "e": 26004, "s": 25156, "text": "The & (bitwise...
sr-only Bootstrap class
Hide an element to all devices except screen readers with the class .sr-only. You can try to run the following code to implement the sr-only Bootstrap class −

<!DOCTYPE html>
<html>
   <head>
      <title>Bootstrap Example</title>
      <link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet">
      <script src = "/scripts/jquery.min.js"></script>
      <script src = "/bootstrap/js/bootstrap.min.js"></script>
   </head>
   <body>
      <div class = "row" style = "padding: 91px 100px 19px 50px;">
         <form class = "form-inline" role = "form">
            <div class = "form-group">
               <label class = "sr-only" for = "email">Email address</label>
               <input type = "email" class = "form-control" placeholder = "Enter email">
            </div>
            <div class = "form-group">
               <label class = "sr-only" for = "pass">Password</label>
               <input type = "password" class = "form-control" placeholder = "Password">
            </div>
         </form>
      </div>
   </body>
</html>
[ { "code": null, "e": 1140, "s": 1062, "text": "Hide an element to all devices except screen readers with the class .sr-only." }, { "code": null, "e": 1215, "s": 1140, "text": "You can try to run the following code to implement the sr-only Bootstrap βˆ’" }, { "code": null, ...
PyQt - Multiple Document Interface
A typical GUI application may have multiple windows. Tabbed and stacked widgets allow activating one such window at a time. However, this approach is often not useful, as the view of the other windows is hidden.

One way to display multiple windows simultaneously is to create them as independent windows. This is called SDI (Single Document Interface). It requires more memory resources, as each window may have its own menu system, toolbar, etc.

MDI (Multiple Document Interface) applications consume fewer memory resources. The sub-windows are laid down inside a main container in relation to each other. The container widget is called QMdiArea. The QMdiArea widget generally occupies the central widget of the QMainWindow object. Child windows in this area are instances of the QMdiSubWindow class. It is possible to set any QWidget as the internal widget of a subwindow object. Sub-windows in the MDI area can be arranged in cascaded or tiled fashion.

The following table lists important methods of the QMdiArea class and QMdiSubWindow class −

addSubWindow() − Adds a widget as a new subwindow in the MDI area
removeSubWindow() − Removes a widget that is the internal widget of a subwindow
setActiveSubWindow() − Activates a subwindow
cascadeSubWindows() − Arranges subwindows in the MDI area in a cascaded fashion
tileSubWindows() − Arranges subwindows in the MDI area in a tiled fashion
closeActiveSubWindow() − Closes the active subwindow
subWindowList() − Returns the list of subwindows in the MDI area
setWidget() − Sets a QWidget as an internal widget of a QMdiSubWindow instance

A QMdiArea object emits the subWindowActivated() signal, whereas the windowStateChanged() signal is emitted by a QMdiSubWindow object.

In the following example, the top level window, comprising a QMainWindow, has a menu and an MDI area.
self.mdi = QMdiArea()
self.setCentralWidget(self.mdi)
bar = self.menuBar()

file = bar.addMenu("File")
file.addAction("New")
file.addAction("cascade")
file.addAction("Tiled")

The triggered() signal of the menu is connected to the windowaction() function.

file.triggered[QAction].connect(self.windowaction)

The "New" action of the menu adds a subwindow in the MDI area with a title carrying an incremental number.

MainWindow.count = MainWindow.count+1
sub = QMdiSubWindow()
sub.setWidget(QTextEdit())
sub.setWindowTitle("subwindow"+str(MainWindow.count))
self.mdi.addSubWindow(sub)
sub.show()

The cascade and tiled actions of the menu arrange the currently displayed subwindows in cascaded and tiled fashion respectively. The complete code is as follows −

import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *

class MainWindow(QMainWindow):
   count = 0

   def __init__(self, parent = None):
      super(MainWindow, self).__init__(parent)
      self.mdi = QMdiArea()
      self.setCentralWidget(self.mdi)
      bar = self.menuBar()

      file = bar.addMenu("File")
      file.addAction("New")
      file.addAction("cascade")
      file.addAction("Tiled")
      file.triggered[QAction].connect(self.windowaction)
      self.setWindowTitle("MDI demo")

   def windowaction(self, q):
      print "triggered"

      if q.text() == "New":
         MainWindow.count = MainWindow.count+1
         sub = QMdiSubWindow()
         sub.setWidget(QTextEdit())
         sub.setWindowTitle("subwindow"+str(MainWindow.count))
         self.mdi.addSubWindow(sub)
         sub.show()

      if q.text() == "cascade":
         self.mdi.cascadeSubWindows()

      if q.text() == "Tiled":
         self.mdi.tileSubWindows()

def main():
   app = QApplication(sys.argv)
   ex = MainWindow()
   ex.show()
   sys.exit(app.exec_())

if __name__ == '__main__':
   main()

The above code produces the following output −
[ { "code": null, "e": 2141, "s": 1926, "text": "A typical GUI application may have multiple windows. Tabbed and stacked widgets allow to activate one such window at a time. However, many a times this approach may not be useful as view of other windows is hidden." }, { "code": null, "e": 2...
File Handling through C++ Classes - GeeksforGeeks
28 Dec, 2021

In C++, files are mainly dealt with by using three classes — fstream, ifstream, ofstream — available in the fstream header file.

ofstream: Stream class to write on files
ifstream: Stream class to read from files
fstream: Stream class to both read and write from/to files

The first step is to open the particular file for a read or write operation. We can open a file by:
1. passing the file name in the constructor at the time of object creation
2. using the open method

For example:

Open File by using constructor

ifstream (const char* filename, ios_base::openmode mode = ios_base::in);
ifstream fin(filename, openmode);   // by default openmode = ios::in
ifstream fin("filename");

Open File by using open method

ifstream fin;                       // calling of default constructor
fin.open(filename, openmode);
fin.open("filename");

Modes:
ios::in - open for reading
ios::out - open for writing
ios::app - append output to the end of the file
ios::trunc - discard the file's existing contents on open
ios::ate - seek to the end of the file immediately after opening
ios::binary - open in binary mode

Default Open Modes:
ifstream - ios::in
ofstream - ios::out
fstream - ios::in | ios::out

Problem Statement: To read and write a File in C++.

Examples:

Input:
Welcome in GeeksforGeeks. Best way to learn things.
-1
Output:
Welcome in GeeksforGeeks. Best way to learn things.

Below is the implementation by using the ifstream & ofstream classes.

C++

/* File Handling with C++ using ifstream & ofstream class object */
/* To write the Content in File */
/* Then to read the content of file */
#include <iostream>

/* fstream header file for ifstream, ofstream, fstream classes */
#include <fstream>

using namespace std;

// Driver Code
int main()
{
    // Creation of ofstream class object
    ofstream fout;
    string line;

    // by default ios::out mode, automatically deletes
    // the content of the file. To append the content,
    // open in ios::app
    // fout.open("sample.txt", ios::app)
    fout.open("sample.txt");

    // Execute a loop if the file successfully opened
    while (fout) {

        // Read a line from standard input
        getline(cin, line);

        // Press -1 to exit
        if (line == "-1")
            break;

        // Write line in file
        fout << line << endl;
    }

    // Close the file
    fout.close();

    // Creation of ifstream class object to read the file
    ifstream fin;

    // by default open mode = ios::in mode
    fin.open("sample.txt");

    // Execute a loop until EOF (End of File)
    while (fin) {

        // Read a line from the file
        getline(fin, line);

        // Print line in console
        cout << line << endl;
    }

    // Close the file
    fin.close();
    return 0;
}

Below is the implementation by using the fstream class.

C++

/* File Handling with C++ using fstream class object */
/* To write the Content in File */
/* Then to read the content of file */
#include <iostream>

/* fstream header file for ifstream, ofstream, fstream classes */
#include <fstream>

using namespace std;

// Driver Code
int main()
{
    // Creation of fstream class object
    fstream fio;
    string line;

    // by default openmode = ios::in|ios::out mode
    // Automatically overwrites the content of the file. To append
    // the content, open in ios::app
    // fio.open("sample.txt", ios::in|ios::out|ios::app)
    // ios::trunc mode deletes all content before open
    fio.open("sample.txt", ios::trunc | ios::out | ios::in);

    // Execute a loop if the file successfully opened
    while (fio) {

        // Read a line from standard input
        getline(cin, line);

        // Press -1 to exit
        if (line == "-1")
            break;

        // Write line in file
        fio << line << endl;
    }

    // Point the read pointer at the beginning of the file
    fio.seekg(0, ios::beg);

    // Execute a loop until EOF (End of File)
    while (fio) {

        // Read a line from the file
        getline(fio, line);

        // Print line in console
        cout << line << endl;
    }

    // Close the file
    fio.close();
    return 0;
}
[ { "code": null, "e": 26962, "s": 26934, "text": "\n28 Dec, 2021" }, { "code": null, "e": 27221, "s": 26962, "text": "In C++, files are mainly dealt by using three classes fstream, ifstream, ofstream available in fstream headerfile. ofstream: Stream class to write on files ifstrea...
Using Single-Row Functions Questions
1. What will be the outcome of the following query?

SELECT ROUND(144.23,-1) FROM dual;

140
144
150
100

Answer: A. The ROUND function will round off the value 144.23 according to the specified precision -1 and returns 140.

Examine the structure of the EMPLOYEES table as given and answer questions 2 and 3 that follow.

SQL> DESC employees
 Name                    Null?    Type
 ----------------------- -------- ----------------
 EMPLOYEE_ID             NOT NULL NUMBER(6)
 FIRST_NAME                       VARCHAR2(20)
 LAST_NAME               NOT NULL VARCHAR2(25)
 EMAIL                   NOT NULL VARCHAR2(25)
 PHONE_NUMBER                     VARCHAR2(20)
 HIRE_DATE               NOT NULL DATE
 JOB_ID                  NOT NULL VARCHAR2(10)
 SALARY                           NUMBER(8,2)
 COMMISSION_PCT                   NUMBER(2,2)
 MANAGER_ID                       NUMBER(6)
 DEPARTMENT_ID                    NUMBER(4)

2. You are currently located in New Jersey and have connected to a remote database in San Diego. You issue the following command.

SELECT ROUND (sysdate-hire_date,0)
FROM employees
WHERE (sysdate-hire_date)/180 = 2;

What is the outcome of this query?

An error, because the ROUND function cannot be used with Date arguments.
An error, because the WHERE condition expression is invalid.
Number of days since the employee was hired, based on the current San Diego date and time.
Number of days since the employee was hired, based on the current New Jersey date and time.

Answer: C. The SYSDATE function will take the current time of the database which it is connecting to remotely. You must perform basic arithmetic operations to adjust for the time zone.

3. You need to display the names of the employees who have the letter 's' in their first name and the letter 't' at the second position in their last name. Which query would give the required output?

SELECT first_name, last_name
FROM employees
WHERE INSTR(first_name,'s') <> 0
AND SUBSTR(last_name,2,1) = 't';

SELECT first_name, last_name
FROM employees
WHERE INSTR(first_name,'s') <> ''
AND SUBSTR(last_name,2,1) = 't';

SELECT first_name, last_name
FROM employees
WHERE INSTR(first_name,'e') IS NOT NULL
AND SUBSTR(last_name,2,1) = 't';

SELECT first_name, last_name
FROM employees
WHERE INSTR(first_name,'e') <> 0
AND SUBSTR(last_name,LENGTH(first_name),1) = 't';

Answer: A. The INSTR function returns the position of a given character in the required string. The SUBSTR function returns a set of characters from the string between a given starting and end position.

4. Which of the following statements is true regarding the COUNT function?

COUNT (*) counts duplicate values and NULL values in columns of any data type.
COUNT function cannot work with DATE datatypes.
COUNT (DISTINCT job_id) returns the number of rows excluding rows containing duplicates and NULL values in the job_id column.
A SELECT statement using the COUNT function with a DISTINCT keyword cannot have a WHERE clause.

Answer: A. The COUNT(*) function returns the number of rows in a table that satisfy the criteria of the SELECT statement, including duplicate rows and rows containing null values in any of the columns. If a WHERE clause is included in the SELECT statement, COUNT(*) returns the number of rows that satisfy the condition in the WHERE clause. In contrast, COUNT(expr) returns the number of non-null values that are in the column identified by expr. COUNT(DISTINCT expr) returns the number of unique, non-null values that are in the column identified by expr.

5. Which of the following commands is used to count the number of rows and non-NULL values in an Oracle database?

NOT NULL
INSTR
SUBSTR
COUNT

Answer: D. COUNT (ALL column_name) is used to count the number of rows excluding NULLs. Similarly, COUNT(*) is used to count the column values including NULLs.

6. What will be the outcome of the query given below?

SELECT 100+NULL+999 FROM dual;

100
999
NULL
1099

Answer: C. Any arithmetic operation with NULL results in a NULL.

7. Which of the following statements are true regarding single-row functions?

They accept only a single argument.
They can be nested only to two levels.
Arguments can only be column values or constants.
They can return a data type value different from the one that is referenced.

Answer: D. Single-row functions can take more than one argument, and the return type can be different from the data type of the inputs.

8. Which of the below queries will format a value 1680 as $16,80.00?

SELECT TO_CHAR(1680.00,'$99G99D99') FROM dual;
SELECT TO_CHAR(1680.00,'$9,999V99') FROM dual;
SELECT TO_CHAR(1680.00,'$9,999D99') FROM dual;
SELECT TO_CHAR(1680.00,'$99G999D99') FROM dual;

Answer: A, D. The format model $99G999D99 formats the given number using numeric, group separator, and decimal elements. Other format elements can be leading zeroes, decimal position, comma position, local currency, scientific notation, and sign.

9. Determine the output of the below query.

SELECT RPAD(ROUND('78945.45'),10,'*') FROM dual;

78945*****
**78945.45
The function RPAD cannot be nested with other functions
78945.45****

Answer: A. The LPAD(string, num, char) and RPAD(string, num, char) functions add a character to the left or right of a given string until it reaches the specified length (num) after padding. The ROUND function rounds the value 78945.45 to 78945, which is then padded with '*' until a length of 10 is reached.

10. Which of the following commands allows you to substitute a value whenever a NULL or non-NULL value is encountered in an SQL query?

NVL
NVLIF
NVL2
LNNVL

Answer: C. The NVL2 function takes a minimum of three arguments. The NVL2 function checks the first expression.
If it is not null, the NVL2 function returns the second argument. If the first argument is null, the third argument is returned.

11. Which of the following types of single-row functions cannot be incorporated in Oracle DB?

Character
Numeric
Conversion
None of the above

Answer: D. The types of single-row functions like character, numeric, date, conversion and miscellaneous, as well as programmer-written ones, can be incorporated in Oracle DB.

12. Out of the below clauses, where can the single-row functions be used?

SELECT
WHERE
ORDER BY
All of the above

Answer: D. Single-row functions can be used in the SELECT statement, WHERE clause and ORDER BY clause.

13. What is true regarding the NVL function in Oracle DB?

The syntax of NVL is NVL (exp1, exp2) where exp1 and exp2 are expressions.
NVL (exp1, exp2) will return the value of exp2 if the expression exp1 is NULL.
NVL (exp1, exp2) will return the value of the expression exp2 if exp1 is NOT NULL.
NVL (exp1, exp2) will return exp1 if the expression exp2 is NULL.

Answer: B. The NVL function replaces a null value with an alternate value. Columns of data type date, character, and number can use NVL to provide alternate values. Data types of the column and its alternative must match.

14. Examine the structure of the EMPLOYEES table as given above, and predict the outcome of the following query.

SELECT last_name, NVL(job_id, 'Unknown')
FROM employees
WHERE last_name LIKE 'A%'
ORDER BY last_name;

It will throw an ORA error on execution.
It will list the job IDs for all employees from the EMPLOYEES table.
It will list the job IDs of all employees and substitute NULL job IDs with the literal 'Unknown'.
It will display the last names for all the employees and their job IDs including the NULL values in the job ID.

Answer: C. The NVL function replaces a null value with an alternate value. Columns of data type date, character, and number can use NVL to provide alternate values. Data types of the column and its alternative must match.

15. What will be the outcome of the following query?

SELECT NVL (NULL,'1') FROM dual;

NULL
1
0
Gives an error because NULL cannot be explicitly specified to the NVL function

Answer: B. NVL will treat NULL as a value and returns the alternate argument, i.e. 1, as the result.

16. What will be the outcome of the following query? (Consider the structure of the EMPLOYEES table as given)

SELECT employee_id, NVL(salary, 0)
FROM employees
WHERE first_name like 'P%'
ORDER BY first_name;

It will display 0 in the salary column for all the employees whose first name starts with a 'P'.
It will display the salaries for the employees whose name starts with a 'P', and 0 if the salaries are NULL.
It will throw an ORA error as the ORDER BY clause should also contain the salary column.
The NVL function should be correctly used as NVL (0, salary).

Answer: B. The NVL function replaces a null value with an alternate value. Columns of data type date, character, and number can use NVL to provide alternate values. Data types of the column and its alternative must match.

17. Which of the following statements is true regarding the NVL statement?

SELECT NVL (arg1, arg2) FROM dual;

The two expressions arg1 and arg2 should only be in VARCHAR2 or NUMBER data type format.
The arguments arg1 and arg2 should have the same data type If arg1 is VARCHAR2, then Oracle DB converts arg2 to the datatype of arg1 before comparing them and returns VARCHAR2 in the character set of arg1. An NVL function cannot be used with arguments of DATE datatype. Answer: C. If arg1 is of VARCHAR2 data type, Oracle does implicit type conversion for arg2 id arg2 is of NUMBER datatype. In all other cases, both the arguments must be of same datatype. 18. What will be the outcome of the following query? (Consider the structure of the EMPLOYEES table as given) SQL> DESC employees Name Null? Type ----------------------- -------- ---------------- EMPLOYEE_ID NOT NULL NUMBER(6) FIRST_NAME VARCHAR2(20) LAST_NAME NOT NULL VARCHAR2(25) EMAIL NOT NULL VARCHAR2(25) PHONE_NUMBER VARCHAR2(20) HIRE_DATE NOT NULL DATE JOB_ID NOT NULL VARCHAR2(10) SALARY NUMBER(8,2) COMMISSION_PCT NUMBER(2,2) MANAGER_ID NUMBER(6) DEPARTMENT_ID NUMBER(4) SELECT NVL2(job_id,'Regular Employee','New Joinee') FROM employees; It will return the value 'Regular Employee' for all the employees who have NULL job IDs It will return the value 'New Joinee' for all the employees who have NULL job IDs It will return 'Regular Employee' if the job ID is NULL It will throw an ORA error on execution. It will return the value 'Regular Employee' for all the employees who have NULL job IDs It will return the value 'New Joinee' for all the employees who have NULL job IDs It will return 'Regular Employee' if the job ID is NULL It will throw an ORA error on execution. Answer: B. The NVL2 function examines the first expression. If the first expression is not null, the NVL2 function returns the second expression. If the first expression is null, the third expression is returned. 19. Which of the following is true for the statement given as under. 
19. Which of the following is true for the statement given as under?

NVL2(arg1, arg2, arg3)

Arg2 and Arg3 can have any data type
Arg1 cannot have the LONG data type
Oracle will convert the data type of expr2 according to Arg1
If Arg2 is a NUMBER, then Oracle determines the numeric precedence, implicitly converts the other argument to that data type, and returns that data type.

Answer: D. The data types of the arg2 and arg3 parameters must be compatible, and they cannot be of type LONG. They must either be of the same type, or it must be possible to convert arg3 to the type of the arg2 parameter. The data type returned by the NVL2 function is the same as that of the arg2 parameter.

20. Examine the structure of the EMPLOYEES table as given.

SQL> DESC employees
Name                    Null?    Type
----------------------- -------- ----------------
EMPLOYEE_ID             NOT NULL NUMBER(6)
FIRST_NAME                       VARCHAR2(20)
LAST_NAME               NOT NULL VARCHAR2(25)
EMAIL                   NOT NULL VARCHAR2(25)
PHONE_NUMBER                     VARCHAR2(20)
HIRE_DATE               NOT NULL DATE
JOB_ID                  NOT NULL VARCHAR2(10)
SALARY                           NUMBER(8,2)
COMMISSION_PCT                   NUMBER(2,2)
MANAGER_ID                       NUMBER(6)
DEPARTMENT_ID                    NUMBER(4)

SELECT first_name, salary,
       NVL2(commission_pct, salary + (salary * commission_pct), salary) "Income"
FROM employees
WHERE first_name LIKE 'P%'
ORDER BY first_name;

Salary will be returned if the Commission for the employee is NOT NULL.
Commission_pct will be returned if the Commission for the employee is NOT NULL.
Employees with the first name starting with 'P' and salary+(salary*commission_pct) will be returned if the employee earns a commission.
The query throws an error because a mathematical expression is written inside NVL2.
Answer: C. The NVL2 function examines the first expression. If the first expression is not null, the NVL2 function returns the second expression. If the first expression is null, the third expression is returned.

21. What is true about the NULLIF function in Oracle DB?

NULLIF(expr1, expr2) will return expr2 if the two expressions are NOT NULL.
NULLIF(expr1, expr2) will return 0 if the two expressions are NULL.
NULLIF(expr1, expr2) will return NULL if the two expressions are equal.
Expr1 can be NULL in NULLIF(expr1, expr2)

Answer: C. The NULLIF function tests two terms for equality. If they are equal, the function returns a null, else it returns the first of the two terms tested. The NULLIF function takes two mandatory parameters of any data type. The syntax is NULLIF(arg1, arg2), where the arguments arg1 and arg2 are compared. If they are identical, then NULL is returned. If they differ, arg1 is returned.

22. Pick the correct answer for the statement shown as under.

NULLIF(arg1, arg2)

Arg1 and Arg2 can be of different data types.
Arg1 and Arg2 have to be equal in order to be used in the NULLIF function.
There is no internal conversion of data types in NULLIF as in the case of NVL and NVL2.
This is equivalent to CASE WHEN Arg1 = Arg2 THEN NULL ELSE Arg1 END.
Answer: D. NULLIF(arg1, arg2) is equivalent to CASE WHEN arg1 = arg2 THEN NULL ELSE arg1 END.

23. Examine the structure of the EMPLOYEES table as given.

SQL> DESC employees
Name                    Null?    Type
----------------------- -------- ----------------
EMPLOYEE_ID             NOT NULL NUMBER(6)
FIRST_NAME                       VARCHAR2(20)
LAST_NAME               NOT NULL VARCHAR2(25)
EMAIL                   NOT NULL VARCHAR2(25)
PHONE_NUMBER                     VARCHAR2(20)
HIRE_DATE               NOT NULL DATE
JOB_ID                  NOT NULL VARCHAR2(10)
SALARY                           NUMBER(8,2)
COMMISSION_PCT                   NUMBER(2,2)
MANAGER_ID                       NUMBER(6)
DEPARTMENT_ID                    NUMBER(4)

You need to create a report from the HR schema displaying employees who have changed jobs since they were hired. You execute the query given below.

SELECT e.last_name, NULLIF(e.job_id, j.job_id, "Old Job ID")
FROM employees e, job_history j
WHERE e.employee_id = j.employee_id
ORDER BY last_name;

What will be the outcome of the query given above?

It will display the old job ID when the new job ID is NULL.
It will execute successfully and produce the required output.
It will display the new job ID if the new job ID is equal to the old job ID
It will throw an ORA error on execution.

Answer: D. NULLIF takes exactly two arguments; supplying a third raises an error.

24. Which of the following is not a property of functions?

Perform calculations on data
Convert column data types
Modify individual data items
None of the above

Answer: D. Functions can perform calculations, case conversions and type conversions.
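The NULLIF comparison discussed in questions 21 and 22 can be sketched with an illustrative Python helper mirroring CASE WHEN arg1 = arg2 THEN NULL ELSE arg1 END (the name nullif is ours, not a library function):

```python
def nullif(arg1, arg2):
    # NULLIF: equal arguments -> NULL (None); otherwise the first argument.
    return None if arg1 == arg2 else arg1

print(nullif('SA_REP', 'SA_REP'))    # None   (the two terms are equal)
print(nullif('SA_REP', 'ST_CLERK'))  # SA_REP (the first term is returned)
```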
25. What is the most appropriate about single row functions?

They return no value
They return one result per row and operate on all the rows of a table.
They return one result per row with input arguments
They return one result per set of rows and operate on multiple rows.

Answer: B. Single row functions always return one result per row and they operate on single rows only; hence the name 'Single Row' is given to them.

26. What among the following is a type of Oracle SQL functions?

Multiple-row functions
Single column functions
Single value functions
Multiple columns functions

Answer: A. There are basically two types of functions - single row and multiple row functions.

27. What among the following is a type of single-row function?

VARCHAR2
Character
LONG
NULLIF

Answer: B and D. Character and NULLIF are single-row functions; the rest are data types.

28. What is the most appropriate about Multiple Row Functions?

They return multiple values per each row.
They return one result per group of rows and can manipulate groups of rows.
They return one result per row and can manipulate groups of rows.
They return multiple values per a group of rows.

Answer: B. Multiple row functions always work on a group of rows and return one value per group of rows.

29. Which of the following are also called Group functions?

Single row functions
Multi group functions
Multiple row functions
Single group functions.

Answer: C.
Group functions are the same as multiple row functions and aggregate functions.

30. Which of the following is true about Single Row Functions?

They can be nested
They accept arguments and return more than one value.
They cannot modify a data type
They cannot accept expressions as arguments.

Answer: A. Single row functions can be nested up to multiple levels.

31. What is the number of arguments Single Row functions accept?

0
Only 1
Only 2
1 or more than 1

Answer: D. Single row functions can accept one or more arguments depending upon the objective they serve.

32. Which of the following can be an argument for a Single Row Function?

Data types
SELECT statements
Expression
Table name

Answer: C. A user-supplied constant, variable value, column value and expression are the types of arguments of a single row function.

33. What is true about Character functions?

They return only character values
They accept NUMBER values
They accept character arguments and can return both character and number values
They accept values of all data types

Answer: C. The character function INSTR accepts a string value but returns the numeric position of a character in the string.

34. What is true about Number functions?

They return both Character as well as Number values
They can't accept expressions as input
Number functions can't be nested.
They accept Number arguments and return Number values only.

Answer: D.
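The nesting of single-row functions just discussed can be sketched in Python, whose built-in string methods behave much like Oracle's LOWER, UPPER and INITCAP. The initcap helper below is an approximation (str.title handles word boundaries around non-letter characters slightly differently than Oracle):

```python
def initcap(s):
    # Approximates Oracle INITCAP: first letter of each word in upper case,
    # remaining letters in lower case.
    return s.title()

# Single-row functions can be nested to multiple levels:
print('HI WORLD !!!'.lower())                  # hi world !!!
print(initcap('Hello World').upper().lower())  # hello world
```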
35. Which of the following is an exception to the return value of a DATE type single-row function?

TO_DATE
SYSDATE
MONTHS_BETWEEN
TO_NUMBER

Answer: C. All the DATE data type functions return DATE as return values except MONTHS_BETWEEN, which returns a number.

36. Which of the following is not a Conversion type Single Row function?

TO_CHAR
TO_DATE
NVL
TO_NUMBER

Answer: C. Conversion functions convert a value from one data type to another. The NVL function replaces a null value with an alternate value.

37. Which of the following is a Case-Conversion Character function?

CONCAT
SUBSTR
INITCAP
REPLACE

Answer: C. CONCAT, SUBSTR and REPLACE are character-manipulation character functions, while INITCAP, LOWER and UPPER are case-conversion character functions.

38. What will be the outcome of the following query?

SELECT lower('HI WORLD !!!') FROM dual;

Hi World !!!
Hi WORLD !!!
hi world !!!
HI WORLD !!!

Answer: C. The LOWER function converts a string to lower case characters.

39. What will be the outcome of the following query?

SELECT lower(upper(initcap('Hello World'))) FROM dual;

Hello World
HELLO world
hello World
hello world

Answer: D. Case conversion functions can be nested in SELECT queries.

Examine the structure of the EMPLOYEES table as given and answer the questions 40 to 42 that follow.

SQL> DESC employees
Name                    Null?    Type
----------------------- -------- ----------------
EMPLOYEE_ID             NOT NULL NUMBER(6)
FIRST_NAME                       VARCHAR2(20)
LAST_NAME               NOT NULL VARCHAR2(25)
EMAIL                   NOT NULL VARCHAR2(25)
PHONE_NUMBER                     VARCHAR2(20)
HIRE_DATE               NOT NULL DATE
JOB_ID                  NOT NULL VARCHAR2(10)
SALARY                           NUMBER(8,2)
COMMISSION_PCT                   NUMBER(2,2)
MANAGER_ID                       NUMBER(6)
DEPARTMENT_ID                    NUMBER(4)

40. Which of the following queries will give the same result as the query given below?
SELECT CONCAT(first_name, last_name) FROM employees;

SELECT first_name||last_name FROM employees;
SELECT first_name||' '||last_name FROM employees;
SELECT last_name||', '||first_name FROM employees;
SELECT first_name||','||last_name FROM employees;

Answer: A. The CONCAT function joins two strings without any space in between.

41. What will be the outcome of the following query?

SELECT 'The job id for '||upper(last_name)||' is a '||lower(job_id) FROM employees;

The job id for ABEL is a sa_rep
The job id forABEL is a sa_rep
The job id for abel is SA_REP
The job id for abel is sa_rep

Answer: A.

42. Assuming the last names of the employees are in a proper case in the table employees, what will be the outcome of the following query?

SELECT employee_id, last_name, department_id
FROM employees
WHERE last_name = 'smith';

It will display the details of the employee with the last name as Smith
It will give no result.
It will give the details for the employee having the last name 'Smith' in all lower case.
It will give the details for the employee having the last name 'Smith' in INITCAP case.

Answer: B. Provided the last names in the employees table are in a proper case, the condition WHERE last_name = 'smith' will not be satisfied and hence no results will be displayed.

43. What is true about the CONCAT function in Oracle DB?
It can have only characters as input.
It can have only 2 input parameters.
It can have 2 or more input parameters
It joins values by putting a white space in between the concatenated strings by default.

Answer: B. The CONCAT function accepts only two arguments of NUMBER or VARCHAR2 data types.

44. What is true about the SUBSTR function in Oracle DB?

It extracts a string of determined length
It shows the length of a string as a numeric value
It finds the numeric position of a named character
It trims characters from one (or both) sides from a character string

Answer: A. The SUBSTR(string, x, y) function accepts three parameters and returns a string consisting of the number of characters extracted from the source string, beginning at the specified start position (x). When the position is positive, the function counts from the beginning of the string to find the first character. When the position is negative, the function counts backward from the end of the string.

45. What will be the outcome of the following query?

SELECT length('hi') FROM dual;

2
3
1
hi

Answer: A. The LENGTH function simply gives the length of the string.
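The SUBSTR, LENGTH and INSTR semantics discussed in this section can be sketched in Python; substr and instr are illustrative helpers that use Oracle's 1-based positions (and SUBSTR's count-from-the-end behaviour for a negative start), not library functions:

```python
def substr(s, pos, length=None):
    # Oracle SUBSTR: 1-based start; a negative start counts from the end.
    start = len(s) + pos if pos < 0 else pos - 1
    if start < 0 or start >= len(s):
        return None  # Oracle returns NULL for an out-of-range start
    return s[start:] if length is None else s[start:start + length]

def instr(s, sub):
    # Oracle INSTR: 1-based position of sub in s; 0 when sub is absent.
    return s.find(sub) + 1

print(len('hi'))               # 2   (like LENGTH)
print(substr('Oracle', 2, 3))  # rac
print(substr('Oracle', -2))    # le
print(instr('database', 'a'))  # 2
print(instr('database', 'x'))  # 0
```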
46. What is the difference between the LENGTH and INSTR functions in Oracle DB?

They give the same results when operated on a string.
LENGTH gives the position of a particular character in a string
INSTR gives the position of a particular character in a string while LENGTH gives the length of the string.
LENGTH and INSTR can be used interchangeably.

Answer: C.

47. Examine the structure of the EMPLOYEES table as given.

SQL> DESC employees
Name                    Null?    Type
----------------------- -------- ----------------
EMPLOYEE_ID             NOT NULL NUMBER(6)
FIRST_NAME                       VARCHAR2(20)
LAST_NAME               NOT NULL VARCHAR2(25)
EMAIL                   NOT NULL VARCHAR2(25)
PHONE_NUMBER                     VARCHAR2(20)
HIRE_DATE               NOT NULL DATE
JOB_ID                  NOT NULL VARCHAR2(10)
SALARY                           NUMBER(8,2)
COMMISSION_PCT                   NUMBER(2,2)
MANAGER_ID                       NUMBER(6)
DEPARTMENT_ID                    NUMBER(4)

SELECT upper(&jobid) FROM employees;

It results in an error as substitution variables cannot be used with single row functions
It prompts the user to input the jobid on each execution and then displays the job id in UPPER case
It gives the jobid as it is present in the table EMPLOYEES without making any change
It will not ask the user to input the job id and will convert all the job IDs in the table to UPPER case

Answer: B. Substitution variables can be used with the UPPER and LOWER functions.

48. What is false about the table DUAL in Oracle database?

It is owned by the user SYS and can be accessed by all the users.
It contains only one column and one row.
The value in the DUMMY column of the DUAL table is 'X'
The DUAL table is useful when you want to return a value only once

Answer: C. The DUAL table has one column named DUMMY and one row which has a value 'X'.

49. What will be the result of the following query?

SELECT sysdate+4/12 FROM dual;

The query produces an error.
It adds the number of hours to the date, with a date as the result.
Sysdate arithmetic is ignored.
It returns the system date as the result.

Answer: B. Arithmetic operations can be performed on dates in the Oracle DB.

50. What will be the outcome of the following query?

SELECT lower(100+100) FROM dual;

100
100+100
ORA error
200

Answer: D. Arithmetic expressions can be specified within case conversion functions.

51. What will be the outcome of the following query if the SYSDATE = 20-MAY-13?

SELECT upper(lower(sysdate)) FROM dual;

20-may-2013
ORA error as LOWER and UPPER cannot accept date values.
20-MAY-13
20-May-13

Answer: C. The functions UPPER and LOWER can accept date type inputs and will yield the same result as they do on strings.

52. What is the result of the following query?

SELECT INITCAP(24/6) FROM dual;

4
24
24/6
No result

Answer: A. Arithmetic expressions can be specified within case conversion functions.

53. Examine the structure of the EMPLOYEES table as given here.

SQL> DESC employees
Name                    Null?    Type
----------------------- -------- ----------------
EMPLOYEE_ID             NOT NULL NUMBER(6)
FIRST_NAME                       VARCHAR2(20)
LAST_NAME               NOT NULL VARCHAR2(25)
EMAIL                   NOT NULL VARCHAR2(25)
PHONE_NUMBER                     VARCHAR2(20)
HIRE_DATE               NOT NULL DATE
JOB_ID                  NOT NULL VARCHAR2(10)
SALARY                           NUMBER(8,2)
COMMISSION_PCT                   NUMBER(2,2)
MANAGER_ID                       NUMBER(6)
DEPARTMENT_ID                    NUMBER(4)

You need to display the job descriptions of all employees whose last name starts with the letter 'A'. Which of the following queries will yield the required result?

SELECT INITCAP(last_name||' works as a '||job_id) "Job Description"
FROM employees WHERE INITCAP(last_name) LIKE 'A%';

SELECT INITCAP(last_name)||INITCAP(' works as a: ')||INITCAP(job_id) "Job Description"
FROM employees WHERE INITCAP(last_name) LIKE 'A %';

SELECT INITCAP(last_name||' works as a '||INITCAP(job_id)) "Job Description"
FROM employees WHERE INITCAP(last_name) = 'A';

SELECT UPPER(LOWER(last_name||' works as a '||job_id)) "Job Description"
FROM employees WHERE LOWER(last_name) = 'A';
Answer: A, B.

54. Assuming the SYSDATE is 20-FEB-13, what will be the outcome of the following query?

SELECT CONCAT('Today is :', SYSDATE) FROM dual;

Today is : 20-feb-13
The query throws an error of incompatible type arguments.
Today is : 20-Feb-13
Today is : 20-FEB-13

Answer: D. The CONCAT function accepts arguments of all types.

55. What will be the result pattern of the following query?

SELECT CONCAT(first_name, CONCAT(last_name, job_id)) FROM dual;

First_namelast_namejob_id
First_name, last_name, job_id
Error as CONCAT cannot be nested
First_namelast_name, job_id

Answer: A. The CONCAT function can be nested with itself or other character functions.
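The nested CONCAT pattern above can be sketched in Python; concat is an illustrative helper that, like Oracle's CONCAT, joins exactly two values with no separator and coerces non-string arguments to text:

```python
def concat(a, b):
    # Oracle CONCAT takes exactly two arguments and joins them as text.
    return str(a) + str(b)

first_name, last_name, job_id = 'Andy', 'Smith', 'SA_REP'
# Nesting CONCAT joins all three values with nothing in between:
print(concat(first_name, concat(last_name, job_id)))  # AndySmithSA_REP
# A non-string second argument is accepted and rendered as text:
print(concat('Today is :', '20-FEB-13'))              # Today is :20-FEB-13
```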
56. Examine the structure of the EMPLOYEES table as given here.

SQL> DESC employees
Name                    Null?    Type
----------------------- -------- ----------------
EMPLOYEE_ID             NOT NULL NUMBER(6)
FIRST_NAME                       VARCHAR2(20)
LAST_NAME               NOT NULL VARCHAR2(25)
EMAIL                   NOT NULL VARCHAR2(25)
PHONE_NUMBER                     VARCHAR2(20)
HIRE_DATE               NOT NULL DATE
JOB_ID                  NOT NULL VARCHAR2(10)
SALARY                           NUMBER(8,2)
COMMISSION_PCT                   NUMBER(2,2)
MANAGER_ID                       NUMBER(6)
DEPARTMENT_ID                    NUMBER(4)

You need to generate a report which shows the first name, last name and the salary for all the employees in the department 100. The report should show the results in the form 'Andy Smith earns 50000'. Which of the following queries will give the required output?

SELECT concat(first_name, concat(' ', concat(last_name, concat(' earns ', salary)))) Concat_String
FROM employees WHERE department_id = 100;

SELECT concat(first_name, last_name||' '||salary)
FROM employees WHERE department_id = 100;

SELECT concat(first_name, concat(last_name, ' '))||earns||salary
FROM employees WHERE department_id = 100;

SELECT concat(first_name, concat(last_name, 'earns salary')
FROM employees WHERE department_id = 100;

Answer: A. The CONCAT function can be nested with itself or other character functions.

57. What will the following query show as a result?

SELECT LENGTH('It is a lovely day today!') FROM dual;

25
19
20
0

Answer: A. The LENGTH function counts blank spaces, tabs and special characters too.

58. You need to display the country name from the COUNTRIES table. The length of the country name should be greater than 5 characters. Which of the following queries will give the required output?
SELECT country_name FROM countries WHERE LENGTH(country_name) = 5;
SELECT country_name FROM countries WHERE LENGTH(country_name) > 5;
SELECT SUBSTR(country_name, 1, 5) FROM countries WHERE LENGTH(country_name) < 5;
SELECT country_name FROM countries WHERE LENGTH(country_name) <> 5;

Answer: B. The LENGTH function can be used in the WHERE clause.

59. How does the function LPAD work on strings?

It aligns the string to the left hand side of a column
It returns a string padded with a specified number of characters to the right of the source string
It aligns character strings to the left and number strings to the right of a column
It returns a string padded with a specified number of characters to the left of the source string

Answer: D. The LPAD(string, length after padding, padding string) and RPAD(string, length after padding, padding string) functions add a padding string of characters to the left or right of a string until it reaches the specified length after padding.
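The LPAD/RPAD behaviour just defined can be sketched in Python; lpad and rpad are illustrative helpers (they repeat the pad string to the target width and, like Oracle, truncate the source when it already exceeds that width):

```python
def lpad(value, width, pad=' '):
    # LPAD: repeat the pad string on the left until the target width is reached.
    s = str(value)
    if len(s) >= width:
        return s[:width]  # Oracle truncates an over-long source to the width
    return (pad * width)[:width - len(s)] + s

def rpad(value, width, pad=' '):
    # RPAD: same idea, padding on the right.
    s = str(value)
    if len(s) >= width:
        return s[:width]
    return s + (pad * width)[:width - len(s)]

print(lpad(1000 + 300.66, 14, '*'))  # *******1300.66
print(rpad('abc', 5, '-'))           # abc--
```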
60. Which of the following options is true regarding the LPAD and RPAD functions?

The character strings used for padding include only characters.
The character strings used for padding include only literals.
The character strings used for padding cannot include expressions.
The character strings used for padding include literals, characters and expressions.

Answer: D.

61. What is the maximum number of input arguments in the LPAD and RPAD functions?

1
2
3
0

Answer: C. LPAD and RPAD take a maximum of 3 arguments. If only 2 arguments are given, the padding happens with spaces.

62. What will be the outcome of the following query?

SELECT lpad(1000+300.66, 14, '*') FROM dual;

*******1300.66
1300*******
1300.66
****1300.66

Answer: A. To make the total length of 14 characters, the return value 1300.66 is padded with 7 asterisks (*) on the left.

63. What is true regarding the TRIM function?

It is similar to the SUBSTR function in Oracle
It removes characters from the beginning or end of character literals, columns or expressions
The TRIM function cannot be applied on expressions and NUMBERs
The TRIM function can remove characters only from both the sides of a string.

Answer: B. The TRIM function literally trims off leading or trailing (or both) character strings from a given source string. The TRIM function, when followed by the TRAILING or LEADING keywords, can remove characters from one or both sides of a string.
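Oracle's TRIM, LTRIM and RTRIM remove characters from one or both ends of a string; Python's strip family behaves comparably (both treat the argument as a set of candidate characters, not a literal substring), which can help verify answers like these without a database:

```python
s = '***Hello, world!!!'

# Strip a set of characters from the left, right, or both ends.
print(s.lstrip('*'))   # Hello, world!!!   (like LTRIM)
print(s.rstrip('!'))   # ***Hello, world   (like RTRIM)
print(s.strip('*!'))   # Hello, world      (like TRIM on both sides)
```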
64. You need to remove the occurrences of the character '.' and the double quotes '"' from the following titles of a book present in the table MAGAZINE.

"HUNTING THOREAU IN NEW HAMPSHIRE"
THE ETHNIC NEIGHBORHOOD."

Which of the following queries will give the required result?

SELECT LTRIM(Title, '"') FROM MAGAZINE;
SELECT LTRIM(RTRIM(Title, '."'), '"') FROM MAGAZINE;
SELECT LTRIM(Title, '"THE') FROM MAGAZINE;
SELECT LTRIM(RTRIM(Title, '."THE'), '"') FROM MAGAZINE;

Answer: B. The LTRIM and RTRIM functions can be used in combination with each other.

65. What will be returned as a result of the following query?

SELECT INSTR('James', 'x') FROM dual;

1
2
0
3

Answer: C. The INSTR function returns a 0 when the search string is absent in the given string.

66. What will be the outcome of the following query?

SELECT INSTR('1$3$5$7$9$', '$', 3, 4) FROM dual;

2
10
7
4

Answer: B. The INSTR function searches for the 4th occurrence of '$' starting from the 3rd position.

67. What will be the result of the following query?

SELECT SUBSTR('1#3#5#7#9#', -3, 2) FROM dual;

#5
#3
#7
#9

Answer: D. The SUBSTR function will count 3 places starting from the end of the string and will give 2 characters in the forward direction, giving #9.

Examine the structure of the EMPLOYEES table as given below and answer the questions 68 and 69 that follow.

SQL> DESC employees
Name                    Null?    Type
----------------------- -------- ----------------
EMPLOYEE_ID             NOT NULL NUMBER(6)
FIRST_NAME                       VARCHAR2(20)
LAST_NAME               NOT NULL VARCHAR2(25)
EMAIL                   NOT NULL VARCHAR2(25)
PHONE_NUMBER                     VARCHAR2(20)
HIRE_DATE               NOT NULL DATE
JOB_ID                  NOT NULL VARCHAR2(10)
SALARY                           NUMBER(8,2)
COMMISSION_PCT                   NUMBER(2,2)
MANAGER_ID                       NUMBER(6)
DEPARTMENT_ID                    NUMBER(4)

68. You need to extract a consistent 15 character string based on the SALARY column in the EMPLOYEES table. If the SALARY value is less than 15 characters long, zeros must be added to the left of the value to yield a 15 character string. Which query will fulfill this requirement?

SELECT rpad(salary, 15, 0) FROM employees;
SELECT lpad(salary, 15, 0) FROM employees;
SELECT ltrim(salary, 15, 0) FROM employees;
SELECT trim(salary, 15, 0) FROM employees;

Answer: B. The LPAD and RPAD functions add a padding string of characters to the left or right of a string until it reaches the specified length after padding.

69. You need to display the last 2 characters from the FIRST_NAME column in the EMPLOYEES table without using the LENGTH function. Which of the following queries can fulfill this requirement?
SELECT SUBSTR(first_name, 2) FROM employees;
SELECT SUBSTR(first_name, -2) FROM employees;
SELECT RTRIM(first_name, 2) FROM employees;
SELECT TRIM(first_name, 2) FROM employees;

Answer: B. The SUBSTR(string, x, y) function accepts three parameters and returns a string consisting of the number of characters extracted from the source string, beginning at the specified start position (x). When the position is positive, the function counts from the beginning of the string to find the first character. When the position is negative, the function counts backward from the end of the string.

70. Assuming the SYSDATE is 13-JUN-13, what will be the outcome of the following query?

SELECT SUBSTR(sysdate, 10, 7) FROM dual;

3
N-13
0
NULL

Answer: D. The query will give a NULL as the start position 10 doesn't exist in the SYSDATE value.

71. Which of the following is used to replace a specific character in a given string in Oracle DB?

LTRIM
TRIM
TRUNC
REPLACE

Answer: D.

72. What will be the outcome of the following query?

SELECT replace(9999.00-1, '8', 88) FROM dual;

999
9998
99988
9999.88

Answer: C. The REPLACE function searches for '8' in 9998 and replaces it with '88'.

73. Examine the structure of the EMPLOYEES table as given here.

SQL> DESC employees
 Type
 ----------------------- -------- ----------------
 EMPLOYEE_ID             NOT NULL NUMBER(6)
 FIRST_NAME                       VARCHAR2(20)
 LAST_NAME               NOT NULL VARCHAR2(25)
 EMAIL                   NOT NULL VARCHAR2(25)
 PHONE_NUMBER                     VARCHAR2(20)
 HIRE_DATE               NOT NULL DATE
 JOB_ID                  NOT NULL VARCHAR2(10)
 SALARY                           NUMBER(8,2)
 COMMISSION_PCT                   NUMBER(2,2)
 MANAGER_ID                       NUMBER(6)
 DEPARTMENT_ID                    NUMBER(4)

You need to retrieve the first name, last name (separated by a space) and the formal names of employees where the combined length of the first name and last name exceeds 15 characters. A formal name is formed by the first letter of the First Name and the first 14 characters of the last name. Which of the following queries will fulfill this requirement?

A. SELECT first_name, last_name ,SUBSTR(first_name, 1,1)||' '||SUBSTR(last_name, 1,14) formal_name FROM employees;
B. SELECT first_name, last_name ,SUBSTR(first_name, 1,14)||' '||SUBSTR(last_name, 1,1) formal_name FROM employees WHERE length (first_name) + length(last_name) < 15;
C. SELECT first_name, last_name ,SUBSTR(first_name, 1,1)||' '||SUBSTR(last_name, 1,14) formal_name FROM employees WHERE length (first_name) +
length(last_name) =15;
D. SELECT first_name, last_name ,SUBSTR(first_name, 1,1)||' '||SUBSTR(last_name, 1,14) formal_name FROM employees WHERE length (first_name) + length(last_name) > 15;

Answer: D.

74. What will be the outcome of the following query?

SELECT round(148.50) FROM dual;

A. 148.50
B. 140
C. 150
D. 149

Answer: D. If the decimal precision is absent, the default degree of rounding is 0 and the source is rounded to the nearest whole number.

75. Assuming the sysdate is 10-JUN-13, what will be the outcome of the following query?

SELECT trunc (sysdate,'mon') FROM dual;

A. 10-JUN-13
B. 1-JUN-13
C. ORA error, as the TRUNC function can't have an input parameter when used with dates.
D. 31-JUN-13

Answer: B. The date is truncated to the first day of the month. Similarly, it can be done for the year also.

76. What will be the result of the following query?

SELECT trunc(1902.92,-3) FROM dual;

A. 2000
B. 1000
C. 1901
D. 1901.00

Answer: B.

77. What is the syntax of the MOD function in Oracle DB?

A. MOD(divisor,dividend)
B. MOD(divisor,1)
C. MOD(dividend,divisor)
D. None of the above

Answer: C. The MOD function is used to get the remainder of a division operation.

78. What will be the outcome of the following query?

SELECT mod(100.23,-3) FROM dual;

A. ORA error
B. 1.23
C. 100
D. 0

Answer: B. The MOD function gives the same answer for a positive divisor as well as a negative divisor.

79.
Which of the following functions are used to differentiate between even or odd numbers in Oracle DB?

A. ROUND
B. TRUNC
C. MOD
D. REPLACE

Answer: C. The MOD function can be used to check whether a given number is even or odd. If MOD(num,2) returns zero, the number 'num' is even. If MOD(num,2) returns 1, the number 'num' is odd.

80. Examine the structure of the EMPLOYEES table as given below.

SQL> DESC employees
 Name                    Null?    Type
 ----------------------- -------- ----------------
 EMPLOYEE_ID             NOT NULL NUMBER(6)
 FIRST_NAME                       VARCHAR2(20)
 LAST_NAME               NOT NULL VARCHAR2(25)
 EMAIL                   NOT NULL VARCHAR2(25)
 PHONE_NUMBER                     VARCHAR2(20)
 HIRE_DATE               NOT NULL DATE
 JOB_ID                  NOT NULL VARCHAR2(10)
 SALARY                           NUMBER(8,2)
 COMMISSION_PCT                   NUMBER(2,2)
 MANAGER_ID                       NUMBER(6)
 DEPARTMENT_ID                    NUMBER(4)

You need to allocate the first 12 employees to one of the four teams in a round-robin manner. The employee IDs start with a 100. Which of the following queries will fulfill the requirement?

A. SELECT * FROM employees WHERE employee_id between 100 and 111 ORDER BY employee_id;
B. SELECT first_name, last_name, employee_id, mod(employee_id, 4) Team# FROM employees WHERE employee_id between 100 and 111 ORDER BY employee_id;
C. SELECT first_name, last_name,mod(employee_id, 2) Team#
FROM employees WHERE employee_ID <> 100;
D. SELECT first_name, last_name, mod(employee_id, 4) Team# FROM employees WHERE employee_ID = 100;

Answer: B.

81. What will be the outcome of the following query?

SELECT SUBSTR('Life is Calling',1) FROM dual;

A. ORA error, as there should be a minimum of 3 arguments to the SUBSTR function.
B. Life is Calling
C. NULL
D. Life

Answer: B. Calling the SUBSTR function with just the first two parameters results in the function extracting a string from a start position to the end of the given source string.

82. What is the default date format for the sysdate in SQL Developer?

A. DD-MON-YY
B. DD-MON-RR
C. DD/MON/RR
D. DD/MON/YYYY

Answer: C. For SQL*Plus the default date format is DD-MON-RR.

83. Assuming the SYSDATE to be 10-JUN-2013 12:05pm, what value is returned after executing the below query?

SELECT add_months(sysdate,-1) FROM dual;

A. 09-MAY-2013 12:05pm
B. 10-MAY-2013 12:05pm
C. 10-JUL-2013 12:05pm
D. 09-JUL-2013 12:05pm

Answer: B. The ADD_MONTHS(date, x) function adds 'x' number of calendar months to the given date. The value of 'x' must be an integer and can be negative.

84. What value will be returned after executing the following statement? Note that 01-JAN-2013 occurs on a Tuesday.

SELECT next_day('01-JAN-2013','friday') FROM dual;

A. 02-JAN-2013
B. Friday
C. 04-JAN-2013
D. None of the above

Answer: C. The NEXT_DAY(date,'day') finds the date of the next specified day of the week ('day') following date. The value of char may be a number representing a day or a character string.

85. What is the maximum number of parameters the ROUND function can take?

A. 0
B. 1
C. 2
D. 3

Answer: C.
If there is only one parameter present, then the rounding happens to the nearest whole number.

86. Assuming the present date is 02-JUN-2007, what will be the century returned for the date 24-JUL-2004 in the DD-MON-RR format?

A. 19
B. 21
C. 20
D. NULL

Answer: C. If the two digits of the current year and the specified year lie between 0 and 49, the current century is returned.

87. Assuming the present date is 02-JUN-2007, what will be the century returned for the date 24-JUL-94 in the DD-MON-RR format?

A. 19
B. 21
C. 20
D. NULL

Answer: A. If the two digits of the current year lie between 0 and 49 and the specified year falls between 50 and 99, the previous century is returned.

88. Assuming the present date is 02-JUN-1975, what will be the century returned for the date 24-JUL-94 in the DD-MON-RR format?

A. 19
B. 21
C. 20
D. NULL

Answer: A. If the two digits of the current and specified years lie between 50 and 99, the current century is returned by default.

89. Assuming the present date is 02-JUN-1975, what will be the century returned for the date 24-JUL-07 in the DD-MON-RR format?

A. 19
B. 21
C. 20
D. NULL

Answer: C. If the two digits of the current year lie between 50 and 99 and the specified year falls between 0 and 49, the next century is returned.

90. How many parameters does the SYSDATE function take?

A. 1
B. 2
C. 4
D. 0

Answer: D. The SYSDATE is a pseudo column in Oracle.

91. What is true about the SYSDATE function in Oracle DB?

A. It returns only the system date.
B. It takes 2 parameters at least.
C. The default format is DD-MON-YY.
D. The default format of SYSDATE is DD-MON-RR and it returns the date and time of the system according to the database server.

Answer: D.

92. What will be the datatype of the result of the following operation?
A. Date
B. Num1
C. 0
D. NULL

Answer: B. Subtraction of two dates results in a number of days.

93. What will be the datatype of the result of the following operation?

A. Date
B. Num1
C. 0
D. NULL

Answer: A. Subtraction of a number from a date value results in a date.

94. What does a difference between two dates represent in Oracle DB?

A. The number of days between them
B. Difference in dates is not possible in Oracle DB
C. A date
D. NULL

Answer: A.

95. What will be the outcome of the following query?

SELECT months_between('21-JUN-13','19-JUN-13') FROM dual;

A. ORA error
B. A positive number
C. A negative number
D. 0

Answer: B. Here the first date ('21-JUN-13') is later than the second, so MONTHS_BETWEEN returns a positive (fractional) number. If the first parameter were earlier than the second, MONTHS_BETWEEN would return a negative number.

96. What can be deduced if the result of the MONTHS_BETWEEN(start_date, end_date) function is a fraction?

A. It represents the difference in number between the start date and end date.
B. The result cannot be a fractional number; it has to be a whole number.
C. NULL
D. It represents the days and the time remaining after the integer difference between years and months is calculated and is based on a 31-day month.

Answer: D.

97. You are connected to a remote database in Switzerland from India. You need to find the Indian local time from the DB. Which of the following will give the required result?
A. SELECT sysdate FROM dual;
B. SELECT round(sysdate) FROM dual;
C. SELECT trunc (sysdate) FROM dual;
D. SELECT current_date FROM dual;

Answer: D.

98. What will be the outcome of the following query?

SELECT months_between (to_date ('29-feb-2008'), to_date ('29-feb-2008 12:00:00','dd-mon-yyyy hh24:mi:ss'))*31 FROM dual;

A. Approximately 0
B. 1
C. The query will throw an ORA error
D. 0.5 days

Answer: D. The MONTHS_BETWEEN(date1, date2) finds the number of months between date1 and date2. The result can be positive or negative. If date1 is later than date2, the result is positive; if date1 is earlier than date2, the result is negative. The noninteger part of the result represents a portion of the month.

99. What will be the outcome of the following query?

SELECT add_months ('31-dec-2008',2.5) FROM dual;

A. 31-feb-2009
B. 28-feb-2009
C. 31-mar-2009
D. 15-jan-2009

Answer: B. The fractional part of 2.5 is ignored and 2 months are added to 31-dec-2008, which gives 31-feb-2009; but as that is not a valid date, the result is 28-feb-2009.

100. You need to identify the date in November when the staff will be paid. Bonuses are paid on the last Friday in November. Which of the following will fulfill the requirement?
A. SELECT next_day ('30-nov-2012' , 'Friday') FROM dual;
B. SELECT next_day ('30-nov-2012' , 'Friday') -7 FROM dual;
C. SELECT last_day ('01-nov-2012' ) FROM dual;
D. SELECT next_day ('30-nov-2012' , 'sat') -1 FROM dual;

Answer: B. The NEXT_DAY(date,'day') function finds the date of the next specified day of the week ('day') following the given date, while LAST_DAY(date) returns the last day of the month containing the date. The value of 'day' may be a number representing a day or a character string.
GWT - Applications
Before we start with creating an actual "HelloWorld" application using GWT, let us see what the actual parts of a GWT application are βˆ’

A GWT application consists of the following four important parts, of which the last part is optional but the first three parts are mandatory.

Module descriptors
Public resources
Client-side code
Server-side code

Sample locations of the different parts of a typical GWT application HelloWorld will be as shown below βˆ’

A module descriptor is the configuration file in the form of XML which is used to configure a GWT application. A module descriptor file extension is *.gwt.xml, where * is the name of the application, and this file should reside in the project's root. Following will be a default module descriptor HelloWorld.gwt.xml for a HelloWorld application βˆ’

<?xml version = "1.0" encoding = "utf-8"?>
<module rename-to = 'helloworld'>
   <!-- inherit the core web toolkit stuff. -->
   <inherits name = 'com.google.gwt.user.user'/>

   <!-- inherit the default gwt style sheet. -->
   <inherits name = 'com.google.gwt.user.theme.clean.Clean'/>

   <!-- specify the app entry point class. -->
   <entry-point class = 'com.tutorialspoint.client.HelloWorld'/>

   <!-- specify the paths for translatable code -->
   <source path = '...'/>
   <source path = '...'/>

   <!-- specify the paths for static files like html, css etc. -->
   <public path = '...'/>
   <public path = '...'/>

   <!-- specify the paths for external javascript files -->
   <script src = "js-url" />
   <script src = "js-url" />

   <!-- specify the paths for external style sheet files -->
   <stylesheet src = "css-url" />
   <stylesheet src = "css-url" />
</module>

Following is a brief detail about the different parts used in the module descriptor.

<module rename-to = "helloworld">
This provides the name of the application.

<inherits name = "logical-module-name" />
This adds another GWT module to the application, just like import does in Java applications. Any number of modules can be inherited in this manner.
<entry-point class = "classname" />
This specifies the name of the class which will start loading the GWT application. Any number of entry-point classes can be added, and they are called sequentially in the order in which they appear in the module file. So when the onModuleLoad() of your first entry point finishes, the next entry point is called immediately.

<source path = "path" />
This specifies the names of the source folders which the GWT compiler will search for source compilation.

<public path = "path" />
The public path is the place in your project where static resources referenced by your GWT module, such as CSS or images, are stored. The default public path is the public subdirectory underneath where the Module XML File is stored.

<script src="js-url" />
Automatically injects the external JavaScript file located at the location specified by src.

<stylesheet src="css-url" />
Automatically injects the external CSS file located at the location specified by src.

These are all files referenced by your GWT module, such as the host HTML page, CSS or images. The location of these resources can be configured using the <public path = "path" /> element in the module configuration file. By default, it is the public subdirectory underneath where the Module XML File is stored. When you compile your application into JavaScript, all the files that can be found on your public path are copied to the module's output directory. The most important public resource is the host page, which is used to invoke the actual GWT application.
A typical HTML host page for an application might not include any visible HTML body content at all, but it is always expected to include the GWT application via a <script.../> tag as follows βˆ’

<html>
   <head>
      <title>Hello World</title>
      <link rel = "stylesheet" href = "HelloWorld.css"/>
      <script language = "javascript" src = "helloworld/helloworld.nocache.js">
      </script>
   </head>

   <body>
      <h1>Hello World</h1>
      <p>Welcome to first GWT application</p>
   </body>
</html>

Following is the sample style sheet which we have included in our host page βˆ’

body {
   text-align: center;
   font-family: verdana, sans-serif;
}

h1 {
   font-size: 2em;
   font-weight: bold;
   color: #777777;
   margin: 40px 0px 70px;
   text-align: center;
}

This is the actual Java code written implementing the business logic of the application, which the GWT compiler translates into JavaScript that will eventually run inside the browser. The location of these resources can be configured using the <source path = "path" /> element in the module configuration file.

For example, the entry point code will be used as client-side code and its location will be specified using <source path = "path" />. A module entry point is any class that is assignable to EntryPoint and that can be constructed without parameters. When a module is loaded, every entry point class is instantiated and its EntryPoint.onModuleLoad() method gets called. A sample HelloWorld entry point class will be as follows βˆ’

public class HelloWorld implements EntryPoint {
   public void onModuleLoad() {
      Window.alert("Hello, World!");
   }
}

This is the server-side part of your application and it is optional. If you are not doing any backend processing within your application then you do not need this part, but if there is some processing required at the backend with which your client-side application interacts, then you will have to develop these components.

Next chapter will make use of all the above mentioned concepts to create a HelloWorld application using Eclipse IDE.
Extreme Rare Event Classification: A Straight Forward Solution For a Real World Dataset | by Roberto Mansur | Towards Data Science
Initially, thank you Chitta Ranjan for the real-world dataset of a web break on a paper mill. It is a challenging time series dataset and a common problem in the predictive maintenance domain. This article proposes a feasible and straightforward solution for the problem. I strongly recommend reading Chitta's previous articles (Link).

In the predictive maintenance domain, a challenge that many companies are facing is predicting failures before their occurrence based on the equipment behavior (Condition Based Maintenance). Currently, a common tool used by maintenance crews is APM (Asset Performance Management), which is based on risk and reliability techniques (such as Weibull curves, for example). Summarizing: the idea of this type of curve is to separate the failures on a probability curve with 3 distinct areas: premature failures, random failures and end-of-life failures. This is a simple idea that can give us insights about features that are important in this domain, such as the running time (counter).

First you need to create the frame of the problem with the following code:

df['y_1'] = df['y'].shift(-1)
df['y_2'] = df['y'].shift(-2)
df = df.loc[df.y==0]  # deleting the downtime event
df['y'] = df.apply(lambda x: 1 if ((x['y_1'] == 1) | (x['y_2'] == 1)) else 0, axis=1)
features = df.columns.tolist()
# adding delayed info
features.remove('time')
features.remove('y_1')
features.remove('y_2')
features.remove('x61')
target = 'y'
features.remove(target)

Some highlights: the time features were eliminated since the period of data (less than a month) does not justify any new feature creation (hour, day, shift, quarter, day of week, etc.). Another variable eliminated was x61, which is completely frozen. The minority class is ~1.3% of all data. A very simple way to improve it a bit is to determine that 6 minutes before the event is as good as 4 minutes.
So, it will improve the minority class by 50%, and it is fair to assert that it is not a terrible sin:

df['y_3'] = df['y'].shift(-3)  # third delayed label, created analogously to y_1 and y_2
df['y'] = df.apply(lambda x: 1 if ((x['y_1'] == 1) | (x['y_2'] == 1) | (x['y_3'] == 1)) else 0, axis=1)

The counter (run time), called count_y from now on, is the feature created to contextualize the model's lapse of time.

The simplest strategy for anomaly detection is to use one-class algorithms such as SVM or the Mahalanobis distance to understand what is normal in the data and what is not. First the data will be separated into train and test.

# Splitting
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(df[features], df[target], shuffle=True, random_state=10)

# Scaling
from sklearn.preprocessing import StandardScaler
scale = StandardScaler()
x_train = pd.DataFrame(data=scale.fit_transform(x_train), columns=features, index=y_train.index)
x_test = pd.DataFrame(data=scale.transform(x_test), columns=features, index=y_test.index)

After data separation comes the SVM as a candidate model:

from sklearn.svm import OneClassSVM
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix

svm1 = OneClassSVM(kernel='rbf', gamma='auto', nu=0.05)
svm1.fit(x_train_pos[features])
y_pred = pd.DataFrame(data=svm1.predict(x_test[features]))
# need to convert to the same standard: 0 == normal and 1 == break
y_pred.loc[y_pred[0] == 1] = 0   # normal
y_pred.loc[y_pred[0] == -1] = 1  # break
print(classification_report(y_test, y_pred, digits=3))
confusion_matrix(y_test, y_pred)

Result below the benchmark (F1 < 0.1):

              precision    recall  f1-score   support

           0      0.984     0.947     0.965      4486
           1      0.040     0.123     0.061        81

   micro avg      0.933     0.933     0.933      4567
   macro avg      0.512     0.535     0.513      4567
weighted avg      0.967     0.933     0.949      4567

The Mahalanobis distance provided a similar result:

clf_total = MahalanobisOneclassClassifier(x_train_pos[features], significance_level=0.001)
mahalanobis_dist_total =
clf_total.predict_proba(x_test_pos[features].values)
print(classification_report(y_test, clf_total.predict(x_test[features].values), digits=3))

              precision    recall  f1-score   support

           0      0.984     0.886     0.932      4486
           1      0.032     0.210     0.056        81

   micro avg      0.874     0.874     0.874      4567
   macro avg      0.508     0.548     0.494      4567
weighted avg      0.967     0.874     0.917      4567

Both algorithms were below the benchmark using all the features as input. The dimension of the problem is surely a complication for this type of algorithm. One possible strategy is to divide the features into affinity groups. Usually a good working session with the domain expert can help with that. An alternative is to use the gradient-boosted tree (feature importances) to support this definition. By using the top 20 tags, separating them into 2-tag groups and calculating the Mahalanobis distance in a one-class approach, we artificially created 10 new features that represent the distance from the standard operation of the training data.

df['maha_dist'] = clf.predict_proba(df[feat_maha].values)
df['maha_dist2'] = clf2.predict_proba(df[feat_maha2].values)
df['maha_dist3'] = clf3.predict_proba(df[feat_maha3].values)
df['maha_dist4'] = clf4.predict_proba(df[feat_maha4].values)
df['maha_dist5'] = clf5.predict_proba(df[feat_maha5].values)
df['maha_dist6'] = clf6.predict_proba(df[feat_maha6].values)
df['maha_dist7'] = clf7.predict_proba(df[feat_maha7].values)
df['maha_dist8'] = clf8.predict_proba(df[feat_maha8].values)
df['maha_dist9'] = clf9.predict_proba(df[feat_maha9].values)
df['maha_dist10'] = clf10.predict_proba(df[feat_maha10].values)

New features for the problem frame:

['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8', 'x9', 'x10', 'x11', 'x12', 'x13', 'x14', 'x15', 'x16', 'x17', 'x18', 'x19', 'x20', 'x21', 'x22', 'x23', 'x24', 'x25', 'x26', 'x27', 'x28', 'x29', 'x30', 'x31', 'x32', 'x33', 'x34', 'x35', 'x36', 'x37', 'x38', 'x39', 'x40', 'x41', 'x42', 'x43', 'x44', 'x45', 'x46', 'x47', 'x48', 'x49', 'x50', 'x51', 'x52', 'x53', 'x54', 'x55', 'x56', 'x57', 'x58', 'x59',
'x60', 'count_y', 'maha_dist', 'maha_dist2', 'maha_dist3', 'maha_dist4', 'maha_dist5', 'maha_dist6', 'maha_dist7', 'maha_dist8', 'maha_dist9', 'maha_dist10']

The gradient-boosted classifier (XGBClassifier) will be used as the main classifier, receiving these new features (Figure 4). Some care should be taken to avoid mixing the train and test samples, by using the same random state:

x_train, x_test, y_train, y_test = train_test_split(df[features], df[target], shuffle=True, random_state=10)
gbc = XGBClassifier(n_estimators=1000, subsample=0.9, max_depth=6, random_state=10, max_features=0.9, n_jobs=2)
%time gbc.fit(x_train, y_train)
Wall time: 1min 4s

The F1 result for the minority class, ~0.5, is above the benchmark for the problem.

              precision    recall  f1-score   support

           0      0.989     1.000     0.994      4486
           1      0.968     0.370     0.536        81

    accuracy                          0.989      4567
   macro avg      0.978     0.685     0.765      4567
weighted avg      0.988     0.989     0.986      4567

A simpler strategy is creating ADs (anomaly detectors) for each feature in the model input, using the same top 20 tags pointed out by the first XGBoost feature importance.

# top 20 tags
original_features = ['count_y', 'x51', 'x28', 'x26', 'x43', 'x60', 'x8', 'x54', 'x50', 'x14', 'x18', 'x24', 'x53', 'x23', 'x15', 'x22', 'x52', 'x42', 'x21', 'x36', 'x3']
features_individual_ad = original_features.copy()
ad_maha = []
for feat in original_features:
    if not feat == 'count_y':
        _model = MahalanobisOneclassClassifier(x_train_pos[[feat]], significance_level=0.01)
        ad_maha.append(_model)
        _ad_name = feat + '_ad'
        df[_ad_name] = _model.predict_proba(df[[feat]].values)
        features_individual_ad.append(_ad_name)

And the result was even better than the combination of paired features.
              precision    recall  f1-score   support

           0      0.992     0.998     0.995      4486
           1      0.865     0.556     0.677        81

    accuracy                          0.991      4567
   macro avg      0.929     0.777     0.836      4567
weighted avg      0.990     0.991     0.990      4567

A very straightforward solution using artificial features (the run-time feature and the Mahalanobis distance) and a good ML classifier algorithm (gradient boosting) provided a result above the initial benchmark for the problem (F1 > 0.1). The best topology for the algorithm was able to achieve an F1 of 0.67 for the minority class. A more robust separation between train and test (cross-validation) is recommended to validate this result, along with a fine tune of the model parameters. The complete code is available on Git.
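The MahalanobisOneclassClassifier used throughout the article is a custom class whose definition is not shown here. Below is a minimal sketch of how such a one-class classifier could work; the class name mirrors the article's usage, but the internals (a pseudo-inverse covariance and a chi-squared cutoff derived from significance_level) are assumptions, not the author's exact implementation. Note that, following the article's calls, predict_proba returns the raw squared Mahalanobis distance rather than a probability.

```python
import numpy as np
from scipy.stats import chi2


class MahalanobisOneclassClassifier:
    """One-class classifier: flags points whose squared Mahalanobis
    distance from the training distribution exceeds a chi-squared cutoff."""

    def __init__(self, x_train, significance_level=0.01):
        x = np.asarray(x_train, dtype=float)
        n_features = x.shape[1]
        self.mean_ = x.mean(axis=0)
        # pseudo-inverse guards against a singular covariance matrix;
        # reshape handles the single-feature case where np.cov returns a scalar
        self.inv_cov_ = np.linalg.pinv(
            np.cov(x, rowvar=False).reshape(n_features, n_features))
        self.cutoff_ = chi2.ppf(1 - significance_level, df=n_features)

    def predict_proba(self, x):
        # squared Mahalanobis distance of each row to the training mean
        diff = np.asarray(x, dtype=float) - self.mean_
        return np.einsum('ij,jk,ik->i', diff, self.inv_cov_, diff)

    def predict(self, x):
        # 1 == anomaly (break), 0 == normal
        return (self.predict_proba(x) > self.cutoff_).astype(int)
```

With this sketch, both the all-feature classifier and the per-tag anomaly detectors in the loop above can be reproduced: points close to the training mean are labeled 0 (normal), distant points 1 (break).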
Interpretable Machine Learning. Extracting human understandable... | by Parul Pandey | Towards Data Science
It’s time to get rid of the black boxes and cultivate trust in Machine Learning

In his book β€˜Interpretable Machine Learning’, Christoph Molnar beautifully encapsulates the essence of ML interpretability through this example: Imagine you are a Data Scientist and in your free time you try to predict where your friends will go on vacation in the summer, based on their Facebook and Twitter data that you have. Now, if the predictions turn out to be accurate, your friends might be impressed and could consider you a magician who can see the future. If the predictions are wrong, it would still bring no harm to anyone except to your reputation as a β€œData Scientist”. Now let’s say it wasn’t a fun project and there were investments involved. Say you wanted to invest in properties where your friends were likely to holiday. What would happen if the model’s predictions went awry? You would lose money.

As long as the model has no significant impact, its interpretability doesn’t matter so much, but when there are implications involved based on a model’s prediction, be it financial or social, interpretability becomes relevant.

Interpret means to explain or to present in understandable terms. In the context of ML systems, interpretability is the ability to explain or to present in understandable terms to a human [Finale Doshi-Velez].

Machine Learning models have been branded as β€˜Black Boxes’ by many. This means that though we can get accurate predictions from them, we cannot clearly explain or identify the logic behind these predictions. But how do we go about extracting important insights from the models? What things are to be kept in mind, and what features or tools will we need to achieve that? These are the important questions that come to mind when the issue of model interpretability is raised.
The question that some people often ask is why aren’t we just content with the results of the model and why are we so hell-bent on knowing why a particular decision was made? A lot of this has to do with the impact that a model might have in the real world. Models which are merely meant to recommend movies will have a far smaller impact than the ones created to predict the outcome of a drug.

β€œThe problem is that a single metric, such as classification accuracy, is an incomplete description of most real-world tasks.” (Doshi-Velez and Kim 2017)

Here is a big picture of explainable machine learning. In a way, we capture the world by collecting raw data and use that data to make further predictions. Essentially, interpretability is just another layer on the model that helps humans to understand the process. Some of the benefits that interpretability brings along are:

Reliability
Debugging
Informing feature engineering
Directing future data collection
Informing human decision-making
Building trust

Theory only makes sense as long as we can put it into practice. In case you want a real hang of this topic, you can try the Machine Learning Explainability crash course from Kaggle. It has the right amount of theory and code to put the concepts into perspective and helps to apply model explainability concepts to practical, real-world problems. Click on the screenshot below to go directly to the course page. In case you want a brief overview of the contents first, you can continue to read further.

To interpret a model, we require the following insights:

Which features in the model are most important.
For any single prediction from a model, the effect of each feature in the data on that particular prediction.
The effect of each feature over a large number of possible predictions.

Let’s discuss a few techniques that help in extracting the above insights from a model:

What features does a model think are important?
Which features might have a greater impact on the model predictions than the others? This concept is called feature importance, and Permutation Importance is a widely used technique for calculating it. It helps us to see when our model produces counterintuitive results, and it helps to show others when our model is working as we'd hope. Permutation Importance works for many scikit-learn estimators. The idea is simple: randomly permute or shuffle a single column in the validation dataset, leaving all the other columns intact. A feature is considered "important" if the model's accuracy drops a lot, causing an increase in error. On the other hand, a feature is considered "unimportant" if shuffling its values doesn't affect the model's accuracy.

Working

Consider a model that predicts whether a soccer team will have a "Man of the Game" winner or not based on certain parameters. The player who demonstrates the best play is awarded this title. Permutation importance is calculated after a model has been fitted. So, let's train and fit a RandomForestClassifier model denoted as my_model on the training data. Permutation Importance is calculated using the ELI5 library. ELI5 is a Python library which allows us to visualize and debug various machine learning models using a unified API. It has built-in support for several ML frameworks and provides a way to explain black-box models.

Calculating and displaying importance using the eli5 library (here val_X, val_y denote the validation sets):

import eli5
from eli5.sklearn import PermutationImportance

perm = PermutationImportance(my_model, random_state=1).fit(val_X, val_y)
eli5.show_weights(perm, feature_names=val_X.columns.tolist())

Interpretation

The features at the top are most important and those at the bottom, the least. For this example, goals scored was the most important feature. The number after the ± measures how performance varied from one reshuffling to the next. Some weights are negative.
This is because in those cases predictions on the shuffled data happened to be more accurate than on the real data.

Practice

And now, for the complete example and to test your understanding, go to the Kaggle page by clicking the link below:

The partial dependence plot (short PDP or PD plot) shows the marginal effect one or two features have on the predicted outcome of a machine learning model (J. H. Friedman 2001). PDPs show how a feature affects predictions. PDP can show the relationship between the target and the selected features via 1D or 2D plots.

Working

PDPs are also calculated after a model has been fit. In the soccer problem that we discussed above, there were a lot of features like passes made, shots taken, goals scored, etc. We start by considering a single row. Say the row represents a team that had the ball 50% of the time, made 100 passes, took 10 shots, and scored 1 goal. We proceed by fitting our model and calculating the probability of a team having a player that won the "Man of the Game", which is our target variable. Next, we would choose a variable and continuously alter its value. For instance, we will calculate the outcome if the team scored 1 goal, 2 goals, 3 goals, and so on. All these values are then plotted and we get a graph of predicted outcome vs goals scored. The library to be used for plotting PDPs is called the python partial dependence plot toolbox, or simply PDPbox.

from matplotlib import pyplot as plt
from pdpbox import pdp, get_dataset, info_plots

# Create the data that we will plot
pdp_goals = pdp.pdp_isolate(model=my_model, dataset=val_X, model_features=feature_names, feature='Goal Scored')

# plot it
pdp.pdp_plot(pdp_goals, 'Goal Scored')
plt.show()

Interpretation

The Y-axis represents the change in prediction from what it would be predicted at the baseline or leftmost value.
The blue area denotes the confidence interval. For the 'Goal Scored' graph, we observe that scoring a goal increases the probability of getting a 'Man of the Game' award, but after a while saturation sets in. We can also visualize the partial dependence of two features at once using 2D partial plots.

Practice

SHAP, which stands for SHapley Additive exPlanations, helps to break down a prediction to show the impact of each feature. It is based on Shapley values, a technique used in game theory to determine how much each player in a collaborative game has contributed to its success. Normally, getting the trade-off between accuracy and interpretability just right can be a difficult balancing act, but SHAP values can deliver both.

Working

Again, we go with the soccer example where we wanted to predict the probability of a team having a player that won the "Man of the Game". SHAP values interpret the impact of having a certain value for a given feature in comparison to the prediction we'd make if that feature took some baseline value. SHAP values are calculated using the shap library, which can be installed easily from PyPI or conda. SHAP values show how much a given feature changed our prediction (compared to if we made that prediction at some baseline value of that feature). Let's say we wanted to know what the prediction was when the team scored 3 goals instead of some fixed baseline number. If we are able to answer this, we could perform the same steps for the other features, as follows:

sum(SHAP values for all features) = pred_for_team - pred_for_baseline_values

Hence the prediction can be decomposed into a graph like this:

Interpretation

The above explanation shows each feature contributing to push the model output from the base value (the average model output over the training dataset we passed) to the model output.
Features pushing the prediction higher are shown in red; those pushing the prediction lower are in blue. The base_value here is 0.4979 while our predicted value is 0.7. Goal Scored = 2 has the biggest impact on increasing the prediction, while the ball possession feature has the biggest effect in decreasing the prediction.

Practice

SHAP values have a deeper theory than what I have explained here; make sure to go through the link below to get a complete understanding. Aggregating many SHAP values can provide even more detailed insights into the model.

SHAP Summary Plots

To get an overview of which features are most important for a model, we can plot the SHAP values of every feature for every sample. The summary plot tells which features are most important, and also their range of effects over the dataset. For every dot:

Vertical location shows what feature it is depicting
The color shows whether that feature was high or low for that row of the dataset
Horizontal location shows whether the effect of that value caused a higher or lower prediction

The point in the upper left was for a team that scored few goals, reducing the prediction by 0.25.

SHAP Dependence Contribution Plots

While a SHAP summary plot gives a general overview of each feature, a SHAP dependence plot shows how the model output varies by feature value. SHAP dependence contribution plots provide a similar insight to PDPs, but they add a lot more detail. The above dependence contribution plots suggest that having the ball increases a team's chance of having their player win the award. But if they only score one goal, that trend reverses, and the award judges may penalize them for having the ball so much if they score that little.

Machine learning doesn't have to be a black box anymore. What use is a good model if we cannot explain the results to others? Interpretability is as important as creating a model.
To achieve wider acceptance among the population, it is crucial that machine learning systems are able to provide satisfactory explanations for their decisions. As Albert Einstein said, "If you can't explain it simply, you don't understand it well enough."

Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Christoph Molnar
Machine Learning Explainability Micro-Course: Kaggle
Difference Between Go-Back-N and Selective Repeat Protocol - GeeksforGeeks
29 Jul, 2020

Both the Go-Back-N protocol and the Selective Repeat protocol are types of sliding window protocols. The main difference between these two protocols is how they react after detecting a damaged or lost frame: the Go-Back-N protocol re-transmits the damaged frame along with all frames sent after it, whereas the Selective Repeat protocol re-transmits only the frame that was damaged. For both protocols, the link efficiency is N/(1+2*a), where N is the window size and a is the ratio of propagation delay to transmission delay.
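The N/(1+2*a) expression above is the standard utilization formula for a sliding-window sender, where a is the ratio of propagation delay to transmission delay. The short sketch below illustrates it; the function name and the sample values are my own, not from the original article:

```python
# Utilization of a sliding-window protocol: N / (1 + 2a), capped at 100%,
# where a = propagation delay / transmission delay.
def sliding_window_efficiency(window_size, a):
    return min(1.0, window_size / (1 + 2 * a))

# A window of 3 frames on a link with a = 2 keeps the sender busy only 60% of the time
print(sliding_window_efficiency(3, 2))  # 0.6

# A window of 7 frames with a = 1 is more than enough to keep the link fully utilized
print(sliding_window_efficiency(7, 1))  # 1.0
```

The same formula applies to both Go-Back-N and Selective Repeat; the protocols differ in what they re-transmit after an error, not in how the window keeps the pipe full.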
How to split a data frame in R into multiple parts randomly?
When a data frame is large, we can split it into multiple parts randomly. This might be required when we want to analyze the data partially. We can do this with the help of split function and sample function to select the values randomly. Consider the trees data in base R βˆ’ > str(trees) 'data.frame': 31 obs. of 3 variables: $ Girth : num 8.3 8.6 8.8 10.5 10.7 10.8 11 11 11.1 11.2 ... $ Height: num 70 65 63 72 81 83 66 75 80 75 ... $ Volume: num 10.3 10.3 10.2 16.4 18.8 19.7 15.6 18.2 22.6 19.9 ... Splitting the trees data in three parts βˆ’ > split(trees, sample(rep(1:3,times=c(10,10,11)))) $`1` Girth Height Volume 2 8.6 65 10.3 3 8.8 63 10.2 10 11.2 75 19.9 12 11.4 76 21.0 13 11.4 76 21.4 16 12.9 74 22.2 21 14.0 78 34.5 22 14.2 80 31.7 25 16.3 77 42.6 26 17.3 81 55.4 $`2` Girth Height Volume 5 10.7 81 18.8 6 10.8 83 19.7 8 11.0 75 18.2 11 11.3 79 24.2 14 11.7 69 21.3 17 12.9 85 33.8 20 13.8 64 24.9 28 17.9 80 58.3 29 18.0 80 51.5 30 18.0 80 51.0 $`3` Girth Height Volume 1 8.3 70 10.3 4 10.5 72 16.4 7 11.0 66 15.6 9 11.1 80 22.6 15 12.0 75 19.1 18 13.3 86 27.4 19 13.7 71 25.7 23 14.5 74 36.3 24 16.0 72 38.3 27 17.5 82 55.7 31 20.6 87 77.0 Consider the women data in base R βˆ’ > str(women) 'data.frame': 15 obs. of 2 variables: $ height: num 58 59 60 61 62 63 64 65 66 67 ... $ weight: num 115 117 120 123 126 129 132 135 139 142 ... Splitting the women data in two parts βˆ’ > split(women, sample(rep(1:2,times=c(10,5)))) $`1` height weight 2 59 117 4 61 123 5 62 126 6 63 129 7 64 132 9 66 139 11 68 146 12 69 150 14 71 159 15 72 164 $`2` height weight 1 58 115 3 60 120 8 65 135 10 67 142 13 70 154
Uri.CheckHostName(String) Method in C#
The Uri.CheckHostName() method in C# is used to determine whether the specified hostname is a valid DNS name. Following is the syntax:

public static UriHostNameType CheckHostName (string host_name);

Let us now see an example to implement the Uri.CheckHostName() method:

using System;
public class Demo {
   public static void Main(){
      string strURI = "http://localhost";
      Console.WriteLine("URI = "+strURI);
      UriHostNameType res = Uri.CheckHostName(strURI);
      Console.WriteLine("Host type = "+res);
   }
}

This will produce the following output:

URI = http://localhost
Host type = Unknown

Let us now see another example to implement the Uri.CheckHostName() method:

using System;
public class Demo {
   public static void Main(){
      string strURI = "www.tutorialspoint.com";
      Console.WriteLine("URI = "+strURI);
      UriHostNameType res = Uri.CheckHostName(strURI);
      Console.WriteLine("Host type = "+res);
   }
}

This will produce the following output:

URI = www.tutorialspoint.com
Host type = Dns
Difference between %d and %i format specifier in C language.
In the C programming language, %d and %i are format specifiers: %d specifies the type of variable as decimal, while %i specifies the type as integer. In usage terms, there is no difference in printf() output when printing a number with %d or %i, but with scanf() a difference appears: scanf() detects the base of the input using %i, whereas it assumes base 10 using %d.

Live Demo

#include <stdio.h>
int main() {
   int num1, num2;
   int num3, num4;
   scanf("%i%d", &num1, &num2);
   printf("%i\t%d\n", num1, num2);
   num3 = 010;
   num4 = 010;
   printf("%i\t%d", num3, num4);
   return 0;
}

32767-498932064
8 8

Here 010 is an octal constant. scanf would read the input "010" as 10 using %d but as 8 using %i. printf treats %d and %i identically, printing the stored value in decimal in both cases.
How to get the index and values of series in Pandas?
A pandas Series holds labeled data; by using these labels we can access series elements and manipulate our data. However, in some situations we need to get all labels and values separately. Labels can be called indexes, and the data present in a series is called values. If you want to get labels and values individually, you can use the index and values attributes of the Series object. Let's take an example and see how these attributes work.

import pandas as pd

# creating a series
s = pd.Series({97:'a', 98:'b', 99:'c', 100:'d', 101:'e', 102:'f'})
print(s)

# Getting values and index data
index = s.index
values = s.values
print('\n')

# displaying outputs
print(index)
print(values)

Here we created a pandas Series using a python dictionary with integer keys and string values. index and values are the Series attributes that return the indexes and the values: s.index and s.values are stored in the index and values variables respectively. At the end, we print the results using the print function.

97     a
98     b
99     c
100    d
101    e
102    f
dtype: object

Int64Index([97, 98, 99, 100, 101, 102], dtype='int64')
array(['a', 'b', 'c', 'd', 'e', 'f'], dtype=object)

The first block of the above output shows the labeled series created from the python dictionary; the second block represents the index and values data. We can see the data type of each output: the values are object dtype and the indexes are int64 dtype.

import pandas as pd
import numpy as np

# creating a series from a list, without an explicit index
s = pd.Series([-2.3, np.nan, 9, 6.5, -5, -8, np.nan])
print(s)

# Getting values and index data
index = s.index
values = s.values
print('\n')

# displaying outputs
print(index)
print(values)

In the following example we have created a pandas Series using a python list, without specifying an index.
0   -2.3
1    NaN
2    9.0
3    6.5
4   -5.0
5   -8.0
6    NaN
dtype: float64

RangeIndex(start=0, stop=7, step=1)
[-2.3  nan  9.   6.5 -5.  -8.   nan]

Since we did not specify any index values at the time of series creation, the s.index attribute returns a RangeIndex here instead of an explicit index of labels.
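If plain Python lists are needed rather than arrays, the same index and values attributes can be combined with tolist(). A minimal sketch reusing a dictionary-style series like the first example:

```python
import pandas as pd

s = pd.Series({97: 'a', 98: 'b', 99: 'c'})

# .tolist() converts the index and the values into plain Python lists
labels = s.index.tolist()
data = s.values.tolist()

print(labels)  # [97, 98, 99]
print(data)    # ['a', 'b', 'c']
```

This is convenient when the labels or values must be passed to code that expects ordinary lists rather than pandas or NumPy objects.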
Linear Regression Models in Python | Towards Data Science
Arguably the best starting point for regression tasks is linear models: they can be trained quickly and are easily interpretable. Linear models make a prediction using a linear function of the input features. Here we'll explore some popular linear models in Scikit-Learn. The full Jupyter notebook can be found here.

We'll use the Scikit-Learn diabetes dataset to review some popular linear regression algorithms. The dataset contains 10 features (that have already been mean centered and scaled) and a target value: a measure of disease progression one year after baseline. We import the data and prepare for modeling:

import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

# load regression dataset
diabetes, target = load_diabetes(return_X_y=True)
diabetes = pd.DataFrame(diabetes)

# Prepare data for modeling
# Separate input features and target
y = target
X = diabetes

# setting up testing and training sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=27)

R-Squared, or the coefficient of determination, is how much of the variance in the target variable is explained by our model. Values can range from 0 to 1. Higher values indicate a model that is highly predictive. For example, an R2 value of 0.80 means that the model is accounting for 80% of the variability in the data. In general, the higher the R2 value the better. Low values indicate that our model is not very good at predicting the target. One caution, however, is that a very high R2 could be a sign of overfitting.
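The definition above can be verified by hand: R-squared is 1 minus the ratio of the residual sum of squares to the total sum of squares around the mean. A minimal sketch with made-up numbers (not the diabetes data):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.2, 7.1, 8.9])

# R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((y_true - y_pred) ** 2)          # sum of squared residuals: 0.10
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total variation around the mean: 20.0
r2 = 1 - ss_res / ss_tot

print(round(r2, 3))  # 0.995
```

This is the same quantity that sklearn.metrics.r2_score and a fitted estimator's .score() method report for regression.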
We’ll use the following function to get cross validation scores for our models: from sklearn.model_selection import cross_val_score# function to get cross validation scoresdef get_cv_scores(model): scores = cross_val_score(model, X_train, y_train, cv=5, scoring='r2') print('CV Mean: ', np.mean(scores)) print('STD: ', np.std(scores)) print('\n') Linear regression finds the parameters to minimize the mean squared error or residuals between the predictions and the targets. Mean squared error is defined as the sum of the squared differences between the predictions and the true values, divided by the total number of samples. To generate a linear regression, we use Scikit-Learn’s LinearRegression class: from sklearn.linear_model import LinearRegression# Train modellr = LinearRegression().fit(X_train, y_train)# get cross val scoresget_cv_scores(lr)[out]### CV Mean: 0.4758231204137221### STD: 0.1412116836029729 We get a R2 value of 0.48 and standard deviation of 0.14. The low R2 value indicates that our model is not very accurate. The standard deviation value indicates we may be overfitting the training data. Overfitting occurs when the model makes much better predictions on known data than on unknown data. The model begins to memorize the training data and is unable to generalize to unseen test data. One option to combat overfitting is to simplify the model. We’ll attempt to simplify our linear regression model by introducing regularization. Regularization can be defined as explicitly restricting a model to prevent overfitting. As linear regression has no parameters, there is no way to control the complexity of the model. We’ll explore some variations that add regularization below. Ridge regression uses L2 regularization to minimize the magnitude of the coefficients. It reduces the size of the coefficients and helps reduce model complexity. We control the complexity of our model with the regularization parameter, ⍺. 
Higher values of ⍺ force coefficients to move towards zero and increases the restriction on the model. This decreases training performance, but also increases the generalizability of the model. Setting ⍺ too high could lead to a model that is too simple and underfits the data. With lower values of ⍺ the coefficients are less restricted. When ⍺ is very small the model becomes more similar to linear regression above and we risk overfitting. Let’s see if we can improve performance using Scikit-Learn’s Ridge class: from sklearn.linear_model import Ridge# Train model with default alpha=1ridge = Ridge(alpha=1).fit(X_train, y_train)# get cross val scoresget_cv_scores(ridge)[out]### CV Mean: 0.3826248703036134### STD: 0.09902564009167607 A mean R2 score of 0.38 means we are only able to explain 38% of the variance with our Ridge Regression model β€” definitely not an improvement from linear regression above. However, our standard deviation decreased which suggests we are less likely to be overfitting. We used the default value for alpha above, which might not give the best performance. The optimum value of alpha will vary with each dataset. Let’s see if we can improve the R2 score by adjusting our alpha value. We’ll use grid search to find an optimal alpha value: # find optimal alpha with grid searchalpha = [0.001, 0.01, 0.1, 1, 10, 100, 1000]param_grid = dict(alpha=alpha)grid = GridSearchCV(estimator=ridge, param_grid=param_grid, scoring='r2', verbose=1, n_jobs=-1)grid_result = grid.fit(X_train, y_train)print('Best Score: ', grid_result.best_score_)print('Best Params: ', grid_result.best_params_)[out]### Best Score: 0.4883436188936269### Best Params: {'alpha': 0.01} Our R2 score increased by optimizing for alpha! However, a R2 score of 0.48 is still not very good. Let’s see if we can improve this further using other types of regularization. Lasso regression uses L1 regularization to force some coefficients to be exactly zero. 
This means some features are completely ignored by the model. This can be thought of as a type of automatic feature selection! Lasso can be a good model choice when we have a large number of features but expect only a few to be important. This can make the model easier to interpret and reveal the most important features! Higher values of ⍺ force more coefficients to zero and can cause underfitting. Lower values of alpha lead to fewer non-zero features and can cause overfitting. Very low values of alpha will cause the model to resemble linear regression. Let’s try Lasso Regression on our diabetes dataset: from sklearn.linear_model import Lasso# Train model with default alpha=1lasso = Lasso(alpha=1).fit(X_train, y_train)# get cross val scoresget_cv_scores(lasso)[out]### CV Mean: 0.3510033961713952### STD: 0.08727927390128883 We used the default value for alpha above, which might not give the best performance. The optimum value of alpha will vary with each dataset. Let’s see can improve the R2 score by adjusting our alpha value. We’ll use grid search to find an optimal alpha value: # find optimal alpha with grid searchalpha = [0.001, 0.01, 0.1, 1, 10, 100, 1000]param_grid = dict(alpha=alpha)grid = GridSearchCV(estimator=lasso, param_grid=param_grid, scoring='r2', verbose=1, n_jobs=-1)grid_result = grid.fit(X_train, y_train)print('Best Score: ', grid_result.best_score_)print('Best Params: ', grid_result.best_params_)[out]### Best Score: 0.48813139496070573### Best Params: {'alpha': 0.01} Our score improved by optimizing alpha! We can examine the coefficients to see if any have been set to zero: # match column names to coefficientsfor coef, col in enumerate(X_train.columns): print(f'{col}: {lasso.coef_[coef]}')[out]age: 20.499547879943435sex: -252.36006394772798bmi: 592.1488111417586average_bp: 289.434686266713s1: -195.9273869617746s2: 0.0s3: -96.91157736328506s4: 182.01914264519363s5: 518.6445047270033s6: 63.76955009503193 The coefficient for s2 is zero. 
It is completely ignored by the model! We’ll try one last type of regression to see if we can further improve the R2 score. Elastic-net is a linear regression model that combines the penalties of Lasso and Ridge. We use the l1_ratio parameter to control the combination of L1 and L2 regularization. When l1_ratio = 0 we have L2 regularization (Ridge) and when l1_ratio = 1 we have L1 regularization (Lasso). Values between zero and one give us a combination of both L1 and L2 regularization. We first fit elastic-net with default parameters and then use grid search to find optimal values for alpha and l1_ratio: from sklearn.linear_model import ElasticNet# Train model with default alpha=1 and l1_ratio=0.5elastic_net = ElasticNet(alpha=1, l1_ratio=0.5).fit(X_train, y_train)# get cross val scoresget_cv_scores(elastic_net)[out]### CV Mean: -0.05139208284143739### STD: 0.07297997198698156# find optimal alpha with grid searchalpha = [0.001, 0.01, 0.1, 1, 10, 100, 1000]l1_ratio = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]param_grid = dict(alpha=alpha, l1_ratio=l1_ratio)grid = GridSearchCV(estimator=elastic_net, param_grid=param_grid, scoring='r2', verbose=1, n_jobs=-1)grid_result = grid.fit(X_train, y_train)print('Best Score: ', grid_result.best_score_)print('Best Params: ', grid_result.best_params_)[out]### Best Score: 0.48993062619187755### Best Params: {'alpha': 0.001, 'l1_ratio': 0.8} Again, after finding optimal hyperparameter values our R2 score increased. We explored four different linear models for regression: Linear Regression Ridge Lasso Elastic-Net We simplified our model with regularization. Unfortunately our R2 score remains low. In future articles, we’ll explore assumptions of linear regression and more ways to improve model performance.
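As a recap, all four models expose the same fit/score API in scikit-learn, which makes them easy to compare side by side. The sketch below uses synthetic data and default hyperparameters, so the numbers will differ from the diabetes results above:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso, LinearRegression, Ridge
from sklearn.model_selection import train_test_split

# Synthetic regression problem standing in for a real dataset
X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=27)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=27)

models = {
    "LinearRegression": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),
    "Lasso": Lasso(alpha=1.0),
    "ElasticNet": ElasticNet(alpha=1.0, l1_ratio=0.5),
}

# Fit each model and report its R^2 on the held-out split
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: R^2 = {model.score(X_test, y_test):.3f}")
```

Swapping one estimator for another is a one-line change, so tuning alpha (and l1_ratio for elastic-net) with GridSearchCV, as shown earlier, is usually the main extra work.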
Construct an array of first N natural numbers having no triplet (i, j, k) such that a[i] + a[j] = 2* a[k] where i < j< k - GeeksforGeeks
11 Jun, 2021 Given a positive integer N, the task is to construct an array a[] using first N natural numbers which contains no such triplet (i, j, k) satisfying a[k] * 2 = a[i] + a[j] and i < j < k. Examples: Input: N = 3 Output: {2, 3, 1 } Explanation: Since no such triplet exists in the array satisfying the condition, the required output is { 2, 3, 1 }. Input: N = 10 Output: { 8, 4, 6, 10, 2, 7, 3, 5, 9, 1 } Approach: The problem can be solved using Greedy technique. Follow the steps below to solve the problem: Recursively find the first (N / 2) elements of the resultant array and the last (N / 2) elements of the resultant array. Merge both halves of the array such that the first half of the array contains even numbers and the last half of the array contains the odd numbers. Finally, print the resultant array. C++ Java Python3 C# Javascript // C++ program to implement// the above approach #include <bits/stdc++.h>using namespace std; // Function to construct the array of size N// that contains no such triplet satisfying// the given conditionsvector<int> constructArray(int N){ // Base case if (N == 1) { return { 1 }; } // Stores the first half // of the array vector<int> first = constructArray(N / 2); // Stores the last half // of the array vector<int> last = constructArray(N - (N / 2)); // Stores the merged array vector<int> ans; // Insert even numbers for (auto e : first) { // Insert 2 * e ans.push_back(2 * e); } // Insert odd numbers for (auto o : last) { // Insert (2 * o - 1) ans.push_back((2 * o) - 1); } return ans;} // Function to print the resultant arrayvoid printArray(vector<int> ans, int N){ // Print resultant array cout << "{ "; for (int i = 0; i < N; i++) { // Print current element cout << ans[i]; // If i is not the last index // of the resultant array if (i != N - 1) { cout << ", "; } } cout << " }";} // Driver Codeint main(){ int N = 10; // Store the resultant array vector<int> ans = constructArray(N); printArray(ans, N); return 0;} // Java program to 
implement// the above approachimport java.io.*;import java.util.*; class GFG{ // Function to construct the array of size N// that contains no such triplet satisfying// the given conditionsstatic ArrayList<Integer> constructArray(int N){ // Base case if (N == 1) { ArrayList<Integer> a = new ArrayList<Integer>(1); a.add(1); return a; } // Stores the first half // of the array ArrayList<Integer> first = new ArrayList<Integer>(N); first = constructArray(N / 2); // Stores the last half // of the array ArrayList<Integer> last = new ArrayList<Integer>(N); last = constructArray(N - N / 2); ArrayList<Integer> ans = new ArrayList<Integer>(N); // Insert even numbers for(int i = 0; i < first.size(); i++) { // Insert 2 * first[i] ans.add(2 * first.get(i)); } // Insert odd numbers for(int i = 0; i < last.size(); i++) { // Insert (2 * last[i] - 1) ans.add(2 * last.get(i) - 1); } return ans;} // Driver codepublic static void main(String[] args){ int N = 10; ArrayList<Integer> answer = new ArrayList<Integer>(N); answer = constructArray(N); System.out.print("{"); for(int i = 0; i < answer.size(); i++) { System.out.print(answer.get(i)); System.out.print(", "); } System.out.print("}");} } // This code is contributed by koulick_sadhu # Python3 program to implement# the above approach # Function to construct the array of size N# that contains no such triplet satisfying# the given conditionsdef constructArray(N) : # Base case if (N == 1) : a = [] a.append(1) return a; # Stores the first half # of the array first = constructArray(N // 2); # Stores the last half # of the array last = constructArray(N - (N // 2)); # Stores the merged array ans = []; # Insert even numbers for e in first : # Insert 2 * e ans.append(2 * e); # Insert odd numbers for o in last: # Insert (2 * o - 1) ans.append((2 * o) - 1); return ans; # Function to print the resultant arraydef printArray(ans, N) : # Print resultant array print("{ ", end = ""); for i in range(N) : # Print current element print(ans[i], end = ""); 
        # If i is not the last index
        # of the resultant array
        if (i != N - 1) :
            print(", ", end = "");

    print(" }", end = "");

# Driver Code
if __name__ == "__main__" :

    N = 10;

    # Store the resultant array
    ans = constructArray(N);

    printArray(ans, N);

# This code is contributed by AnkThon

// C# program to implement
// the above approach
using System;
using System.Collections.Generic;

class GFG{

// Function to construct the array of size N
// that contains no such triplet satisfying
// the given conditions
static List<int> constructArray(int N)
{
    // Base case
    if (N == 1)
    {
        List<int> a = new List<int>(1);
        a.Add(1);
        return a;
    }

    // Stores the first half
    // of the array
    List<int> first = new List<int>();
    first = constructArray(N / 2);

    // Stores the last half
    // of the array
    List<int> last = new List<int>();
    last = constructArray(N - N / 2);

    List<int> ans = new List<int>();

    // Insert even numbers
    for(int i = 0; i < first.Count; i++)
    {
        // Insert 2 * first[i]
        ans.Add(2 * first[i]);
    }

    // Insert odd numbers
    for(int i = 0; i < last.Count; i++)
    {
        // Insert (2 * last[i] - 1)
        ans.Add(2 * last[i] - 1);
    }
    return ans;
}

// Driver code
public static void Main()
{
    int N = 10;
    List<int> answer = new List<int>(N);
    answer = constructArray(N);
    Console.Write("{");
    for(int i = 0; i < answer.Count; i++)
    {
        Console.Write(answer[i]);
        Console.Write(", ");
    }
    Console.Write("}");
}
}

// This code is contributed by sanjoy_62

<script>
// JavaScript program to implement the above approach

// Function to construct the array of size N
// that contains no such triplet satisfying
// the given conditions
function constructArray(N)
{
    // Base case
    if (N == 1)
    {
        let a = [];
        a.push(1);
        return a;
    }

    // Stores the first half
    // of the array
    let first = [];
    first = constructArray(parseInt(N / 2, 10));

    // Stores the last half
    // of the array
    let last = [];
    last = constructArray(N - parseInt(N / 2, 10));

    let ans = [];

    // Insert even numbers
    for(let i = 0; i < first.length; i++)
    {
        // Insert 2 * first[i]
        ans.push(2 * first[i]);
    }

    // Insert odd numbers
    for(let i = 0; i < last.length; i++)
    {
        // Insert (2 * last[i] - 1)
        ans.push(2 * last[i] - 1);
    }
    return ans;
}

// Driver code
let N = 10;
let answer = [];
answer = constructArray(N);
document.write("{");
for(let i = 0; i < answer.length; i++)
{
    document.write(answer[i]);
    document.write(", ");
}
document.write("}");
</script>

Output:
{ 8, 4, 6, 10, 2, 7, 3, 5, 9, 1 }

Time Complexity: O(N * log(N))
Auxiliary Space: O(N)
Python Pandas – Merge DataFrame with one-to-many relation
To merge Pandas DataFrame, use the merge() function. The one-to-many relation is implemented on both the DataFrames by setting under the β€œvalidate” parameter of the merge() function i.e. βˆ’ validate = β€œone-to-many” or validate = β€œ1:m” The one-to-many relation checks if merge keys are unique in left dataset. At first, let us create our 1st DataFrame βˆ’ dataFrame1 = pd.DataFrame( { "Car": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],"Units": [100, 150, 110, 80, 110, 90] } ) Now, let us create our 2nd DataFrame βˆ’ dataFrame2 = pd.DataFrame( { "Car": ['BMW', 'Lexus', 'Tesla', 'Mustang', 'Mercedes', 'Jaguar'],"Reg_Price": [7000, 1500, 5000, 8000, 9000, 6000] } ) Following is the code βˆ’ import pandas as pd # Create DataFrame1 dataFrame1 = pd.DataFrame( { "Car": ['BMW', 'Lexus', 'Audi', 'Mustang', 'Bentley', 'Jaguar'],"Units": [100, 150, 110, 80, 110, 90] } ) print("DataFrame1 ...\n",dataFrame1) # Create DataFrame2 dataFrame2 = pd.DataFrame( { "Car": ['BMW', 'Lexus', 'Tesla', 'Mustang', 'Mercedes', 'Jaguar'],"Reg_Price": [7000, 1500, 5000, 8000, 9000, 6000] } ) print("\nDataFrame2 ...\n",dataFrame2) # merge DataFrames with "one-to-many" in "validate" parameter mergedRes = pd.merge(dataFrame1, dataFrame2, validate ="one_to_many") print("\nMerged dataframe with one-to-many relation...\n", mergedRes) This will produce the following output βˆ’ DataFrame1 ... Car Units 0 BMW 100 1 Lexus 150 2 Audi 110 3 Mustang 80 4 Bentley 110 5 Jaguar 90 DataFrame2 ... Car Reg_Price 0 BMW 7000 1 Lexus 1500 2 Tesla 5000 3 Mustang 8000 4 Mercedes 9000 5 Jaguar 6000 Merged dataframe with one-to-many realtion ... Car Units Reg_Price 0 BMW 100 7000 1 Lexus 150 1500 2 Mustang 80 8000 3 Jaguar 90 6000
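The uniqueness rule behind validate="one_to_many" (merge keys must be unique in the left dataset) can be sketched without pandas at all. The helper below is our own illustration with a hypothetical name, not part of the pandas API; pandas itself raises a MergeError when the validation fails.

```python
# A minimal, pandas-free sketch of what validate="one_to_many" checks:
# the merge keys must be unique in the left dataset.
# (check_one_to_many is a hypothetical name for illustration only.)

def check_one_to_many(left_keys, right_keys):
    """Raise ValueError if any merge key repeats in the left dataset."""
    seen = set()
    for key in left_keys:
        if key in seen:
            raise ValueError(f"Merge keys are not unique in left dataset: {key!r}")
        seen.add(key)
    return True

# Unique left keys (like the "Car" column of dataFrame1): passes.
check_one_to_many(['BMW', 'Lexus', 'Audi'], ['BMW', 'BMW', 'Lexus'])

# A duplicated left key violates the one-to-many relation.
try:
    check_one_to_many(['BMW', 'BMW'], ['Lexus'])
except ValueError as e:
    print(e)   # Merge keys are not unique in left dataset: 'BMW'
```

Note that duplicates on the right side are allowed, which is exactly what makes the relation "one-to-many" rather than "one-to-one".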
C++ | Constructors | Question 1 - GeeksforGeeks
04 Jan, 2013

Which of the following is/are automatically added to every class, if we do not write our own?

(A) Copy Constructor
(B) Assignment Operator
(C) A constructor without any parameter
(D) All of the above

Answer: (D)

Explanation: In C++, if we do not write our own, the compiler automatically creates a default constructor, a copy constructor and an assignment operator for every class.
How to make a cylinder in React using react-three-fiber
In this article, we will see how to create a basic cylinder-like shape in React using react-three-fiber. Three.js is a cross-browser JavaScript library and application programming interface used to create and display animated 3D computer graphics in a web browser using WebGL.

First download the react-three-fiber package −

npm i --save @react-three/fiber three

three.js and react-three/fiber will be used to add a WebGL renderer to the website. react-three-fiber will be used to connect three.js and React.

Add the following lines of code in App.js −

import React, { useRef } from "react";
import { Canvas, useFrame } from "@react-three/fiber";

function Cylinder() {
   const myref = useRef();
   useFrame(
      () => (myref.current.rotation.x = myref.current.rotation.y += 0.01)
   );
   return (
      <mesh ref={myref}>
         <cylinderBufferGeometry attach="geometry" args={[2, 2, 2]} />
         <meshBasicMaterial attach="material" color="hotpink" />
      </mesh>
   );
}

export default function App() {
   return (
      <Canvas>
         <ambientLight />
         <Cylinder />
      </Canvas>
   );
}

Here we simply created cylinder geometry and a mesh for coloring. Always use a separate function to make a shape and then render it inside the canvas. The cylinderBufferGeometry element takes three values via args: top radius, bottom radius, and height. <mesh> is used to create the three.js object, and inside it we made a cylinder using cylinderBufferGeometry, which defines the size, shape, and other structural properties. We used meshBasicMaterial to design our geometrical structure; <mesh> combines the structure and the design together. We created a functional component inside which we made a reference with useRef. Then we used useFrame, which runs on every rendered frame, to update the rotation of our mesh object, and we passed the reference to the mesh so the rotation is applied to the cylinder. <ambientLight> provides scene lighting; it matters for lit materials such as meshStandardMaterial, which would otherwise render completely black, whereas the meshBasicMaterial used here is unlit and is not affected by lights.
On execution, it will render a rotating "hotpink" cylinder in the browser window.
Various Types of Keys in DBMS
The different types of keys in DBMS are βˆ’ Candidate Key - The candidate keys in a table are defined as the set of keys that is minimal and can uniquely identify any data row in the table. Primary Key - The primary key is selected from one of the candidate keys and becomes the identifying key of a table. It can uniquely identify any data row of the table. Super Key - Super Key is the superset of primary key. The super key contains a set of attributes, including the primary key, which can uniquely identify any data row in the table. Composite Key - If any single attribute of a table is not capable of being the key i.e it cannot identify a row uniquely, then we combine two or more attributes to form a key. This is known as a composite key. Secondary Key - Only one of the candidate keys is selected as the primary key. The rest of them are known as secondary keys. Foreign Key - A foreign key is an attribute value in a table that acts as the primary key in another table. Hence, the foreign key is useful in linking together two tables. Data should be entered in the foreign key column with great care, as wrongly entered data can invalidate the relationship between the two tables. 
An example to explain the different keys is −

<STUDENT> <SUBJECT> <ENROLL>

The Super Keys in <Student> table are −

{Student_Number}
{Student_Phone}
{Student_Number,Student_Name}
{Student_Number,Student_Phone}
{Student_Number,Subject_Number}
{Student_Phone,Student_Name}
{Student_Phone,Subject_Number}
{Student_Number,Student_Name,Student_Phone}
{Student_Number,Student_Phone,Subject_Number}
{Student_Number,Student_Name,Subject_Number}
{Student_Phone,Student_Name,Subject_Number}

The Super Keys in <Subject> table are −

{Subject_Number}
{Subject_Number,Subject_Name}
{Subject_Number,Subject_Instructor}
{Subject_Number,Subject_Name,Subject_Instructor}
{Subject_Name,Subject_Instructor}

The Super Key in <Enroll> table is −

{Student_Number,Subject_Number}

The Candidate Key in <Student> table is {Student_Number} or {Student_Phone}

The Candidate Key in <Subject> table is {Subject_Number} or {Subject_Name,Subject_Instructor}

The Candidate Key in <Enroll> table is {Student_Number, Subject_Number}

The Primary Key in <Student> table is {Student_Number}

The Primary Key in <Subject> table is {Subject_Number}

The Primary Key in <Enroll> table is {Student_Number, Subject_Number}

The Composite Key in <Enroll> table is {Student_Number, Subject_Number}

The Secondary Key in <Student> table is {Student_Phone}

The Secondary Key in <Subject> table is {Subject_Name,Subject_Instructor}

{Subject_Number} is the Foreign Key of <Student> table and Primary key of <Subject> table.
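By definition, a super key is any attribute set that uniquely identifies every row, so the check is mechanical. The sketch below is an illustration of that definition only; the sample rows, their values, and the is_superkey helper are invented here and are not part of the article.

```python
# Illustrative sketch: a set of attributes is a super key if no two
# rows share the same combination of values for those attributes.

def is_superkey(rows, attrs):
    """Return True if `attrs` uniquely identifies every row in `rows`."""
    seen = set()
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if key in seen:
            return False          # two rows collide on these attributes
        seen.add(key)
    return True

# Hypothetical <Student> rows, loosely following the article's schema.
students = [
    {"Student_Number": 1, "Student_Name": "Asha", "Student_Phone": "555-01"},
    {"Student_Number": 2, "Student_Name": "Ravi", "Student_Phone": "555-02"},
    {"Student_Number": 3, "Student_Name": "Asha", "Student_Phone": "555-03"},
]

print(is_superkey(students, ["Student_Number"]))   # True: unique per row
print(is_superkey(students, ["Student_Name"]))     # False: "Asha" repeats
```

A candidate key is then a super key from which no attribute can be removed without losing this uniqueness property.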
C - Basic Syntax
You have seen the basic structure of a C program, so it will be easy to understand other basic building blocks of the C programming language. A C program consists of various tokens and a token is either a keyword, an identifier, a constant, a string literal, or a symbol. For example, the following C statement consists of five tokens βˆ’ printf("Hello, World! \n"); The individual tokens are βˆ’ printf ( "Hello, World! \n" ) ; In a C program, the semicolon is a statement terminator. That is, each individual statement must be ended with a semicolon. It indicates the end of one logical entity. Given below are two different statements βˆ’ printf("Hello, World! \n"); return 0; Comments are like helping text in your C program and they are ignored by the compiler. They start with /* and terminate with the characters */ as shown below βˆ’ /* my first program in C */ You cannot have comments within comments and they do not occur within a string or character literals. A C identifier is a name used to identify a variable, function, or any other user-defined item. An identifier starts with a letter A to Z, a to z, or an underscore '_' followed by zero or more letters, underscores, and digits (0 to 9). C does not allow punctuation characters such as @, $, and % within identifiers. C is a case-sensitive programming language. Thus, Manpower and manpower are two different identifiers in C. Here are some examples of acceptable identifiers βˆ’ mohd zara abc move_name a_123 myname50 _temp j a23b9 retVal The following list shows the reserved words in C. These reserved words may not be used as constants or variables or any other identifier names. A line containing only whitespace, possibly with a comment, is known as a blank line, and a C compiler totally ignores it. Whitespace is the term used in C to describe blanks, tabs, newline characters and comments. 
Whitespace separates one part of a statement from another and enables the compiler to identify where one element in a statement, such as int, ends and the next element begins. Therefore, in the following statement −

int age;

there must be at least one whitespace character (usually a space) between int and age for the compiler to be able to distinguish them. On the other hand, in the following statement −

fruit = apples + oranges;   // get the total fruit

no whitespace characters are necessary between fruit and =, or between = and apples, although you are free to include some if you wish to increase readability.
Various examples in Basis Path Testing - GeeksforGeeks
10 Jul, 2020 Prerequisite – Basis Path TestingWe have seen the steps involved in designing the test cases for a program using the basis path testing in the previous article. Now, let’s solve an example following the same steps. Question : Consider the given program that checks if a number is prime or not. For the following program : Draw the Control Flow GraphCalculate the Cyclomatic complexity using all the methodsList all the Independent PathsDesign test cases from independent paths Draw the Control Flow Graph Calculate the Cyclomatic complexity using all the methods List all the Independent Paths Design test cases from independent paths int main(){ int n, index; cout << "Enter a number: " << endl; cin >> n; index = 2; while (index <= n - 1) { if (n % index == 0) { cout << "It is not a prime number" << endl; break; } index++; } if (index == n) cout << "It is a prime number" << endl;} // end main Solution :1. Draw the Control Flow Graph – Step-1:Start numbering the statements after declaration of the variables (if no variables have been initialized in that statement). However, if a variable has been initialized and declared in the same line, then numbering should start from that line itself.For the given program, this is how numbering will be done:int main() { int n, index; 1 cout << "Enter a number: " <> n; 3 index = 2; 4 while (index <= n - 1) 5 { 6 if (n % index == 0) 7 { 8 cout << "It is not a prime number" << endl; 9 break; 10 } 11 index++; 12 } 13 if (index == n) 14 cout << "It is a prime number" << endl; 15 } // end main For the given program, this is how numbering will be done: int main() { int n, index; 1 cout << "Enter a number: " <> n; 3 index = 2; 4 while (index <= n - 1) 5 { 6 if (n % index == 0) 7 { 8 cout << "It is not a prime number" << endl; 9 break; 10 } 11 index++; 12 } 13 if (index == n) 14 cout << "It is a prime number" << endl; 15 } // end main Step-2:Put the sequential statements into one single node. 
For example, statements 1, 2 and 3 are all sequential statements and hence should be combined into a single node. And for other statements, we will follow the notations as discussed here.Note –Use alphabetical numbering on nodes for simplicity.The graph obtained will be as follows : Note –Use alphabetical numbering on nodes for simplicity. The graph obtained will be as follows : 2. Calculate the Cyclomatic complexity : Method-1:V(G) = e - n + 2*p In the above control flow graph,where, e = 10, n = 8 and p = 1 Therefore, Cyclomatic Complexity V(G) = 10 - 8 + 2 * 1 = 4 V(G) = e - n + 2*p In the above control flow graph, where, e = 10, n = 8 and p = 1 Therefore, Cyclomatic Complexity V(G) = 10 - 8 + 2 * 1 = 4 Method-2:V(G) = d + p In the above control flow graph,where, d = 3 (Node B, C and F) and p = 1 Therefore, Cyclomatic Complexity V(G) = 3 + 1 = 4 V(G) = d + p In the above control flow graph, where, d = 3 (Node B, C and F) and p = 1 Therefore, Cyclomatic Complexity V(G) = 3 + 1 = 4 Method-3:V(G) = Number of Regions In the above control flow graph, there are 4 regions as shown below :Therefore, there are 4 regions: R1, R2, R3 and R4 Cyclomatic Complexity V(G) = 1 + 1 + 1 + 1 = 4 V(G) = Number of Regions In the above control flow graph, there are 4 regions as shown below : Therefore, there are 4 regions: R1, R2, R3 and R4 Cyclomatic Complexity V(G) = 1 + 1 + 1 + 1 = 4 It is important to note that all three methods give same value for cyclomatic complexity V(G). 3. Independent Paths :As the cyclomatic complexity V(G) for the graph has come out to be 4, therefore there are 4 independent paths.Edges covered (marked with red) by Path 1 are: Path 1 : A - B - F - G - H Edges covered by Path 1 and Path 2 are shown below : Path 2 : A - B - F - H Edges covered by Path 1, Path 2 and Path 3 are : Path 3 : A - B - C - E - B - F - G - H Now only 2 edges are left uncovered i.e. edge C-D and edge D-F. Hence, Path 4 must include these two edges. 
Path 4 : A - B - C - D - F - H

Each of these paths has introduced at least one new edge which has not been traversed before.

Note – Independent paths are not necessarily unique.

4. Test cases : To derive test cases, we have to use the independent paths obtained previously. To design a test case, provide input to the program such that each independent path is executed. For the given program, the following test cases will be obtained:
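The cyclomatic complexity formulas from step 2 are easy to verify in code. The sketch below is a standalone illustration, not part of the article: it encodes the control flow graph (the edge list is reconstructed here from the four independent paths listed above) and checks that V(G) = e - n + 2p and V(G) = d + p both give 4.

```python
# Standalone sketch: verify V(G) for the control flow graph above.
# Edges reconstructed from the independent paths A-B-F-G-H, A-B-F-H,
# A-B-C-E-B-F-G-H and A-B-C-D-F-H.
edges = [("A", "B"), ("B", "C"), ("B", "F"), ("C", "D"), ("C", "E"),
         ("D", "F"), ("E", "B"), ("F", "G"), ("F", "H"), ("G", "H")]

nodes = {n for edge in edges for n in edge}
e, n, p = len(edges), len(nodes), 1    # p = number of connected components

# Method 1: V(G) = e - n + 2p
v_edges = e - n + 2 * p

# Method 2: V(G) = d + p, where d = number of decision (predicate) nodes,
# i.e. nodes with more than one outgoing edge (B, C and F here).
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
d = sum(1 for deg in out_degree.values() if deg > 1)
v_decisions = d + p

print(v_edges, v_decisions)   # 4 4
```

Both methods agree with the article's hand calculation (e = 10, n = 8, d = 3, p = 1), and the result also matches the number of independent paths.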
A Simple Example of Pipeline in Machine Learning with Scikit-learn | by Saptashwa Bhattacharyya | Towards Data Science
Today’s post will be short and crisp and I will walk you through an example of using Pipeline in machine learning with python. I will use some other important tools like GridSearchCV etc., to demonstrate the implementation of pipeline and finally explain why pipeline is indeed necessary in some cases. Let’s begin Definition of pipeline class according to scikit-learn is Sequentially apply a list of transforms and a final estimator. Intermediate steps of pipeline must implement fit and transform methods and the final estimator only needs to implement fit. The above statements will be more meaningful once we start to implement pipeline on a simple data-set. Here I’m using the red-wine data-set, where the β€˜label’ is quality of the wine, ranging from 0 to 10. In terms of data pre-processing, it’s a rather simple data-set as, it has no missing values. import pandas as pd winedf = pd.read_csv('winequality-red.csv',sep=';')# print winedf.isnull().sum() # check for missing dataprint winedf.head(3)>>> fixed ac. volat. ac. citric ac. res. sugar chlorides \0 7.4 0.70 0.00 1.9 0.076 1 7.8 0.88 0.00 2.6 0.098 2 7.8 0.76 0.04 2.3 0.092free sulfur diox. tot. sulfur diox. dens. pH sulphates \0 11.0 34.0 0.9978 3.51 0.56 1 25.0 67.0 0.9968 3.20 0.68 2 15.0 54.0 0.9970 3.26 0.65 alcohol quality 0 9.4 5 1 9.8 5 2 9.8 5 We can always check the correlation plots with seaborn or else we can plot some of the features using a scatter plot and below are two such plots.. As expected acidity and pH has a high negative correlation compared to residual sugar and acidity. Once we are familiar and have played around enough with the data-set, let’s discuss and implement pipeline. As the name suggests, pipeline class allows sticking multiple processes into a single scikit-learn estimator. pipeline class has fit, predict and score method just like any other estimator (ex. LinearRegression). To implement pipeline, as usual we separate features and labels from the data-set at first. 
X=winedf.drop(['quality'],axis=1)Y=winedf['quality'] If you have looked into the output of pd.head(3) then, you can see the features of the data-set vary over a wide range. As I have explained before, just like principal-component-analysis, some fitting algorithm needs scaling and here I will use one such, known as SVM (Support Vector Machine). For more on the theory of SVM, you can check my other post. from sklearn.svm import SVCfrom sklearn.preprocessing import StandardScaler Here we are using StandardScaler, which subtracts the mean from each features and then scale to unit variance. Now we are ready to create a pipeline object by providing with the list of steps. Our steps are β€” standard scalar and support vector machine. These steps are list of tuples consisting of name and an instance of the transformer or estimator. Let’s see the piece of code below for clarification - steps = [('scaler', StandardScaler()), ('SVM', SVC())]from sklearn.pipeline import Pipelinepipeline = Pipeline(steps) # define the pipeline object. The strings (β€˜scaler’, β€˜SVM’) can be anything, as these are just names to identify clearly the transformer or estimator. We can use make_pipeline instead of Pipeline to avoid naming the estimator or transformer. The final step has to be an estimator in this list of tuples. We divide the data-set into training and test-set with a random_state=30 . X_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=0.2, random_state=30, stratify=Y) It’s necessary to use stratify as I’ve mentioned before that the labels are imbalanced as most of the wine quality falls in the range 5,6. You can check using pandas value_counts() which returns objects containing counts of unique values. print winedf['quality'].value_counts() >>> 5 681 6 638 7 199 4 53 8 18 3 10 SVM is usually optimized using two parameters gamma,C . I have discussed effect of these parameters in another post but now, let’s define a parameter grid that we will use in GridSearchCV . 
parameteres = {'SVM__C':[0.001,0.1,10,100,10e5], 'SVM__gamma':[0.1,0.01]} Now we instantiate the GridSearchCV object with pipeline and the parameter space with 5 folds cross validation. grid = GridSearchCV(pipeline, param_grid=parameteres, cv=5) We can use this to fit on the training data-set and test the algorithm on the test-data set. Also we can find the best fit parameters for the SVM as below grid.fit(X_train, y_train)print "score = %3.2f" %(grid.score(X_test,y_test))print grid.best_params_>>> score = 0.60 {'SVM__C': 100, 'SVM__gamma': 0.1} With this we have seen an example of effectively using pipeline with grid search to test support vector machine algorithm. On a separate post, I have discussed in great detail of applying pipeline and GridSearchCV and how to draw the decision function for SVM. You can use any other algorithm like logistic regression instead of SVM to test which learning algorithm works best for red-wine data-set. For applying Decision Tree algorithm in a pipeline including GridSearchCV on a more realistic data-set, you can check this post. I will finish this post with a simple intuitive explanation of why Pipeline can be necessary at times. It helps to enforce desired order of application steps, creating a convenient work-flow, which makes sure of the reproducibility of the work. But, there is something more to pipeline, as we have used grid search cross validation, we can understand it better. The pipeline object in the example above was created with StandardScalerand SVM . Instead of using pipeline if they were applied separately then for StandardScaler one can proceed as below scale = StandardScaler().fit(X_train)X_train_scaled = scale.transform(X_train)grid = GridSearchCV(SVC(), param_grid=parameteres, cv=5)grid.fit(X_train_scaled, y_train) Here we see the intrinsic problem of applying a transformer and an estimator separately where the parameters for estimator (SVM) are determined using GridSearchCV . 
The scaled features used for cross-validation are separated into a test and a train fold, but the test fold within the grid search already contains information about the training set, because the whole training set (X_train) was used for standardization. Put more simply, when SVC.fit() is done using cross-validation, the features already include information from the test fold, as StandardScaler.fit() was done on the whole training set. One can bypass this leakage by using a pipeline. Using a pipeline, we glue together the StandardScaler() and SVC(), and this ensures that during cross-validation the StandardScaler is fitted only on the training fold, exactly the same fold used for SVC.fit(). A fantastic pictorial representation of the above description is given in Andreas Muller's book [1].

[1] Andreas Muller, Sarah Guido; Introduction to Machine Learning with Python; pp. 305–320; First Edition; O'Reilly Media.

You can find the complete code on GitHub. Cheers! Stay strong!!
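The leakage argument above can be demonstrated with plain Python, no scikit-learn required. The numbers below are made up purely for illustration: centering with the whole training set's mean lets the held-out fold influence its own preprocessing, whereas a pipeline-style approach fits the scaler on the remaining folds only.

```python
# Toy illustration of the leakage argument (values are invented).
from statistics import mean

train = [1.0, 2.0, 3.0, 100.0]      # pretend folds 0..2 are "train folds"
train_folds = [1.0, 2.0, 3.0]       # ...and 100.0 is the held-out test fold
test_point = 100.0

# Leaky: scaler fitted on ALL training data, so the test fold's value
# has already shifted the mean used to transform it.
leaky_mean = mean(train)            # 26.5

# Pipeline-style: scaler fitted only on the training folds.
clean_mean = mean(train_folds)      # 2.0

print(leaky_mean, clean_mean)       # 26.5 2.0
print(test_point - leaky_mean)      # 73.5  (test point shrank its own offset)
print(test_point - clean_mean)      # 98.0  (honest out-of-fold transform)
```

The gap between 73.5 and 98.0 is exactly the information the test fold leaked into its own standardization; a Pipeline inside GridSearchCV prevents this by refitting the scaler within each fold.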
How to clear the contents of a Tkinter Text widget?
Tkinter Text Widget is used to add the text writer in an application. It has many attributes and properties which are used to extend the functionality of a text editor. In order to delete the input content, we can use the delete("start", "end") method. #Import the tkinter library from tkinter import * #Create an instance of tkinter frame win = Tk() #Set the geometry win.geometry("600x250") #Define a function to clear the input text def clearToTextInput(): my_text.delete("1.0","end") #Create a text widget my_text=Text(win, height=10) my_text.pack() #Create a Button btn=Button(win,height=1,width=10, text="Clear",command=clearToTextInput) btn.pack() #Display the window without pressing key win.mainloop() Running the above code will create a text widget and a button that can be used to clear the Input text. Now Click on the β€œClear” button, it will clear the input text.
Non static blocks in Java.
A static block is a block of code with a static keyword. In general, these are used to initialize the static members. JVM executes static blocks before the main method at the time of class loading.

public class MyClass {
   static {
      System.out.println("Hello this is a static block");
   }
   public static void main(String args[]) {
      System.out.println("This is main method");
   }
}

Output:
Hello this is a static block
This is main method

Similar to static blocks, Java also provides instance initialization blocks which are used to initialize instance variables, as an alternative to constructors. Whenever you define an initialization block, Java copies its code into the constructors. Therefore, you can also use these to share code between the constructors of a class.

public class Student {
   String name;
   int age;
   {
      name = "Krishna";
      age = 25;
   }
   public static void main(String args[]) {
      Student std = new Student();
      System.out.println(std.age);
      System.out.println(std.name);
   }
}

Output:
25
Krishna

In a class, you can also have multiple initialization blocks.

public class Student {
   String name;
   int age;
   {
      name = "Krishna";
      age = 25;
   }
   {
      System.out.println("Initialization block");
   }
   public static void main(String args[]) {
      Student std = new Student();
      System.out.println(std.age);
      System.out.println(std.name);
   }
}

Output:
Initialization block
25
Krishna

You can also define an instance initialization block in the parent class (here, a Demo class whose block prints the first line of the output below).

class Demo {
   {
      System.out.println("Initialization block of the super class");
   }
}

public class Student extends Demo {
   String name;
   int age;
   {
      System.out.println("Initialization block of the sub class");
      name = "Krishna";
      age = 25;
   }
   public static void main(String args[]) {
      Student std = new Student();
      System.out.println(std.age);
      System.out.println(std.name);
   }
}

Output:
Initialization block of the super class
Initialization block of the sub class
25
Krishna
How to play ringtone/alarm/notification sound in Android?
This example demonstrate about How to play ringtone/alarm/notification sound in Android. Step 1 βˆ’ Create a new project in Android Studio, go to File β‡’ New Project and fill all required details to create a new project. Step 2 βˆ’ Add the following code to res/layout/activity_main.xml. <? xml version= "1.0" encoding= "utf-8" ?> <android.support.constraint.ConstraintLayout xmlns: android = "http://schemas.android.com/apk/res/android" xmlns: app = "http://schemas.android.com/apk/res-auto" xmlns: tools = "http://schemas.android.com/tools" android :layout_width= "match_parent" android :layout_height= "match_parent" android :padding= "16dp" tools :context= ".MainActivity" > <Button android :id= "@+id/btnCreateNotification" android :layout_width= "0dp" android :layout_height= "wrap_content" android :text= "Create notification" app :layout_constraintBottom_toBottomOf= "parent" app :layout_constraintEnd_toEndOf= "parent" app :layout_constraintStart_toStartOf= "parent" app :layout_constraintTop_toTopOf= "parent" /> </android.support.constraint.ConstraintLayout> Step 3 βˆ’ Add the following code to src/MainActivity.java package app.tutorialspoint.com.notifyme; import android.app.NotificationManager; import android.content.Context; import android.media.MediaPlayer; import android.media.RingtoneManager; import android.net.Uri; import android.support.v4.app.NotificationCompat; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.view.View; import android.widget.Button; import java.util.Objects; public class MainActivity extends AppCompatActivity { private final static String default_notification_channel_id = "default"; @Override protected void onCreate (Bundle savedInstanceState) { super .onCreate(savedInstanceState); setContentView(R.layout. activity_main ); Button btnCreateNotification = findViewById(R.id. 
btnCreateNotification ); btnCreateNotification.setOnClickListener( new View.OnClickListener() { @Override public void onClick (View v) { Uri alarmSound = RingtoneManager. getDefaultUri (RingtoneManager. TYPE_NOTIFICATION ); MediaPlayer mp = MediaPlayer. create (getApplicationContext(), alarmSound); mp.start(); NotificationCompat.Builder mBuilder = new NotificationCompat.Builder(MainActivity. this, default_notification_channel_id ) .setSmallIcon(R.drawable. ic_launcher_foreground ) .setContentTitle( "Test" ) .setContentText( "Hello! This is my first push notification" ) ; NotificationManager mNotificationManager = (NotificationManager) getSystemService(Context. NOTIFICATION_SERVICE ); mNotificationManager.notify(( int ) System. currentTimeMillis () , mBuilder.build()); } }); } } Step 4 βˆ’ Add the following code to androidManifest.xml <? xml version= "1.0" encoding= "utf-8" ?> <manifest xmlns: android = "http://schemas.android.com/apk/res/android" package= "app.tutorialspoint.com.notifyme" > <uses-permission android :name= "android.permission.VIBRATE" /> <application android :allowBackup= "true" android :icon= "@mipmap/ic_launcher" android :label= "@string/app_name" android :roundIcon= "@mipmap/ic_launcher_round" android :supportsRtl= "true" android :theme= "@style/AppTheme" > <activity android :name= ".MainActivity" > <intent-filter> <action android :name= "android.intent.action.MAIN" /> <category android :name= "android.intent.category.LAUNCHER" /> </intent-filter> </activity> <service android :name= ".MyFirebaseMessagingService" android :exported= "false" > <intent-filter> <action android :name= "com.google.firebase.MESSAGING_EVENT" /> </intent-filter> </service> </application> </manifest> Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. 
Select your mobile device as an option and then check your mobile device, which will display your default screen.
Five Essential Skills for Transportation Data Science | by Mark Egge | Towards Data Science
Transportation touches the lives of nearly every living person every day. The public sector entities that build and operate the world's roads, highways, and other public transportation networks are ravenous to hire data scientists with the skills to make sense of their voluminous data. Functional proficiency with the five skills described in this article will equip you to answer many of the questions that transportation agencies grapple with daily. Joining a department of transportation or metropolitan planning organization is a perfect entry-point for any new data scientist looking to apply her or his skills in service of the greater good. Interested? Here are five essential skills you'll use as a data scientist supporting a transportation agency:

Data Management & Transformation
GIS
Decision Trees
Plotting and Mapping
Count Regression Models

This article illustrates the application of these skills through an applied investigation of the relationship between roadway lighting and traffic safety in Pennsylvania. This example uses R, which provides an excellent workbench for many transportation questions thanks to its rich ecosystem of users and analytical packages.

Suppose you're an analyst for the Pennsylvania Department of Transportation. The agency is weighing spending more of its annual roadway safety budget on installing highway lighting versus other safety improvements like guardrails. Your charge is to help answer the question: "what impact does street lighting have on crashes?" We investigate this question below. You can download the code and follow along yourself by cloning the repository from: https://github.com/markegge/safety-demo/

Data analysis is almost always preceded by data preparation to transform source data into a format suitable for analysis. This includes operations like deriving new attributes, joining tables, aggregation, and reshaping tables.
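As a language-agnostic sketch of these operations (the article's own code is in R; the records below are invented for illustration), a groupby-style aggregation and a derived attribute look like this in Python:

```python
# Hypothetical crash records already joined to road segments by segment_id.
from collections import Counter

crashes = [
    {"segment_id": "A", "severity": "minor"},
    {"segment_id": "A", "severity": "fatal"},
    {"segment_id": "B", "severity": "minor"},
]
segments = {"A": {"length_mi": 1.2}, "B": {"length_mi": 0.8}}

# Aggregate: crashes per segment (a groupby/count)
counts = Counter(c["segment_id"] for c in crashes)

# Derive a new attribute: crashes per mile of segment length
crash_rate = {sid: counts[sid] / seg["length_mi"] for sid, seg in segments.items()}

print(counts["A"], round(crash_rate["B"], 2))   # 2 1.25
```

The same join/aggregate/derive pattern is what data.table or dplyr express far more concisely on real, multi-million-row crash tables.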
R's data.table and dplyr packages are both powerful and versatile data transformation multi-tools. In this article, I use R's data.table package, which I prefer for its speed and concise syntax. Safety analysis typically uses five years' worth of historic crash data. Accordingly, we begin by loading in five years of crash history from the Pennsylvania Crash Information clearinghouse:

Because transportation describes the movement of people and goods through space, transportation data is often spatial. To work with spatial data you'll need to be familiar with basic GIS operations like buffering, dissolving, and joining, as well as reprojecting spatial data between Web Mercator, StatePlane, and UTM projections. Projections are a way of mapping spatial data from a round planet onto a flat computer screen. Web data typically records locations as degrees of latitude and longitude in the WGS84 geographic coordinate system (EPSG:4326), which web maps render using the Web Mercator projection (EPSG:3857). Since the length of a degree of longitude varies with latitude, measuring, buffering, and intersection operations are typically performed in StatePlane or UTM projections that measure distances in feet or meters rather than degrees.

For this analysis, we'll use a spatial representation of Pennsylvania's highway road network provided by the Federal Highway Administration. Each roadway is divided into segments. To count the number of crashes per road segment, we buffer the road segments by 50' and then spatially join the crash points to the buffered lines. In R, the sf package provides an interface to the powerful GDAL spatial library, allowing the use of the same spatial functions in R that you may already be familiar with from working with PostGIS or Python. The result looks like this:

Next, we'll tabulate our crash counts and join these results back to our spatial data.
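A quick calculation illustrates the projection point above: under a spherical-Earth approximation (the 111,320 m per degree at the equator is an assumed round figure), the ground length of one degree of longitude shrinks with the cosine of latitude:

```python
import math

def meters_per_degree_lon(lat_deg):
    """Approximate ground distance of one degree of longitude at a given latitude
    (spherical-Earth sketch, not a substitute for a proper projection library)."""
    return 111_320 * math.cos(math.radians(lat_deg))

print(round(meters_per_degree_lon(0)))    # 111320 at the equator
print(round(meters_per_degree_lon(41)))   # noticeably shorter at Pennsylvania's latitude
```

This is exactly why buffering a road by "50" in degree units means different things at different latitudes, and why the analysis reprojects to a planar coordinate system first.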
We also use the crash attributes to impute whether a given road segment is lighted or not, based on the lighting conditions reported in the joined crashes.

Decision trees are useful tools for identifying structural relationships in data. Their principal use is to classify observations into two or more classes by deriving a rule set, but their internal algorithm for defining rulesets is also a useful exploratory data analysis tool for identifying relationships between predictors and outcome variables. In the segment below, a decision tree is used to fill in the "lighting" attribute for road segments without any crashes to impute from, and also to learn which attributes in the dataset are predictive of crash rates.

Data visualization enables humans to process and spot trends in vast volumes of data at a glance. For spatial data, this typically means mapping. For example, we can use R's leaflet package (which provides an API to the popular Leaflet JavaScript web mapping library) to inspect whether our crash-based roadway lighting assignment makes sense. For plotting tabular data, ggplot2 is R's workhorse data visualization library. Below, we plot road segment crash counts versus exposure (vehicle miles travelled, defined as segment length times daily traffic).

Regression is a highly useful statistical tool for identifying quantitative relationships in data. Linear regression with ordinary least squares is the most common type of regression (for predicting continuous outcome variables with linear predictor relationships), but regression is actually a family of models with many different types and applications. Expanding your regression repertoire to include count models has many applications in a transportation context. Statewide crash data is frequently modeled with Zero-Inflated Negative Binomial (ZINB) regression, which accounts for the probability that short or low-traffic segments will have zero recorded crashes.
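The zero-inflation idea reduces to a few lines of arithmetic. This is an illustrative Python sketch (not the article's R code, and the parameter values are made up): a zero-inflated count model mixes a structural-zero probability pi with an ordinary count distribution, inflating P(count = 0) above what the base distribution predicts:

```python
import math

def poisson_zero_prob(lam):
    """P(X = 0) for a plain Poisson with mean lam."""
    return math.exp(-lam)

def zip_zero_prob(pi, lam):
    """P(X = 0) for a zero-inflated Poisson:
    structural zeros (prob pi) plus ordinary Poisson zeros."""
    return pi + (1 - pi) * math.exp(-lam)

lam = 2.0   # assumed average crashes per segment (invented value)
pi = 0.3    # assumed probability a segment can never record a crash (invented)

print(poisson_zero_prob(lam))   # ~0.135
print(zip_zero_prob(pi, lam))   # ~0.395, far more zeros than plain Poisson predicts
```

A ZINB model applies the same mixture idea with a negative binomial (rather than Poisson) count component, which also allows for overdispersion.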
We can investigate the relationship between roadway lighting and crashes by including lighting as an explanatory variable in a ZINB regression model. Our ZINB model predicts crashes based on exposure (VMT) and lighting. Here are the model outputs:

Call:
pscl::zeroinfl(formula = total_crashes ~ lighting + mvmt | mvmt, data = segments)

Count model coefficients (poisson with log link):
                Estimate Std. Error z value Pr(>|z|)    
(Intercept)    2.1916955  0.0040869  536.28   <2e-16 ***
lightingunlit -0.3935121  0.0056469  -69.69   <2e-16 ***
mvmt           0.0370332  0.0001227  301.72   <2e-16 ***

Zero-inflation model coefficients (binomial with logit link):
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  0.14028    0.03982   3.523 0.000427 ***
mvmt        -1.97937    0.06055 -32.690  < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The model estimates that road segments without lighting (lightingunlit) are associated with 0.4 fewer crashes, on average, than segments with lighting (everything else being equal). This counter-intuitive finding suggests the presence of confounding variables. Lighting isn't installed at random, after all. We intended to investigate lighting's impact on crashes; we seem to have found crashes' impact on lighting (i.e. lighting seems to be installed at inherently dangerous locations). Our failure, however, points the way to additional approaches that may succeed. Since lighting is not installed at random, a better approach may be to find data where lighting conditions have changed over time, such as newly installed lighting or where maintenance records indicate a burned-out light bulb.

Data science is iterative. Thomas Edison famously said of his repeated failure to produce a functioning lightbulb, "I have not failed. I've just found 10,000 ways that won't work." A dose of humility goes a long way in the practical application of data science; if a topic is important, it has likely been studied before. Don't expect your first inquiry to radically upend existing norms.
Do expect to fail more often than you succeed; try new iterations incorporating the learnings from previous iterations until you arrive at an actionable finding (or run out of data, as is often the case). Nearly every living person interacts with and is impacted by the quality and effectiveness of our transportation systems, whether through their successes (mobility and economic opportunity) or shortcomings (global warming; 40,000+ annual deaths due to automobiles in America alone; congestion; etc.). Most public sector transportation agencies have an abundance of data but lack the skilled data scientists to expand their use of data-informed decision making. If you're willing to be persistent and possess a working knowledge of the five skills identified above, a public sector transportation agency is a great place for an impact-oriented data scientist.
JSF - Add Data to DataTable
In this section, we'll showcase adding a row to a dataTable. Let us create a test JSF application to test the above functionality.

<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns = "http://www.w3.org/1999/xhtml"
   xmlns:h = "http://java.sun.com/jsf/html"
   xmlns:f = "http://java.sun.com/jsf/core">

   <h:head>
      <title>JSF tutorial</title>
      <h:outputStylesheet library = "css" name = "styles.css" />
   </h:head>

   <h:body>
      <h2>DataTable Example</h2>
      <h:form>
         <h:dataTable value = "#{userData.employees}" var = "employee"
            styleClass = "employeeTable" headerClass = "employeeTableHeader"
            rowClasses = "employeeTableOddRow,employeeTableEvenRow">

            <h:column>
               <f:facet name = "header">Name</f:facet>
               #{employee.name}
            </h:column>

            <h:column>
               <f:facet name = "header">Department</f:facet>
               #{employee.department}
            </h:column>

            <h:column>
               <f:facet name = "header">Age</f:facet>
               #{employee.age}
            </h:column>

            <h:column>
               <f:facet name = "header">Salary</f:facet>
               #{employee.salary}
            </h:column>
         </h:dataTable>

         <h3>Add Employee</h3>
         <hr/>
         <table>
            <tr>
               <td>Name :</td>
               <td><h:inputText size = "10" value = "#{userData.name}" /></td>
            </tr>
            <tr>
               <td>Department :</td>
               <td><h:inputText size = "20" value = "#{userData.dept}" /></td>
            </tr>
            <tr>
               <td>Age :</td>
               <td><h:inputText size = "5" value = "#{userData.age}" /></td>
            </tr>
            <tr>
               <td>Salary :</td>
               <td><h:inputText size = "5" value = "#{userData.salary}" /></td>
            </tr>
            <tr>
               <td> </td>
               <td><h:commandButton value = "Add Employee"
                  action = "#{userData.addEmployee}" /></td>
            </tr>
         </table>
      </h:form>
   </h:body>
</html>

Once you are ready with all the changes done, let us compile and run the application as we did in the JSF - First Application chapter. If everything is fine with your application, this will produce the following result. Add values to the Add Employee form and click the Add Employee button. See the following result.
How to select a directory and store the location using Tkinter in Python?
We are familiar with dialog boxes and have interacted with them in many types of applications. Such dialogs are useful for building applications where user interaction is a prime need. We can use dialog boxes to ask the user to select different types of files and then perform certain operations such as reading the file, writing to the file, etc. Dialog boxes can be created using the filedialog module in Python.

In this example, we will create an application that asks the user to select a file from the local directory and then displays the selected file's location with the help of a Label.

# Import the Tkinter library
from tkinter import *
from tkinter import ttk
from tkinter import filedialog

# Create an instance of Tkinter frame
win = Tk()

# Define the geometry
win.geometry("750x250")

def select_file():
   path = filedialog.askopenfilename(title="Select a File",
      filetypes=(('text files', '*.txt'), ('all files', '*.*')))
   Label(win, text=path, font=13).pack()

# Create a label and a Button to open the dialog
Label(win, text="Click the Button to Select a File",
   font=('Arial 18 bold')).pack(pady=20)
button = ttk.Button(win, text="Select", command=select_file)
button.pack(ipadx=5, pady=15)

win.mainloop()

Running the above code will display a window that contains a button to select a file from the directory and display the file's location in the window. Now, select any file from the local directory and it will display the location of the file in a Label widget.
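Since the title asks about selecting a *directory*, note that tkinter's filedialog also provides askdirectory for exactly that. A minimal sketch (the function name choose_directory is illustrative, not from the article):

```python
from tkinter import Tk, filedialog

def choose_directory():
    """Open a directory-chooser dialog and return the selected path."""
    root = Tk()
    root.withdraw()        # hide the empty root window behind the dialog
    path = filedialog.askdirectory(title="Select a Directory")
    root.destroy()
    return path            # empty string if the user cancels
```

askdirectory returns the chosen directory's path as a string, which can then be stored in a variable or shown in a Label exactly as the file path is above.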
How to initialize memory with a new operator in C++?
The new operator in C++ allocates memory but does not initialize it by default. If you want to allocate an array of type int with the new operator and initialize all elements to the default value (i.e. 0 in the case of ints), you can use the following syntax −

new int[10]();

Note that you must use the empty parentheses: you can't, for example, use (0) or any other expression, which is why this is only helpful for default initialization. There are other methods, such as fill_n and memset, which you can use to initialize the same memory to non-default values.

#include <iostream>
#include <cstring>    // for memset
#include <algorithm>  // for std::fill_n

int main() {
   int* myArray = new int[10]();   // value-initialized: every element is 0

   // Initialize to 0 using memset
   memset(myArray, 0, 10 * sizeof(int));

   // Using fill_n assigns the value 1 to each element
   std::fill_n(myArray, 10, 1);

   delete[] myArray;   // release the memory
}
Java Arithmetic Operator Examples
The following program is a simple example which demonstrates the arithmetic operators. Copy and paste the following Java program into a Test.java file, then compile and run this program −

public class Test {

   public static void main(String args[]) {
      int a = 10;
      int b = 20;
      int c = 25;
      int d = 25;

      System.out.println("a + b = " + (a + b));
      System.out.println("a - b = " + (a - b));
      System.out.println("a * b = " + (a * b));
      System.out.println("b / a = " + (b / a));
      System.out.println("b % a = " + (b % a));
      System.out.println("c % a = " + (c % a));
      System.out.println("a++ = " + (a++));
      System.out.println("b-- = " + (b--));

      // Check the difference in d++ and ++d
      System.out.println("d++ = " + (d++));
      System.out.println("++d = " + (++d));
   }
}

This will produce the following result −

a + b = 30
a - b = -10
a * b = 200
b / a = 2
b % a = 0
c % a = 5
a++ = 10
b-- = 20
d++ = 25
++d = 27
Impala - Union Clause
You can combine the results of two queries using the Union clause of Impala. Following is the syntax of the Union clause in Impala.

query1 union query2;

Assume we have a table named customers in the database my_db and its contents are as follows −

[quickstart.cloudera:21000] > select * from customers;
Query: select * from customers
+----+----------+-----+-----------+--------+
| id | name     | age | address   | salary |
+----+----------+-----+-----------+--------+
| 1  | Ramesh   | 32  | Ahmedabad | 20000  |
| 9  | robert   | 23  | banglore  | 28000  |
| 2  | Khilan   | 25  | Delhi     | 15000  |
| 4  | Chaitali | 25  | Mumbai    | 35000  |
| 7  | ram      | 25  | chennai   | 23000  |
| 6  | Komal    | 22  | MP        | 32000  |
| 8  | ram      | 22  | vizag     | 31000  |
| 5  | Hardik   | 27  | Bhopal    | 40000  |
| 3  | kaushik  | 23  | Kota      | 30000  |
+----+----------+-----+-----------+--------+
Fetched 9 row(s) in 0.59s

In the same way, suppose we have another table named employee and its contents are as follows −

[quickstart.cloudera:21000] > select * from employee;
Query: select * from employee
+----+---------+-----+---------+--------+
| id | name    | age | address | salary |
+----+---------+-----+---------+--------+
| 3  | mahesh  | 54  | Chennai | 55000  |
| 2  | ramesh  | 44  | Chennai | 50000  |
| 4  | Rupesh  | 64  | Delhi   | 60000  |
| 1  | subhash | 34  | Delhi   | 40000  |
+----+---------+-----+---------+--------+
Fetched 4 row(s) in 0.59s

Following is an example of the union clause in Impala. In this example, we arrange the records in both tables in the order of their ids and limit their number to 3 using two separate queries, joining these queries using the UNION clause.

[quickstart.cloudera:21000] > select * from customers order by id limit 3
   union select * from employee order by id limit 3;

On executing, the above query gives the following output.
Query: select * from customers order by id limit 3 union select * from employee order by id limit 3
+----+---------+-----+-----------+--------+
| id | name    | age | address   | salary |
+----+---------+-----+-----------+--------+
| 2  | Khilan  | 25  | Delhi     | 15000  |
| 3  | mahesh  | 54  | Chennai   | 55000  |
| 1  | subhash | 34  | Delhi     | 40000  |
| 2  | ramesh  | 44  | Chennai   | 50000  |
| 3  | kaushik | 23  | Kota      | 30000  |
| 1  | Ramesh  | 32  | Ahmedabad | 20000  |
+----+---------+-----+-----------+--------+
Fetched 6 row(s) in 3.11s
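The UNION semantics shown here are standard SQL, so they can also be sketched with Python's built-in sqlite3 module. The tables below are trimmed-down stand-ins for the Impala ones; note that UNION removes duplicate rows, unlike UNION ALL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE employee  (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Ramesh"), (2, "Khilan"), (3, "kaushik")])
cur.executemany("INSERT INTO employee VALUES (?, ?)",
                [(1, "subhash"), (2, "ramesh"), (2, "Khilan")])

# UNION de-duplicates: (2, 'Khilan') appears in both tables but only once here
rows = cur.execute("""
    SELECT id, name FROM customers
    UNION
    SELECT id, name FROM employee
    ORDER BY id
""").fetchall()
print(rows)
conn.close()
```

Running UNION ALL instead would return six rows, keeping the duplicate (2, 'Khilan').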
Create and Access a Python Package - GeeksforGeeks
15 Feb, 2018

Packages are a way of structuring modules and sub-packages into a well-organized hierarchy, making directories and modules easy to access. Just like there are different drives and folders in an OS to help us store files, packages help us store sub-packages and modules, so that they can be used when necessary.

Creating and Exploring Packages

To tell Python that a particular directory is a package, we create a file named __init__.py inside it; the directory is then considered a package and we may create other modules and sub-packages within it. This __init__.py file can be left blank or can hold the initialization code for the package. To create a package in Python, we need to follow these three simple steps:

First, we create a directory and give it a package name, preferably related to its operation.
Then we put the classes and the required functions in it.
Finally we create an __init__.py file inside the directory, to let Python know that the directory is a package.

Example of Creating Package

Let's look at this example and see how a package is created. Let's create a package named Cars and build three modules in it namely, Bmw, Audi and Nissan.

First we create a directory and name it Cars. Then we need to create modules.
To do this we need to create a file with the name Bmw.py and create its content by putting this code into it.# Python code to illustrate the Modulesclass Bmw: # First we create a constructor for this class # and add members to it, here models def __init__(self): self.models = ['i8', 'x1', 'x5', 'x6'] # A normal print function def outModels(self): print('These are the available models for BMW') for model in self.models: print('\t%s ' % model)Then we create another file with the name Audi.py and add the similar type of code to it with different members.# Python code to illustrate the Moduleclass Audi: # First we create a constructor for this class # and add members to it, here models def __init__(self): self.models = ['q7', 'a6', 'a8', 'a3'] # A normal print function def outModels(self): print('These are the available models for Audi') for model in self.models: print('\t%s ' % model)Then we create another file with the name Nissan.py and add the similar type of code to it with different members.# Python code to illustrate the Moduleclass Nissan: # First we create a constructor for this class # and add members to it, here models def __init__(self): self.models = ['altima', '370z', 'cube', 'rogue'] # A normal print function def outModels(self): print('These are the available models for Nissan') for model in self.models: print('\t%s ' % model)Finally we create the __init__.py file. This file will be placed inside Cars directory and can be left blank or we can put this initialisation code into it.from Bmw import Bmwfrom Audi import Audifrom Nissan import NissanNow, let’s use the package that we created. 
To do this make a sample.py file in the same directory where Cars package is located and add the following code to it:# Import classes from your brand new packagefrom Cars import Bmwfrom Cars import Audifrom Cars import Nissan # Create an object of Bmw class & call its methodModBMW = Bmw()ModBMW.outModels() # Create an object of Audi class & call its methodModAudi = Audi()ModAudi.outModels() # Create an object of Nissan class & call its methodModNissan = Nissan()ModNissan.outModels()Various ways of Accessing the PackagesLet’s look at this example and try to relate packages with it and how can we access it.import in PackagesSuppose the cars and the brand directories are packages. For them to be a package they all must contain __init__.py file in them, either blank or with some initialization code. Let’s assume that all the models of the cars to be modules. Use of packages helps importing any modules, individually or whole.Suppose we want to get Bmw i8. The syntax for that would be:'import' Cars.Bmw.x5 While importing a package or sub packages or modules, Python searches the whole tree of directories looking for the particular package and proceeds systematically as programmed by the dot operator.If any module contains a function and we want to import that. For e.g., a8 has a function get_buy(1) and we want to import that, the syntax would be:import Cars.Audi.a8 Cars.Audi.a8.get_buy(1) While using just the import syntax, one must keep in mind that the last attribute must be a subpackage or a module, it should not be any function or class name.β€˜from...import’ in PackagesNow, whenever we require using such function we would need to write the whole long line after importing the parent package. To get through this in a simpler way we use β€˜from’ keyword. For this we first need to bring in the module using β€˜from’ and β€˜import’:from Cars.Audi import a8Now we can call the function anywhere usinga8.get_buy(1)There’s also another way which is less lengthy. 
We can directly import the function and use it wherever necessary. First import it using:from Cars.Audi.a8 import get_buyNow call the function from anywhere:get_buy(1)β€˜from...import *’ in PackagesWhile using the from...import syntax, we can import anything from submodules to class or function or variable, defined in the same module. If the mentioned attribute in the import part is not defined in the package then the compiler throws an ImportError exception.Importing sub-modules might cause unwanted side-effects that happens while importing sub-modules explicitly. Thus we can import various modules at a single time using * syntax. The syntax is:from Cars.Chevrolet import *This will import everything i.e., modules, sub-modules, function, classes, from the sub-package.This article is contributed by Chinmoy Lenka. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.My Personal Notes arrow_drop_upSave First we create a directory and name it Cars. Then we need to create modules. 
To do this we need to create a file with the name Bmw.py and create its content by putting this code into it.# Python code to illustrate the Modulesclass Bmw: # First we create a constructor for this class # and add members to it, here models def __init__(self): self.models = ['i8', 'x1', 'x5', 'x6'] # A normal print function def outModels(self): print('These are the available models for BMW') for model in self.models: print('\t%s ' % model)Then we create another file with the name Audi.py and add the similar type of code to it with different members.# Python code to illustrate the Moduleclass Audi: # First we create a constructor for this class # and add members to it, here models def __init__(self): self.models = ['q7', 'a6', 'a8', 'a3'] # A normal print function def outModels(self): print('These are the available models for Audi') for model in self.models: print('\t%s ' % model)Then we create another file with the name Nissan.py and add the similar type of code to it with different members.# Python code to illustrate the Moduleclass Nissan: # First we create a constructor for this class # and add members to it, here models def __init__(self): self.models = ['altima', '370z', 'cube', 'rogue'] # A normal print function def outModels(self): print('These are the available models for Nissan') for model in self.models: print('\t%s ' % model) # Python code to illustrate the Modulesclass Bmw: # First we create a constructor for this class # and add members to it, here models def __init__(self): self.models = ['i8', 'x1', 'x5', 'x6'] # A normal print function def outModels(self): print('These are the available models for BMW') for model in self.models: print('\t%s ' % model) Then we create another file with the name Audi.py and add the similar type of code to it with different members. 
# Python code to illustrate the Moduleclass Audi: # First we create a constructor for this class # and add members to it, here models def __init__(self): self.models = ['q7', 'a6', 'a8', 'a3'] # A normal print function def outModels(self): print('These are the available models for Audi') for model in self.models: print('\t%s ' % model) Then we create another file with the name Nissan.py and add the similar type of code to it with different members. # Python code to illustrate the Moduleclass Nissan: # First we create a constructor for this class # and add members to it, here models def __init__(self): self.models = ['altima', '370z', 'cube', 'rogue'] # A normal print function def outModels(self): print('These are the available models for Nissan') for model in self.models: print('\t%s ' % model) Finally we create the __init__.py file. This file will be placed inside Cars directory and can be left blank or we can put this initialisation code into it.from Bmw import Bmwfrom Audi import Audifrom Nissan import NissanNow, let’s use the package that we created. To do this make a sample.py file in the same directory where Cars package is located and add the following code to it:# Import classes from your brand new packagefrom Cars import Bmwfrom Cars import Audifrom Cars import Nissan # Create an object of Bmw class & call its methodModBMW = Bmw()ModBMW.outModels() # Create an object of Audi class & call its methodModAudi = Audi()ModAudi.outModels() # Create an object of Nissan class & call its methodModNissan = Nissan()ModNissan.outModels()Various ways of Accessing the PackagesLet’s look at this example and try to relate packages with it and how can we access it.import in PackagesSuppose the cars and the brand directories are packages. For them to be a package they all must contain __init__.py file in them, either blank or with some initialization code. Let’s assume that all the models of the cars to be modules. 
Use of packages helps importing any modules, individually or whole.Suppose we want to get Bmw i8. The syntax for that would be:'import' Cars.Bmw.x5 While importing a package or sub packages or modules, Python searches the whole tree of directories looking for the particular package and proceeds systematically as programmed by the dot operator.If any module contains a function and we want to import that. For e.g., a8 has a function get_buy(1) and we want to import that, the syntax would be:import Cars.Audi.a8 Cars.Audi.a8.get_buy(1) While using just the import syntax, one must keep in mind that the last attribute must be a subpackage or a module, it should not be any function or class name.β€˜from...import’ in PackagesNow, whenever we require using such function we would need to write the whole long line after importing the parent package. To get through this in a simpler way we use β€˜from’ keyword. For this we first need to bring in the module using β€˜from’ and β€˜import’:from Cars.Audi import a8Now we can call the function anywhere usinga8.get_buy(1)There’s also another way which is less lengthy. We can directly import the function and use it wherever necessary. First import it using:from Cars.Audi.a8 import get_buyNow call the function from anywhere:get_buy(1)β€˜from...import *’ in PackagesWhile using the from...import syntax, we can import anything from submodules to class or function or variable, defined in the same module. If the mentioned attribute in the import part is not defined in the package then the compiler throws an ImportError exception.Importing sub-modules might cause unwanted side-effects that happens while importing sub-modules explicitly. Thus we can import various modules at a single time using * syntax. The syntax is:from Cars.Chevrolet import *This will import everything i.e., modules, sub-modules, function, classes, from the sub-package.This article is contributed by Chinmoy Lenka. 
If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.My Personal Notes arrow_drop_upSave from Bmw import Bmwfrom Audi import Audifrom Nissan import Nissan Now, let’s use the package that we created. To do this make a sample.py file in the same directory where Cars package is located and add the following code to it: # Import classes from your brand new packagefrom Cars import Bmwfrom Cars import Audifrom Cars import Nissan # Create an object of Bmw class & call its methodModBMW = Bmw()ModBMW.outModels() # Create an object of Audi class & call its methodModAudi = Audi()ModAudi.outModels() # Create an object of Nissan class & call its methodModNissan = Nissan()ModNissan.outModels() Various ways of Accessing the Packages Let’s look at this example and try to relate packages with it and how can we access it. import in PackagesSuppose the cars and the brand directories are packages. For them to be a package they all must contain __init__.py file in them, either blank or with some initialization code. Let’s assume that all the models of the cars to be modules. Use of packages helps importing any modules, individually or whole.Suppose we want to get Bmw i8. The syntax for that would be:'import' Cars.Bmw.x5 While importing a package or sub packages or modules, Python searches the whole tree of directories looking for the particular package and proceeds systematically as programmed by the dot operator.If any module contains a function and we want to import that. 
For example, a8 has a function get_buy(1) that we want to import. The syntax would be:

import Cars.Audi.a8
Cars.Audi.a8.get_buy(1)

While using just the import syntax, keep in mind that the last attribute must be a sub-package or a module; it should not be any function or class name.

'from...import' in Packages

Now, whenever we need such a function, we would have to write the whole long line after importing the parent package. To get through this in a simpler way, we use the 'from' keyword. For this, we first bring in the module using 'from' and 'import':

from Cars.Audi import a8

Now we can call the function anywhere using:

a8.get_buy(1)

There is also another, less lengthy way: we can directly import the function and use it wherever necessary. First import it using:

from Cars.Audi.a8 import get_buy

Now call the function from anywhere:

get_buy(1)

'from...import *' in Packages

While using the from...import syntax, we can import anything from sub-modules to a class, function, or variable defined in a module. If the attribute mentioned in the import is not defined in the package, the interpreter throws an ImportError exception.

Importing sub-modules explicitly might cause unwanted side effects. We can instead import several modules at a single time using the * syntax:

from Cars.Chevrolet import *

This will import everything (modules, sub-modules, functions, and classes) from the sub-package.

This article is contributed by Chinmoy Lenka. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.

Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
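To make the import forms discussed above concrete, here is a self-contained sketch. It builds a tiny throwaway Cars/Audi/a8 package on disk (mirroring the article's hypothetical example; the get_buy body here is made up for illustration) and then imports it both ways:

```python
import os
import sys
import tempfile

# Build a minimal Cars/Audi package tree in a temporary directory.
root = tempfile.mkdtemp()
audi_dir = os.path.join(root, "Cars", "Audi")
os.makedirs(audi_dir)

# Each package directory needs an __init__.py (blank is fine).
for pkg_dir in (os.path.join(root, "Cars"), audi_dir):
    open(os.path.join(pkg_dir, "__init__.py"), "w").close()

# A stand-in a8 module with a get_buy function.
with open(os.path.join(audi_dir, "a8.py"), "w") as f:
    f.write("def get_buy(n):\n    return 'buying %d a8' % n\n")

sys.path.insert(0, root)

# Plain import: the full dotted path is required at every call site.
import Cars.Audi.a8
print(Cars.Audi.a8.get_buy(1))  # -> buying 1 a8

# from...import: shorter call sites.
from Cars.Audi.a8 import get_buy
print(get_buy(1))  # -> buying 1 a8
```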
Plotting a cumulative graph of Python datetimes in Matplotlib
To plot a cumulative graph of Python datetimes, we can take the following steps:

Set the figure size and adjust the padding between and around the subplots.

Make a Pandas dataframe with some college data, with one key for the time duration and another key for the number of students admitted in the subsequent year.

Plot the dataframe using the plot() method with kind='bar'.

To display the figure, use the show() method.

import pandas as pd
from matplotlib import pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

college_student_data = {'durations': [1, 2, 2.5, 3, 4.5, 5, 5.5, 6, 6.5, 7],
                        'admissions': [5, 50, 150, 3380, 760, 340, 115, 80, 40, 10]}

df = pd.DataFrame(data=college_student_data)

df.plot(kind='bar', x='durations', y='admissions',
        legend=False, color='red', rot=0)

plt.show()
Front Office Management - Accounting
The accounting section of any business or organization tracks, records, and manages the financial transactions of the business with its customers and clients. The accounting department directly handles the financial health and tracks the performance of any business. It is helpful for the management to take appropriate decisions.

When it comes to a hotel business, accounting means managing expenses and revenue. It provides clear information to the guests, thereby avoiding any unpleasant surprises. Let us know more about the accounts section of the front office.

It is a systematic process in which the front office accounting staff identifies, records, measures, classifies, verifies, summarizes, interprets, organizes, and communicates financial information for a hotel business.

In the simplest form, a front office account resembles the English alphabet 'Block-T'. In the domain of front office accounting, the charges are entered on the left side of the 'T'; they increase the account balance. The payments are entered on the right side of the 'T'; they decrease the account balance.

Net Outstanding Balance = Previous Balance + Debit − Credit

where debit increases the outstanding balance and credit decreases it.

Most contemporary hotel businesses employ an automated accounting system. The objectives of the accounting system are:

To handle transactions between the guests and the hotel accurately.
To track the transactions throughout the guest's occupancy.
To monitor the guest's credit limit.
To avoid the possibility of any fraud.
To organize and report the transactional information.

There are the following typical accounts in a hotel business dealing with customers:

Guest Account
Non-guest or City Account
Management Account

Here are some prominent differences between a guest and a city account.

Some hotels allow the managers to entertain the guests' queries or grievances, or any possibility of acquiring a business deal, over a brief interaction with the guests.
For example, if a guest has some problem with the hotel policy, the manager calls the guest for an interaction over a coffee or a drink and tries to resolve it. The expenses towards this interaction are then recorded on the management account.

A folio is a statement of all transactions that have taken place in a single account. The front office staff records all the transactions between the guest and the hotel on the folio. The folio is opened with a zero initial balance. The balance in the folio then increases or decreases depending upon the transactions. At the time of check-out, the folio balance must return to zero on settlement of payment.

There are the following major types of folios:

Guest − Assigned to charge for individual guests.
Master − Assigned to charge for a group/organization.
Non-guest − Assigned for a non-resident guest.
Employee − Assigned for a hotel employee to charge against coffee shop privileges.

The process of recording the entries on the folio is called 'posting' of transactions. There are two basic types of postings:

Credit − They reduce the guest's outstanding balance. These entries include complete or partial payment, or adjustments against tokens.
Debit − They increase the outstanding balance in the guest account. Debit entries include charges under restaurant, room service, health center/spa, laundry, telephone, and transportation.
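The posting rules and the balance formula above can be sketched in a few lines of code. This is a minimal illustration, not a model of any real property management system; the class and entry names are invented for the example:

```python
class Folio:
    """A single guest account: debits raise the balance, credits lower it."""

    def __init__(self):
        self.balance = 0.0   # a folio opens with a zero initial balance
        self.entries = []

    def debit(self, description, amount):
        # Debit postings (restaurant, laundry, ...) increase the outstanding balance.
        self.entries.append(("debit", description, amount))
        self.balance += amount

    def credit(self, description, amount):
        # Credit postings (payments, adjustments) reduce the outstanding balance.
        self.entries.append(("credit", description, amount))
        self.balance -= amount


folio = Folio()
folio.debit("room service", 40.0)
folio.debit("laundry", 15.0)
folio.credit("cash payment at check-out", 55.0)
print(folio.balance)  # -> 0.0, the folio returns to zero on settlement
```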
Vouchers are detailed documentary evidence for a transaction. They transfer the transaction from its source to the front office. Vouchers are used to notify the front office about a guest's purchases or availing of any service at the hotel. The following typical vouchers are used in the hotel:

Cash Receipt Voucher
Commission Voucher
Charge Voucher
Petty Cash Voucher
Allowance Voucher
Miscellaneous Charge Order (MCO)
Paid-out Voucher (VPO)
Transfer Voucher

The ledgers are a group of accounts. There are two ledgers the front office handles:

Guest ledger − A set of all guest accounts currently residing in the hotel.
Non-guest ledger − A set of all unsettled, departed guest accounts.

There are two other types of ledgers used in the hotel. Both are used by the back office accounting section:

Receivable ledger − The back office accounting staff mails the bills and statements to the guests after their departure without settling the bills, and ensures the payments for services provided.
Payable ledger − The staff handles amounts of money paid in advance on behalf of the guest to the hotel for future consumption of goods and services.

Account settlement can happen in various ways:

By Guest − The guest settles their own account by cash/credit card/cheque.
By Organization − The organization settles the guest's account by transferring money to the hotel account.
There are the following popular methods of account settlement:

Account Settlement in Local Currency − A guest can pay in the local currency, where the payment is not chargeable with conversion fees.

Account Settlement in Foreign Currency − If the guest prefers to pay in a foreign currency, the payment service offered by the bank is chargeable at around 3% to 6% of the total payable amount.

Account Settlement Using a Traveler Check − Travelers' cheques, pre-printed cheques in the denominations of major world currencies, are a good alternative to paying by cash.

Debit Card − Use of magnetic cards for payment against an account is most common today. Paying by debit card is as good as paying by cash, as the amount of money is instantly transferred from the guest's bank account into the hotel's bank account. In the case of credit card settlement, the accounting staff mails the charge vouchers signed by guests to the credit card company, preferably within a specified time. The credit card company then settles the guest account by transferring money against it.

Credit Settlement by Organization − Many national, international, private, or public organizations send their employees or students to attend workshops, seminars, or meetings. Such organizations tie up with the hotel for paying the bills of their employees on credit. The organizations reserve accommodations depending on the number of room-nights (number of rooms × number of nights the representatives are expected to occupy). This is popularly known as account settlement using Direct Billing.

In direct billing account settlement, the front office staff verifies guest folios and transfers the guest account to the non-guest or city account. The hotel's back office accounting verifies the guest folios and is responsible for collecting the direct billing amount from a direct billing agency such as an embassy, university, or organization.
The accounting section also notifies the guests that if the direct billing agency fails or refuses to pay the charges, then the guests need to settle the account by paying from their own pocket.

Combined Account Settlement − A guest can settle an account by paying a partial amount in cash and the remaining amount on credit. The front office staff needs to prepare the supporting document for such a payment and hand it over to the back office accounts.
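The room-nights arithmetic used for direct-billing reservations is simple enough to show in one line; the figures below are made up for illustration:

```python
# room-nights = number of rooms x number of nights occupied
rooms = 12
nights = 3
room_nights = rooms * nights
print(room_nights)  # -> 36
```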
Natural Language to SQL: Use it on your own database | by Param Raval | Towards Data Science
Not everyone likes or knows how to write an SQL query to search within a huge database. To a layman, it would be a nightmare to construct complex queries with SQL functions and keywords like JOIN and GROUP BY. A system to convert simple information retrieval questions into a corresponding SQL query would come in handy, especially in the banking and healthcare sectors. And with advances in NLP, this could be a reality very soon.

Though there has been substantial research to solve this problem, we are yet to get completely accurate results. Still, from work done on benchmark datasets like WikiSQL, Spider, SParC, and ATIS, people have managed to build Text-to-SQL systems with decent results. Here, you will see how to implement such a system using the EditSQL model, along with a simple workaround to use it on your own schema.

This article is structured as follows:

1. Introduction
2. Installing & Running EditSQL on SParC
3. Making Changes to the Code
4. Adding a Custom Database and Building the Vocab
5. Testing your Question
6. Conclusion

Generating SQL queries from user questions involves solving tasks beyond just question-answering and machine translation. As in question-answering, as the user interacts more, they will often put forward questions that require complex processing, such as a reference to some information mentioned previously, or that require a combination of several disparate schemas.

As a tool, such a system helps the end users, who are often inexperienced in database querying matters, extract information from scores of complex databases. Several attempts have been made on different benchmark datasets to address such problems in the Text-to-SQL task, especially in semantic parsing. You can check out this interesting survey [4] that introduces the task and the problem quite well. You can also check out this interactive converter built by the AllenNLP team on the ATIS dataset.
Just like several other, better-performing models, they use semantic parsing and an encoder-decoder architecture to do the job. As of July 2020, the leaderboards for Spider and SParC list the following as some of the best-performing models with open-source code available:

RATSQL v3 + BERT [5]
IRNet + BERT [6]
EditSQL + BERT

Here, you can find the complete leaderboards for Spider and SParC. For practicality, we limit the scope of this article to exploring EditSQL, especially on SParC.

While the listed models perform well on their respective datasets, the given codes do not have the option to test the model on a custom database (let alone train on one). In fact, even making changes to existing queries of the dataset is a tedious and error-prone process. However, out of the models given above, we can make some changes to EditSQL and manage to run the SParC experiment on a custom SQLite database. These changes are minor and prove to be an easy workaround to test the performance on your own database.

EditSQL attempts to solve a context-dependent text-to-SQL query generation task and incorporates interaction histories as well as an utterance-table encoder-decoder unit to robustly understand the context of a user's utterance (or question). To do this, they use an encoder based on BERT, which helps in capturing complex database structures and relating them to the utterances. Thus, given an arbitrary question, the model will most certainly identify correctly the database schema to which the question corresponds.

Furthermore, EditSQL takes into account the relation between the user utterance and the table structures, as well as the recent history of encoding, to correctly identify the context. As shown in the diagram above, the information gained here is then passed to a table-aware decoder that uses an attention-enhanced LSTM to perform SQL generation. However, the user often asks questions that contain information provided in a previous interaction.
In a sequential generation mechanism, this might lead to redundancy in processing and query generation. What gives EditSQL its name is the novel mechanism to "edit" the generated tokens of the query and take care of this problem using another Bi-LSTM paired with the attention-enabled context vector.

To run this model, make sure you have a GPU-enabled system. Alternatively, you can work on Google Colab. First off, clone this repository into your system (or in Colab):

git clone https://github.com/ryanzhumich/editsql

Download the SParC dataset to run the experiment from their official page, or use the gdown command as follows:

pip install gdown
gdown --id 13Abvu5SUMSP3SJM-ZIj66mOkeyAquR73

Place the unzipped folder into editsql/data/sparc

Next, you can follow the instructions in their README or follow these steps:

pip install -r requirements.txt  # install dependencies

# Download the pre-trained BERT model
gdown --id 1f_LEWVgrtZLRuoiExJa5fNzTS8-WcAX9
# or download it directly from
# https://drive.google.com/file/d/1f_LEWVgrtZLRuoiExJa5fNzTS8-WcAX9/view?usp=sharing

And place the pretrained BERT model in the file tree as below:

If the above steps are successfully executed, you can run the following bash script to begin training:

bash run_sparc_editsql.sh

The authors have saved their experimental logs at logs/logs_sparc_editsql in log.txt, where you can find the details and performance of each epoch. You will need to successfully execute at least the initial preprocessing and vocab building steps during training. This will create folders named:

logs_sparc_editsql
processed_data_sparc_removefrom
processed_data_sparc_removefrom_test

You can interrupt the execution (Ctrl+C) once your output looks like shown below (right before the actual training starts). Number of examples and batch size values might differ as per your configuration. Successful execution of these steps will create the processed data files and vocab files, which are necessary for testing on the development set.
Since we haven't really performed any "training", we will need to download the authors' pretrained model and place it under logs/logs_sparc_editsql under the name save_31_sparc_editsql.

gdown --id 1MRN3_mklw8biUphFxmD7OXJ57yS-FkJP
# or download directly from
# https://drive.google.com/file/d/1MRN3_mklw8biUphFxmD7OXJ57yS-FkJP/view?usp=sharing

Now you are all set to run the bash script to predict on the dev set.

bash test_sparc_editsql.sh

The predictions are saved in a structured JSON file in logs/logs_sparc_editsql under the name dev_use_predicted_queries_predictions.json. It can be cumbersome to verify each predicted query against its corresponding question, so you can run the following script to get a readable output stored in a file. This script will create output.txt containing the input question, the predicted query, and the prediction confidence for every question-query pair in the predictions file.

Make sure you have a .sqlite database file containing your SQL database. Let's say you have a db file named "sales.sqlite". Put this file in a new folder named "sales" as shown below.

Next, open tables.json, found in data/sparc, and add the description of your database schema and tables there. Use the structure described in the tables section of the README here. You can use the entries already existing in tables.json as a reference and add a similar entry for your schema.
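As a sketch of what such an entry might look like, here is a minimal schema description built as a Python dict and dumped as JSON. The field names follow the Spider/SParC tables.json layout as described in the Spider repository; the "invoices" table and its columns are invented for the "sales" example, so verify the layout against an existing entry in tables.json before committing:

```python
import json

# Hypothetical tables.json entry for the "sales" database described above.
sales_schema = {
    "db_id": "sales",
    "table_names_original": ["invoices"],
    "table_names": ["invoices"],
    # Each column is [table_index, column_name]; -1 holds the special "*".
    "column_names_original": [[-1, "*"], [0, "id"], [0, "amount"]],
    "column_names": [[-1, "*"], [0, "id"], [0, "amount"]],
    "column_types": ["text", "number", "number"],
    "primary_keys": [1],
    "foreign_keys": [],
}
print(json.dumps(sales_schema, indent=2))
```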
Open dev.json and dev_no_value.json, found in the same directory, and observe the input question structure as shown in the repository of Spider: right here, and with examples in the file parsed_sql_examples.sql.

All you have to do is replicate one complete entry (everything between the "{" just above "database_id" and the "}" before the next "database_id" entry). Once you append this replicated entry, just edit "utterance" and "utterance_toks" to your desired question and "database_id" to your database name. In our example, it will be "database_id": "sales".
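The copy-and-edit step above can also be done programmatically. This sketch edits only the three fields the walkthrough names; the flat template here is a simplification, so copy a real entry from dev.json and leave all its other fields untouched:

```python
import json

def retarget(entry, db_id, question):
    """Deep-copy a dev.json entry and point it at a new database/question."""
    new = json.loads(json.dumps(entry))  # deep copy via JSON round-trip
    new["database_id"] = db_id
    new["utterance"] = question
    new["utterance_toks"] = question.split()
    return new

# Stand-in for an entry replicated from dev.json (other fields elided).
template = {"database_id": "existing_db",
            "utterance": "old question",
            "utterance_toks": ["old", "question"]}

entry = retarget(template, "sales", "show total sales per region")
print(json.dumps(entry, indent=2))
```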
Github: https://github.com/taoyds/spider [4] Affolter, Katrin, Kurt Stockinger, and Abraham Bernstein, A comparative survey of recent natural language interfaces for databases (2019), The VLDB Journal 28.5: 793–819. [5] Wang, Bailin, et al, Rat-sql: Relation-aware schema encoding and linking for text-to-sql parsers (2019), arXiv preprint arXiv:1911.04942. [6] Guo, Jiaqi, et al, Towards complex text-to-sql in cross-domain database with intermediate representation (2019), arXiv preprint arXiv:1905.08205.
[ { "code": null, "e": 605, "s": 172, "text": "Not everyone likes or knows how to write an SQL query to search within a huge database. To a layman, it would be a nightmare to construct complex queries with SQL functions and keywords β€” like JOIN and GROUP BY. A system to convert simple information retr...
How to print a line on the console using C#?
To display a line on the console, Console.WriteLine() is used in C#. The Console class displays the result on the console.

I have first set a string:

string str = "Tom Hanks is an actor";

Now displaying the above line:

Console.WriteLine(str);

The following is the complete code:

using System;

namespace Program {
   public class Demo {
      public static void Main(String[] args) {
         string str = "Tom Hanks is an actor";
         Console.WriteLine("Displaying a line below");
         Console.WriteLine(str);
         Console.ReadLine();
      }
   }
}

Output:

Displaying a line below
Tom Hanks is an actor
How to import external libraries in JShell in Java 9?
JShell is an interactive tool for learning the Java language and prototyping Java code. JShell works by evaluating the commands that the user types into it. This tool works on the principle of REPL (Read-Evaluate-Print-Loop). By default, JShell automatically imports a few useful Java packages when the JShell session is started. We can type the command /imports to get a list of all these imports.

jshell> /imports
|    import java.io.*
|    import java.math.*
|    import java.net.*
|    import java.nio.file.*
|    import java.util.*
|    import java.util.concurrent.*
|    import java.util.function.*
|    import java.util.prefs.*
|    import java.util.regex.*
|    import java.util.stream.*

We can also import external libraries in JShell by using the steps below. If we want to create an InternetAddress object, which resides in the javax.mail.internet package, then we need to import that package in JShell.

jshell> import javax.mail.internet.InternetAddress
|  Error:
|  package javax.mail.internet does not exist
|  import javax.mail.internet.InternetAddress;
|         ^---------------------------------^

In the above, just importing the class doesn't work because the package is unknown to the classpath. We need to add jars or class files to the classpath by using the command:

/env --class-path <jars, class files>

jshell> /env --class-path \Users\user\mail-1.4.7.jar
|  Setting new options and restoring state.

jshell> import javax.mail.internet.InternetAddress

Finally, we can create an InternetAddress object as below:

jshell> InternetAddress from = new InternetAddress("a@a")
from ==> a@a
How to plot a time series array, with confidence intervals displayed in Python? (Matplotlib)
To plot a time series array with confidence intervals displayed in Python, we can take the following steps:

Set the figure size and adjust the padding between and around the subplots.

Get the time series array.

Initialize a variable, n_steps, to get the mean and standard deviation.

Get the under and above lines for the confidence intervals.

Plot the mean line using the plot() method.

Use the fill_between() method to shade the confidence interval.

To display the figure, use the show() method.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

time_series_array = np.sin(np.linspace(-np.pi, np.pi, 400)) + np.random.rand(400)
n_steps = 15

time_series_df = pd.DataFrame(time_series_array)
line = time_series_df.rolling(n_steps).mean()
line_deviation = 2 * time_series_df.rolling(n_steps).std()
under_line = (line - line_deviation)[0]
over_line = (line + line_deviation)[0]

plt.plot(line, linewidth=2)
plt.fill_between(line_deviation.index, under_line, over_line, color='red', alpha=.3)
plt.show()
Here’s How OpenAI Codex Will Revolutionize Programming (and the World) | by Alberto Romero | Towards Data Science
Until now, if we wanted to communicate with a computer, we had to learn its language. We had to adapt to it. From now on, computers will adapt to us.

OpenAI has done it again. In July last year, they released GPT-3, the first of its kind. This AI system showed a mastery of language like no other before. Mimicking Shakespeare's poetic genius, writing a rap song about Harry Potter in the style of Lil Wayne, or writing productivity articles are just a few of GPT-3's capabilities. GPT-3 was, at the time, the largest neural network ever created. It became clear for OpenAI, and the rest of the world, given how promptly others followed their example, that large pre-trained language models were AI's answer to the secrets of human language.

However, GPT-3 had some abilities not even OpenAI's researchers had thought of. Sharif Shameem was one of the first to notice GPT-3's coding skills. He successfully made the system build a generator that wrote code for him. The world was on the verge of one of the most important AI revolutions of our time.

Last month, OpenAI, in partnership with Microsoft and GitHub, presented GitHub Copilot, an AI pair programmer intended to aid developers in their boring day-to-day tasks. Reading documentation? Copilot does it for you. Writing unit tests? Copilot does it for you. The system acts as a super-powerful autocomplete. And it's fueled by OpenAI's latest star: Codex.

Codex, GPT-3's younger brother, is another language model. But instead of being a jack of all trades like GPT-3, Codex is a master. A master of coding. It's fluent in dozens of programming languages and can understand natural language and interpret it to create unambiguous commands that a computer can understand.

Yesterday, OpenAI released a new version of Codex through a new API (we can access it by signing up on a waitlist). They've also created a new challenge in which people will code against, and with, Codex (you can register here).
On top of that, Greg Brockman and Ilya Sutskever, two high-profile OpenAI researchers, conducted a live demo to show Codex's capabilities and give a hint of its current potential, and of what awaits in the future. In this article, I'll review what OpenAI revealed in the demo, with impressive examples of Codex's skill set. I'll finish with my thoughts on the implications of this technology and how it could reshape the future.

GPT-3 was a pioneer. When it came out, most people had never heard of language models, the Transformer architecture, unsupervised learning, or pre-training techniques. It showed the world what language AI systems were capable of. It became an AI celebrity, much like IBM Watson or AlphaZero before it. But it wouldn't be fair to write about Codex in the shadow of its older brother. Codex is, in and of itself, an AI superstar. It isn't just a quantitative improvement over GPT-3. The way Codex could impact our world is closer to how the iPhone did in 2007 than to how AlphaZero did(n't) in 2017. Our way of living is radically different from the pre-iPhone era, and Codex promises to change the world in an arguably comparable way.

One of the main drawbacks of GPT-3 was its lack of memory. Its context window (the amount of previous information it can access when you interact with it) was small and made the system commit errors. It would diverge from its theses in long essays, enter endless loops, or make incongruent arguments. Codex solves this problem with an increased memory store, from 4KB to 14KB, more than 3x GPT-3's size. This allows it to grasp a better, wider understanding of the task at hand.

GPT-3, as impressive as it is, lives within a virtual world, forever confined to the boundaries of an unperceivable reality of ones and zeroes. GPT-3 can influence the world only in an indirect way, "through the mind of the reader," as OpenAI's CTO Greg Brockman says.
Codex, in contrast, can directly affect the world by creating commands from English prompts that make a computer do something. Codex can take actions that have (I hope beneficial) consequences in the real world. "Proficient in more than a dozen programming languages, Codex can now interpret simple commands in natural language and execute them on the user's behalf — making it possible to build a natural language interface to existing applications."

After watching the demo live, I can only say Codex seems impressive. Not because it can write code in several languages or because it does it fairly well, but because its interpretations of the English prompts could be described as deep, nuanced, creative, and precise (although we should never forget it doesn't understand language in the same sense we do).

Watching Codex interpret natural language commands made me drop my jaw quite a few times. Ambiguous orders, informal language, even metaphors: Codex overcame those obstacles to achieve its goals with more than adequate JavaScript. It simply takes an English prompt, similar to a comment in a traditional program, as input; then it writes the code and sends it to the computer, which displays the output on the screen. Here are the curated highlights of three examples (from the demo and the blog post) and my impressions.

They wanted to showcase the basics, so there was no other way to start the demo than with "hello world." The inputs are in bold, and the outputs are in regular font.

Say Hello World.
Hello World
Now say it with feeling!
Hello World!!!
Even louder please.
HELLO WORLD!!!

The first thing we notice is that the model can self-reference. It can store previous commands in memory (within the same session) and use them adequately. It understands that "it" refers to "Hello World," which requires a certain understanding of syntax and context. The last prompt is even more complex, as it contains an ellipsis: a standalone "even louder please" command means nothing on its own.
In practice, it's as if we were having a conversation with Codex that it can remember and use in the future.

Then, they wanted to "broadcast that message to the world," so they asked Codex to create a web page and then a server to host that page. They asked the viewers to subscribe at openai.com/c to take part in the demo so Codex could send us a message with "Hello World." And, to make the mail amusing, they added the price of Bitcoin too. To send this email they used the Mailchimp API, which Codex learned to use just by reading the two middle sentences.

Look up the current Bitcoin price, and save it to a variable.
Here's a new method you can use.
codex_chimp.send(subject, body) -> sends Mailchimp email blast
Now send everyone an email, telling them (a) hello world and (b) the current Bitcoin.

A few seconds later I got this email. Amazing.

For this example, they decided to write a larger, more complex program. Instead of testing Codex with one-line commands, they created a game: the main character is a person who has to dodge a falling boulder by moving left and right on the screen.

First, they imported the image of a person, adapted its size to fit the screen, and placed it at the bottom of the screen. They wanted the person to move right and left with the arrow keys, so they prompted Codex: "Now make it controllable with the left and right arrow keys." How can Codex interpret this command correctly? What does "make it controllable" mean? It's a very ambiguous way of expressing "move X pixels to the left/right when the left/right key is pressed." Still, Codex managed to interpret the instruction and generated code to move the image 10 pixels left and right when the keys were pressed.

Tackling nuanced orders like this one is one of the most remarkable capabilities of Codex. Writing JavaScript is a skill not many people have, but anyone knows what it means to move the person left and right with the keys.
Codex closes the gap between our desires, which we often cannot express precisely even in natural language, and our expectations. The system also has to hold a notion of what's going on in order to follow the instructions. We can use the screen as an aid to grasp what's happening, but the model only has access to the previous code.

Then, they included an image of a boulder that the person has to dodge. However, as it was too big, they asked Codex to "make it small." The system has to not only understand the meaning of "small," but also have a decent sense of the relative nature of this adjective. An ant is small to us, but not to an atom. A house is small compared to the Earth, but not to us. Yet Codex converted the boulder to a reasonable size for the task at hand.

Finally, after adjusting the size of the boulder, they wanted it to fall from the sky and reappear at a random location after hitting the bottom of the screen. They asked Codex to "set its position to the top of the screen, at a random horizontal location. Now have it fall from the sky, and wrap around," which sent the image of the boulder downwards until it reached the bottom and then made it appear again at the top. Notice that the sentence "fall from the sky" is a metaphor: there's no sky and there's no actual falling. Codex had to understand that "fall" meant moving the image continually in a specific direction (down), and that "sky" referred to the top of the screen.

To further emphasize Codex's potential, they decided to use a real-world example. At the end of the day, most people don't know how to program, or don't even care that an AI system could help them do it. But, as I said at the beginning, the promise of Codex isn't constrained to the AI/tech world. Codex, and its future successors, will change how we interact with computers. For this example, they used a JavaScript Microsoft Word API to allow Codex to make changes to Word documents.
To top it off, they included a speech recognition system. Picture this: you talk to your computer, the speech system recognizes your voice and converts it into text, the text is sent to Codex (which already knows how to use the Microsoft API), and Codex transforms the command into JavaScript code and does what you intended to the document.

The ease with which Codex learns to leverage APIs is another great strength. The programming world is getting "API-ified," so these systems will become more useful as time progresses.

"It will take a long time for the technology to get good enough, but eventually we may forget that [we] used computers any other way. As more things become API-ified, AI systems that write code can easily make a lot of things happen in the world." — Sam Altman

Just imagine what Codex could do in a programmable world. Since the beginning of computer science, programming languages have become increasingly high-level. Codex is the missing link between human and computer language. We will be able to express a thought, and the computer will understand what we mean without uncertainty. Anyone will be able to communicate with computers in a way that feels natural to us. Codex is a primitive version of J.A.R.V.I.S.

"What you see here is a taste of the future. As the neural network gets really good at turning instructions to correct API codes, it'll become possible to do more and more sophisticated things with your software just by telling it what to do." — Ilya Sutskever

From punch cards to Python, the history of human-computer communication has never favored us. We've always had to sacrifice our comforts and adapt to machines. For this reason, people closely involved in computer communication jobs have been considered high-skill, high-value workers. Even today, when programming is considered a must-know skill, most people don't know how to create programs that make computers do things.
However, this is likely to change in the years to come thanks to systems like Codex. They may not replace programmers, but they will surely be a catalyst that empowers laypeople to enter the world of programming and develop healthy relationships with our friends, the computers.

One fair criticism of neural networks is their lack of accountability and interpretability. These systems do amazing things but also fail catastrophically. Failing itself isn't the problem. The main issue is that neural networks are black boxes: whatever goes inside stays inside, and there's no way for us to take a peek and get a better sense of what's going on. Debugging is extremely hard.

Here Codex contrasts with GPT-3. Both systems are trained as black boxes, but at test time, Codex lets us see the code it writes from our prompts. We don't just get the output from the computer; we also get what Codex has interpreted. It allows us to check the code, delete it, or make the system rewrite it if necessary. This point is key for developers. No professional programmer would use a code-generating system whose code remains hidden.

Greg Brockman talked about the two sides of programming. First, you need to understand the problem you want to solve and how to divide it into smaller pieces. Then, you need to take those pieces and map them to existing code. As Brockman wrote in the blog post, "The latter activity is probably the least fun part of programming (and the highest barrier to entry), and it's where OpenAI Codex excels most."

Codex isn't meant to remove programmers from their jobs, but to reduce the boring tasks they have to do and let them focus on those aspects of their job that require a higher level of cognitive effort. Adapting a problem to syntax that a computer can understand is boring. Facing a seemingly unsolvable problem, understanding it little by little, and making small advances towards a solution is what makes programming fun.
"[Codex] is really fun to use, and brings back the early joy of programming for me." — Sam Altman

Sam Altman has said this technology is in its infancy. But it's the first hint of what is to come, and it seems plausible that Codex and its successors will transversally change how we live. One thing is to revolutionize AI. Another, very different thing is to revolutionize the world to the point that the little things we individually do on any given day change. Codex isn't just something happening in AI that most of the world won't even notice. If successful, it will eventually modify how we all interact with computers, the same way the iPhone revolutionized how we interact with our phones.
Kubernetes - Kubectl
Kubectl is the command-line utility for interacting with the Kubernetes API. It is the interface used to communicate with and manage pods in a Kubernetes cluster. You need to set up kubectl locally in order to interact with a Kubernetes cluster.

Download the executable to the local workstation using the curl command.

Linux:

$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubectl

OS X:

$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/darwin/amd64/kubectl

After the download is complete, move the binary into the system path.

$ chmod +x kubectl
$ mv kubectl /usr/local/bin/kubectl

Following are the steps to perform the configuration operation.

$ kubectl config set-cluster default-cluster --server=https://${MASTER_HOST} --certificate-authority=${CA_CERT}

$ kubectl config set-credentials default-admin --certificate-authority=${CA_CERT} --client-key=${ADMIN_KEY} --client-certificate=${ADMIN_CERT}

$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin

$ kubectl config use-context default-system

Replace ${MASTER_HOST} with the master node address or name used in the previous steps.

Replace ${CA_CERT} with the absolute path to the ca.pem created in the previous steps.

Replace ${ADMIN_KEY} with the absolute path to the admin-key.pem created in the previous steps.

Replace ${ADMIN_CERT} with the absolute path to the admin.pem created in the previous steps.

To verify that kubectl is working, check whether the Kubernetes client is set up correctly.
$ kubectl get nodes
NAME        LABELS                                      STATUS
vipin.com   kubernetes.io/hostname = vipin.mishra.com   Ready
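The set-cluster, set-credentials, and set-context commands above persist their settings in the kubeconfig file (by default ~/.kube/config). As a rough, hypothetical sketch of what that file ends up looking like — the host name and certificate paths below are illustrative stand-ins for the ${...} placeholders, not values from this page:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    server: https://master.example.com       # value of ${MASTER_HOST}
    certificate-authority: /path/to/ca.pem   # value of ${CA_CERT}
users:
- name: default-admin
  user:
    client-certificate: /path/to/admin.pem   # value of ${ADMIN_CERT}
    client-key: /path/to/admin-key.pem       # value of ${ADMIN_KEY}
contexts:
- name: default-system
  context:
    cluster: default-cluster
    user: default-admin
current-context: default-system
```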
Predicate vs Projection Pushdown in Spark 3 | by PΔ±nar Ersoy | Towards Data Science
Working with big data comes with the challenges of high processing costs and a lot of time spent on scaling. To benefit from big data, you need to be aware of adequate filtering techniques. This article is essential for data scientists and data engineers looking to use the newest enhancements of Apache Spark in the sub-area of filtering, especially on nested structured data.

With Spark 2.x, files with at most a 2-level nested structure with .json and .parquet extensions could be read.

Example: filter(col('library.books').isNotNull())

With Spark 3, it is now possible to read files with both parquet and .snappy parquet extensions with a 2+ level nested structure, without any need for schema-flattening operations.

Example: filter(col('library.books.title').isNotNull())

While creating a Spark session, the following configurations should be enabled to use the pushdown features of Spark 3. The settings linked to pushdown filtering are activated by default.

"spark.sql.parquet.filterPushdown", "true"
"spark.hadoop.parquet.filter.stats.enabled", "true"
"spark.sql.optimizer.nestedSchemaPruning.enabled", "true"
"spark.sql.optimizer.dynamicPartitionPruning.enabled", "true"

There are two kinds of pushdown filtering techniques, Predicate Pushdown and Projection Pushdown, whose differing features are described in the following sections of the article.

Predicate Pushdown refers to the where or filter clause, which affects the number of rows returned. It relates to which rows will be filtered, not which columns. For this reason, when applying a filter on a nested column such as 'library.books' to return only records with non-null values, predicate pushdown lets parquet skip blocks that contain only null values for the specified column.
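A minimal sketch of how the configuration settings listed above might be supplied when building a session. The builder calls are shown commented out so the snippet stands alone without a Spark installation; the builder API is standard PySpark, but this snippet is an illustration, not code from the article:

```python
# Pushdown-related settings from the article, collected in one place.
PUSHDOWN_CONFS = {
    "spark.sql.parquet.filterPushdown": "true",
    "spark.hadoop.parquet.filter.stats.enabled": "true",
    "spark.sql.optimizer.nestedSchemaPruning.enabled": "true",
    "spark.sql.optimizer.dynamicPartitionPruning.enabled": "true",
}

# With PySpark installed, the dict would be applied roughly like this:
# from pyspark.sql import SparkSession
# builder = SparkSession.builder.appName("pushdown-demo")
# for key, value in PUSHDOWN_CONFS.items():
#     builder = builder.config(key, value)
# spark = builder.getOrCreate()

if __name__ == "__main__":
    for key, value in PUSHDOWN_CONFS.items():
        print(f"{key} = {value}")
```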
The partition elimination technique allows optimizing performance when reading folders from the corresponding file system, so that only the desired files in the specified partition are read. It shifts the filtering of data as close to the source as possible, preventing unnecessary data from being kept in memory and reducing disk I/O.

Below, it can be observed that the partition filter 'library.books.title' = 'THE HOST' is pushed down into the parquet file scan. This operation minimizes the files read and the data scanned.

data.filter(col('library.books.title') == 'THE HOST').explain()

For a more detailed output including the Parsed Logical Plan, Analyzed Logical Plan, Optimized Logical Plan, and Physical Plan, the 'extended' parameter can be added to the explain() function as follows.

data.filter(col('library.books.title') == 'THE HOST').explain('extended')

It can also reduce the amount of data passed back to the Spark engine for the average operation in the aggregation function on the 'price' column.

data.filter(col('library.books.title') == 'THE HOST').groupBy('publisher').agg(avg('price')).explain()

Parquet-formatted files keep some statistical metrics for each column, including the minimum and maximum of its values. Predicate Pushdown uses them to skip irrelevant data and read only the required data.

Projection Pushdown stands for the selected column(s) via the select clause, which affects the number of columns returned. Parquet stores data in columns, so when your projection limits the query to specified columns, only those columns are returned.

data.select('library.books.title','library.books.author').explain()

This means that the scanning of the 'library.books.title' and 'library.books.author' columns occurs in the file system/database before the data is sent back to the Spark engine.
For both Projection and Predicate Pushdown, there are some crucial points to highlight.

Pushdown filtering works on partitioned columns, which are determined by the nature of parquet-formatted files. To get the most benefit, the partition columns should carry smaller-sized values with enough matching data to spread the relevant files across the directories. Avoid too many small files, which make scans less efficient through excessive parallelism; likewise, too few large files may hurt parallelism.

The Projection Pushdown feature allows the minimization of data transfer between the file system/database and the Spark engine by eliminating unnecessary fields from the table-scanning process. It is primarily useful when a dataset contains too many columns. The Predicate Pushdown, on the other hand, boosts performance by reducing the amount of data passed between the file system/database and the Spark engine when filtering data. Projection Pushdown is distinguished by column-based filtering, Predicate Pushdown by row-based filtering.

Questions and comments are highly appreciated!

Spark Release 3.0.0
Pushdown of disjunctive predicates
Generalize Nested Column Pruning
Parquet predicate pushdown for nested fields
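The row-based versus column-based distinction can be illustrated engine-agnostically. The plain-Python sketch below is an analogy for what the storage layer does, not Spark code: predicate pushdown drops rows early, projection pushdown drops columns early.

```python
# Toy "table": each dict is a row, each key a column.
rows = [
    {"title": "THE HOST", "author": "Meyer", "price": 12.0},
    {"title": "DUNE", "author": "Herbert", "price": 9.0},
    {"title": "THE HOST", "author": "Meyer", "price": 7.5},
]

def predicate_pushdown(rows, predicate):
    """Row-based filtering: fewer rows come back to the engine."""
    return [row for row in rows if predicate(row)]

def projection_pushdown(rows, columns):
    """Column-based filtering: fewer columns come back to the engine."""
    return [{col: row[col] for col in columns} for row in rows]

filtered = predicate_pushdown(rows, lambda r: r["title"] == "THE HOST")
projected = projection_pushdown(rows, ["title", "author"])

print(len(filtered))         # 2 rows survive the predicate
print(sorted(projected[0]))  # ['author', 'title'] -- only projected columns remain
```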
What is the use of update command in SQL?
The UPDATE command is a data manipulation command used to edit the records of a table. It may be used to update a single row based on a condition, all rows, or a set of rows matching the condition given by the user. It is used along with the SET clause; optionally, a WHERE clause may be used to match conditions βˆ’

An example is given below for the use of the UPDATE command βˆ’

update student set name='sneha' where branch='CSE';

Given below is another example of the usage of the UPDATE command βˆ’

create table employee(ename varchar(30),department varchar(20));
insert into employee values('pinky','CSE');
insert into employee values('priya','ECE');
insert into employee values('hari','EEE');
select * from employee;
update employee set ename='sneha' where department='CSE';
select * from employee;

You will get the following output βˆ’

pinky|CSE
priya|ECE
hari|EEE

sneha|CSE
priya|ECE
hari|EEE

Given below is an example that updates every row because no WHERE clause is given (here, age=age+2):

create table employee(ename varchar(30),department varchar(20), age number(30));
insert into employee values('ram','projectmanager',40);
insert into employee values('priya','assistant director',45);
insert into employee values('hari','developer',46);
select * from employee;
update employee set age=age+2;
select * from employee;

You will get the following output βˆ’

ram|projectmanager|40
priya|assistant director|45
hari|developer|46

ram|projectmanager|42
priya|assistant director|47
hari|developer|48

Given below is an example that updates two columns in a single statement (here, age=age+2 and salary=salary+5000):

create table employee(ename varchar(30),department varchar(20), age number(30), salary number(20));
insert into employee values('ram','projectmanager',40,50000);
insert into employee values('priya','assistant director',45,45000);
insert into employee values('hari','developer',46,30000);
select * from employee;
update employee set age=age+2, salary=salary+5000;
select * from employee;

You will get the following output
βˆ’

ram  |projectmanager    |40|50000
priya|assistant director|45|45000
hari |developer         |46|30000

ram  |projectmanager    |42|55000
priya|assistant director|47|50000
hari |developer         |48|35000
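The same UPDATE semantics can be reproduced outside a SQL shell. The sketch below uses Python's built-in sqlite3 module with an in-memory database; it illustrates conditional versus unconditional updates, and is not the shell session shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

cur.execute("create table employee(ename text, department text, age integer)")
cur.executemany(
    "insert into employee values(?, ?, ?)",
    [("pinky", "CSE", 30), ("priya", "ECE", 31), ("hari", "EEE", 32)],
)

# Conditional update: only rows matching the WHERE clause change.
cur.execute("update employee set ename='sneha' where department='CSE'")

# Unconditional update: every row is affected.
cur.execute("update employee set age = age + 2")

rows = cur.execute(
    "select ename, department, age from employee order by ename"
).fetchall()
print(rows)  # [('hari', 'EEE', 34), ('priya', 'ECE', 33), ('sneha', 'CSE', 32)]
```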
How to get particular id records using regexp in Android sqlite?
Before getting into the example, we should know what an SQLite database in Android is. SQLite is an open source SQL database that stores data in a text file on a device. Android comes with a built-in SQLite database implementation. SQLite supports all the relational database features. In order to access this database, you don't need to establish any kind of connection to it, such as JDBC or ODBC.

This example demonstrates how to get particular id records using regexp in Android SQLite.

Step 1 βˆ’ Create a new project in Android Studio, go to File β‡’ New Project and fill all required details to create a new project.

Step 2 βˆ’ Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   tools:context=".MainActivity"
   android:orientation="vertical">
   <EditText
      android:id="@+id/name"
      android:layout_width="match_parent"
      android:hint="Enter Name"
      android:layout_height="wrap_content" />
   <EditText
      android:id="@+id/salary"
      android:layout_width="match_parent"
      android:inputType="numberDecimal"
      android:hint="Enter Salary"
      android:layout_height="wrap_content" />
   <LinearLayout
      android:layout_width="wrap_content"
      android:layout_height="wrap_content">
      <Button
         android:id="@+id/save"
         android:text="Save"
         android:layout_width="wrap_content"
         android:layout_height="wrap_content" />
      <Button
         android:id="@+id/refresh"
         android:text="Refresh"
         android:layout_width="wrap_content"
         android:layout_height="wrap_content" />
      <Button
         android:id="@+id/udate"
         android:text="Update"
         android:layout_width="wrap_content"
         android:layout_height="wrap_content" />
      <Button
         android:id="@+id/Delete"
         android:text="DeleteALL"
         android:layout_width="wrap_content"
         android:layout_height="wrap_content" />
   </LinearLayout>
   <ListView
      android:id="@+id/listView"
      android:layout_width="match_parent"
      android:layout_height="wrap_content">
   </ListView>
</LinearLayout>

In the above code, we have taken name and salary as EditText fields; when the user clicks the save button, the data is stored in the SQLite database. Click the refresh button to reload the ListView.

Step 3 βˆ’ Add the following code to src/MainActivity.java

package com.example.andy.myapplication;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.Button;
import android.widget.EditText;
import android.widget.ListView;
import android.widget.Toast;

import java.util.ArrayList;

public class MainActivity extends AppCompatActivity {
   Button save, refresh;
   EditText name, salary;
   ArrayAdapter arrayAdapter;
   private ListView listView;

   @Override
   protected void onCreate(Bundle readdInstanceState) {
      super.onCreate(readdInstanceState);
      setContentView(R.layout.activity_main);
      final DatabaseHelper helper = new DatabaseHelper(this);
      final ArrayList array_list = helper.getAllCotacts();
      name = findViewById(R.id.name);
      salary = findViewById(R.id.salary);
      listView = findViewById(R.id.listView);
      arrayAdapter = new ArrayAdapter(MainActivity.this, android.R.layout.simple_list_item_1, array_list);
      listView.setAdapter(arrayAdapter);
      findViewById(R.id.Delete).setOnClickListener(new View.OnClickListener() {
         @Override
         public void onClick(View v) {
            if (helper.delete()) {
               Toast.makeText(MainActivity.this, "Deleted", Toast.LENGTH_LONG).show();
            } else {
               Toast.makeText(MainActivity.this, "NOT Deleted", Toast.LENGTH_LONG).show();
            }
         }
      });
      findViewById(R.id.udate).setOnClickListener(new View.OnClickListener() {
         @Override
         public void onClick(View v) {
            if (!name.getText().toString().isEmpty() && !salary.getText().toString().isEmpty()) {
               if (helper.update(name.getText().toString(), salary.getText().toString())) {
                  Toast.makeText(MainActivity.this, "Updated", Toast.LENGTH_LONG).show();
               } else {
                  Toast.makeText(MainActivity.this, "NOT Updated",
                     Toast.LENGTH_LONG).show();
               }
            } else {
               name.setError("Enter NAME");
               salary.setError("Enter Salary");
            }
         }
      });
      findViewById(R.id.refresh).setOnClickListener(new View.OnClickListener() {
         @Override
         public void onClick(View v) {
            array_list.clear();
            array_list.addAll(helper.getAllCotacts());
            arrayAdapter.notifyDataSetChanged();
            listView.invalidateViews();
            listView.refreshDrawableState();
         }
      });
      findViewById(R.id.save).setOnClickListener(new View.OnClickListener() {
         @Override
         public void onClick(View v) {
            if (!name.getText().toString().isEmpty() && !salary.getText().toString().isEmpty()) {
               if (helper.insert(name.getText().toString(), salary.getText().toString())) {
                  Toast.makeText(MainActivity.this, "Inserted", Toast.LENGTH_LONG).show();
               } else {
                  Toast.makeText(MainActivity.this, "NOT Inserted", Toast.LENGTH_LONG).show();
               }
            } else {
               name.setError("Enter NAME");
               salary.setError("Enter Salary");
            }
         }
      });
   }
}

Step 4 βˆ’ Add the following code to src/DatabaseHelper.java

package com.example.andy.myapplication;

import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteException;
import android.database.sqlite.SQLiteOpenHelper;

import java.io.IOException;
import java.util.ArrayList;

class DatabaseHelper extends SQLiteOpenHelper {
   public static final String DATABASE_NAME = "salaryDatabase6";
   public static final String CONTACTS_TABLE_NAME = "SalaryDetails";

   public DatabaseHelper(Context context) {
      super(context, DATABASE_NAME, null, 1);
   }

   @Override
   public void onCreate(SQLiteDatabase db) {
      try {
         db.execSQL(
            "create table " + CONTACTS_TABLE_NAME + "(id INTEGER PRIMARY KEY, name text,salary float,datetime default current_timestamp )"
         );
      } catch (SQLiteException e) {
         try {
            throw new IOException(e);
         } catch (IOException e1) {
            e1.printStackTrace();
         }
      }
   }

   @Override
   public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
      db.execSQL("DROP TABLE IF
EXISTS " + CONTACTS_TABLE_NAME);
      onCreate(db);
   }

   public boolean insert(String s, String s1) {
      SQLiteDatabase db = this.getWritableDatabase();
      ContentValues contentValues = new ContentValues();
      contentValues.put("name", s);
      contentValues.put("salary", s1);
      db.replace(CONTACTS_TABLE_NAME, null, contentValues);
      return true;
   }

   public ArrayList getAllCotacts() {
      SQLiteDatabase db = this.getReadableDatabase();
      ArrayList<String> array_list = new ArrayList<String>();
      Cursor res = db.rawQuery("select (id ||' : '||name || ' : ' || cast(salary as int) || ' : '|| strftime('%d-%m-%Y %H:%M:%S', datetime)) as fullname from " + CONTACTS_TABLE_NAME + " where Id regexp '1|3'", null);
      res.moveToFirst();
      while (res.isAfterLast() == false) {
         if ((res != null) && (res.getCount() > 0))
            array_list.add(res.getString(res.getColumnIndex("fullname")));
         res.moveToNext();
      }
      return array_list;
   }

   public boolean update(String s, String s1) {
      SQLiteDatabase db = this.getWritableDatabase();
      db.execSQL("UPDATE " + CONTACTS_TABLE_NAME + " SET name = " + "'" + s + "', " + "salary = " + "'" + s1 + "'");
      return true;
   }

   public boolean delete() {
      SQLiteDatabase db = this.getWritableDatabase();
      db.execSQL("DELETE from " + CONTACTS_TABLE_NAME);
      return true;
   }
}

Let's try to run the application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.

In the above result, only records with ids 1 and 3 are shown, because we have given the regexp '1|3'.

Click here to download the project code
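The selection logic in getAllCotacts() (return only rows whose id matches the regular expression '1|3') can be sketched outside Android with Python's built-in sqlite3 module. Note that in Python's sqlite3, the REGEXP operator must be backed by a user-supplied function registered via create_function; the table and sample values below are illustrative:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite parses "X REGEXP Y" as a call to a user function regexp(Y, X),
# so the callback receives (pattern, value).
conn.create_function(
    "REGEXP", 2,
    lambda pattern, value: re.search(pattern, str(value)) is not None,
)

cur = conn.cursor()
cur.execute("create table SalaryDetails(id integer primary key, name text, salary real)")
cur.executemany(
    "insert into SalaryDetails(id, name, salary) values(?, ?, ?)",
    [(1, "pinky", 1000.0), (2, "priya", 2000.0), (3, "hari", 3000.0)],
)

# Keep only rows whose id matches the pattern '1|3'.
matched = cur.execute(
    "select id, name from SalaryDetails where id regexp '1|3'"
).fetchall()
print(matched)  # [(1, 'pinky'), (3, 'hari')]
```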
Input type DateTime Value format with HTML
Use input type="datetime-local". The datetime input type is used in HTML via <input type="datetime-local">. It allows users to select a date and time; a date-time picker popup appears whenever the input field is clicked.

<!DOCTYPE html>
<html>
   <head>
      <title>HTML input datetime</title>
   </head>
   <body>
      <form action = "" method = "get">
         Details:<br><br>
         Student Name<br><input type = "text" name = "sname"><br>
         Exam Date and Time<br><input type = "datetime-local" name = "datetime"><br>
         <input type = "submit" value = "Submit">
      </form>
   </body>
</html>
JSTL - Core <c:forEach>, <c:forTokens> Tag
These tags exist as a good alternative to embedding a Java for, while, or do-while loop via a scriptlet. The <c:forEach> tag is a commonly used tag because it iterates over a collection of objects. The <c:forTokens> tag is used to break a string into tokens and iterate through each of the tokens.

The <c:forEach> tag supports the attributes items, begin, end, step, var, and varStatus. The <c:forTokens> tag has similar attributes, except for one additional attribute, delims, which specifies the characters to use as delimiters.

<%@ taglib uri = "http://java.sun.com/jsp/jstl/core" prefix = "c" %>
<html>
   <head>
      <title><c:forEach> Tag Example</title>
   </head>
   <body>
      <c:forEach var = "i" begin = "1" end = "5">
         Item <c:out value = "${i}"/><p>
      </c:forEach>
   </body>
</html>

The above code will generate the following result:

Item 1
Item 2
Item 3
Item 4
Item 5

<%@ taglib uri = "http://java.sun.com/jsp/jstl/core" prefix = "c" %>
<html>
   <head>
      <title><c:forTokens> Tag Example</title>
   </head>
   <body>
      <c:forTokens items = "Zara,nuha,roshy" delims = "," var = "name">
         <c:out value = "${name}"/><p>
      </c:forTokens>
   </body>
</html>

The above code will generate the following result:

Zara
nuha
roshy
Building an Automated Machine Learning Pipeline: Part Two | by Ceren Iyim | Towards Data Science
Part 1: Understand, clean, explore, process data
Part 2: Set metric and baseline, select and tune model (you are reading now)
Part 3: Train, evaluate and interpret model (live!)
Part 4: Automate your pipeline using Docker and Luigi (live!)

In this article series, we set our course to build a 9-step machine learning (ML) pipeline (we are calling it the wine rating predictor) and automate it. Eventually, we will observe how each step comes together and runs in production systems. We are working on a supervised regression problem: we want to develop a performant, understandable and good wine rating predictor that can predict points, a quality measure of wine.

In the first article, we defined the problem and our motivation behind building the wine rating predictor. Then, we had a detailed look at the data by visualizing the relationships between the features and the target, along with the Understand & Clean & Format Data and Exploratory Data Analysis steps. In the Feature Engineering & Pre-processing step, we added new and more useful features. Besides, we prepared the training and test datasets to use during training and evaluation of the model. As the final step of the first article, we created a validation dataset out of the training dataset for model selection.

In this article, we are going to complete the following steps:

4. Set Evaluation Metric & Establish Baseline
5. Select an ML Model based on the Evaluation Metric
6. Perform Hyperparameter Tuning on the Selected Model

The code behind this article can be found in this notebook. The complete project is available on GitHub. Feel free to share, fork, and utilize this repo for your projects!
The datasets that we are going to use are available in notebooks/transformed:

X_train, consisting of the features (country, province, region_1, variety, price, year, taster_name, is_red, is_white, is_rose, is_dry, is_sweet, is_sparkling, is_blend), and y_train, consisting of the target (points), to train the model
X_valid, consisting of the features, and y_valid, consisting of the target, to validate the model

We are going to need the following libraries of Python in this notebook. Let's load the datasets into dataframes and convert them to arrays using the functions below:

X_train = pd.read_csv("transformed/X_train.csv")
X_train_array = convert_features_to_array(X_train)
X_valid = pd.read_csv("transformed/X_valid.csv")
X_valid_array = convert_features_to_array(X_valid)
y_train = pd.read_csv("transformed/y_train.csv")
y_train_array = convert_target_to_array(y_train)
y_valid = pd.read_csv("transformed/y_valid.csv")
y_valid_array = convert_target_to_array(y_valid)

If this were a Kaggle competition, we would skip this step of the pipeline because we would be given the evaluation metric. However, in real-world applications of data science/machine learning, the evaluation metric is set by data scientists in line with the stakeholders' expectations of the ML model. That is why this is an important step. After we decide on our evaluation metric, we are going to form a baseline, both to quantify our initial motive (building a good wine rating predictor) and to have something to compare our model's performance against.

The mean squared error (MSE) is set as the evaluation metric. It is the average of the sum of squared residuals, where a residual is the difference between a predicted value and the actual value of the target variable. In other words, the model is evaluated by looking at how widely the squared errors (residuals) are spread out. We selected MSE because it is interpretable and analogous to variance, and it is a widely-used optimization criterion among ML models (e.g.
linear regression, random forest)

A baseline can be explained as generating a naive guess of the target value by using expert knowledge or a few lines of code. It also helps to measure the performance of the ML model. If the built model (the wine rating predictor) cannot beat this baseline, then the selected ML algorithm may not be the best approach to solve this problem, or we might want to revisit the previous steps of the pipeline.

Recall that points (the target) has a normal distribution between 80 and 100. The mean is 88.45 and the variance is 9.1. We are going to form the baseline by using these statistics: calculating the differences between each point in the validation set and the mean of the training dataset, then taking the average of the sum of the squared differences (a similar approach to computing the MSE):

# set baseline as mean of training set's target value
baseline = np.mean(y_train_array)

# calculate MSE baseline
mse_baseline = np.mean(np.square(baseline - y_valid_array))

It is not a coincidence that the variance of the points and the baseline error are almost equal. You can think of this baseline MSE as the variance calculated manually on a smaller subset of our dataset. This number (9.01) will accompany us in the next step, Select an ML Model based on the Evaluation Metric, while we are testing different ML algorithms.

I always find it useful to try out several algorithms with different rationales behind them, because I believe this whole process involves experimentation as well! (The "scientist" part of being a data scientist makes sense now 🙃)

While searching for the best algorithm, we are going to observe both the improvement of the MSE and the run time of the algorithms (with the %%time magic at the beginning of the cells) and compare each algorithm's MSE to our baseline MSE. At the same time, we will keep in mind the understandable and performant requirements of the wine rating predictor.
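The convert_features_to_array and convert_target_to_array helpers used when loading the data are defined in the accompanying notebook, not shown here. As a rough, hypothetical sketch of what such helpers typically do (the exact behavior is an assumption, not the author's code):

```python
import numpy as np
import pandas as pd


def convert_features_to_array(df: pd.DataFrame) -> np.ndarray:
    # Turn the feature dataframe into a plain 2-D float array,
    # which is what scikit-learn estimators expect.
    return df.to_numpy(dtype=float)


def convert_target_to_array(df: pd.DataFrame) -> np.ndarray:
    # Flatten the single target column into a 1-D vector.
    return df.to_numpy(dtype=float).ravel()


demo = pd.DataFrame({"price": [12.0, 30.0], "is_red": [1, 0]})
features = convert_features_to_array(demo)
target = convert_target_to_array(pd.DataFrame({"points": [88, 91]}))
print(features.shape, target.shape)  # (2, 2) (2,)
```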
We are going to experiment with one linear algorithm, two distance-based algorithms and two tree-based algorithms, ordered from the simplest to the most complex:

Linear regression
K-nearest neighbors regressor
Support vector regressor
Random forest regressor
Light gradient boosting regressor

We are going to train them on the training set and compare their generalization performance on the validation set. The function below will do the work for us. At the end of this step, we are going to elaborate on how the selected algorithm works.

Linear regression only slightly decreased the baseline metric, showing that it is not a candidate for a good predictor.

Distance-based models use the Euclidean distance (or other distance measures) for training, so varying feature ranges cause distance-based models to generate inaccurate predictions. To apply distance-based algorithms, we scaled the datasets with normalization beforehand, here in the notebook. K-nearest neighbors regressor performed better than linear regression. However, the MSE is still high, showing that this algorithm is not a good predictor either.

Support vector regressor performed better than the k-nearest neighbors regressor at a higher run time. All in all, the MSE decreased 35%, showing that this algorithm might be a candidate for building a good predictor.

Random forest regressor performed better than the support vector regressor at a lower run time. It decreased the MSE 44% and replaced the support vector regressor in the good-predictor list.

Light gradient boosting regressor (LightGBM) showed the best performance of all the tried models. It also lowered the baseline MSE 45%, showing that it is a potential candidate for a good predictor at a lower run time.

As a summary: it might not be a fair selection of algorithms, since we only train them with the default hyperparameters.
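The model-selection loop referenced above lives in the notebook. The idea, comparing each candidate's validation MSE against the baseline, can be sketched with a toy example (using only NumPy and a simple least-squares line as the candidate model; the data here is synthetic, not the wine dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 300)
y = 3.0 * x + 5.0 + rng.normal(scale=2.0, size=300)

# Static 70/30 train/validation split
cut = 210
x_tr, x_va = x[:cut], x[cut:]
y_tr, y_va = y[:cut], y[cut:]


def mse(pred, actual):
    return float(np.mean((pred - actual) ** 2))


results = {}

# Baseline: always predict the mean of the training targets
results["baseline"] = mse(np.full_like(y_va, y_tr.mean()), y_va)

# Candidate model: ordinary least-squares line via np.polyfit
slope, intercept = np.polyfit(x_tr, y_tr, 1)
results["linear"] = mse(slope * x_va + intercept, y_va)

# Rank the candidates; anything above the baseline MSE is discarded
for name, score in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:>8}: MSE = {score:.2f}")
```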
Nevertheless, this is the experimental step of the pipeline; that is why both the LightGBM and random forest algorithms have been given a chance for further improvement in the Perform Hyperparameter Tuning on the Selected Model step. (You can find it here in the notebook.) In this article, we will use the random forest regressor due to the understandable and performant requirements, and we will only elaborate on it.

Before moving on to the next step, let's understand how the random forest regressor works:

Random forest regressor is an ensemble algorithm that builds multiple decision trees at once and trains them on various sub-samples and various subsets of the features of the dataset. Its random selection of dataset and feature subsamples makes this algorithm more robust.

A decision tree uses a tree-like structure to make predictions. It breaks down a dataset into smaller and smaller subsets while, at the same time, an associated decision tree is incrementally developed. The final result is a tree with decision nodes and leaf nodes. A decision node has two or more branches, each representing values of the feature tested. A leaf node represents a final decision on the target.

A hyperparameter is a parameter defined by the data scientist or ML engineer whose value is not affected by the training process of the model. On the other hand, a parameter of a model is searched and optimized by the model during the training process and is affected by the dataset. I loved this quotation from Jason Brownlee on Machine Learning Mastery to prevent confusion of the two: "If you have to specify a model parameter manually then, it is probably a model hyperparameter."

An example of a parameter from our simplest model: the coefficients of the linear regression model, which are optimized through model training. An example of a hyperparameter from our selected model: the number of trees constructed in a random forest model, which is specified by us or defaulted by scikit-learn.
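To make the "various sub-samples" idea concrete, here is a small illustrative sketch of bagging, the core mechanism behind random forests: many weak trees (depth-1 stumps here, for brevity) are each fit on a bootstrap resample, and their predictions are averaged. This is a toy demonstration on synthetic data, not the notebook's model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1-D regression data: y = 2x + noise
X = rng.uniform(-3, 3, size=200)
y = 2.0 * X + rng.normal(scale=0.5, size=200)


def fit_stump(x, t):
    """Fit a depth-1 regression tree (a stump): pick the split that
    minimizes the summed squared error of the two leaf means."""
    best = None
    for s in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        left, right = t[x <= s], t[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    _, s, lm, rm = best
    return lambda q: np.where(q <= s, lm, rm)


# Bagging: every stump sees a different bootstrap resample of the data
stumps = []
for _ in range(50):
    idx = rng.integers(0, len(X), len(X))
    stumps.append(fit_stump(X[idx], y[idx]))


def forest_predict(q):
    # Average the individual stump predictions
    return np.mean([s(q) for s in stumps], axis=0)


forest_mse = float(np.mean((forest_predict(X) - y) ** 2))
baseline_mse = float(np.var(y))
print(forest_mse < baseline_mse)  # the tiny forest beats the mean baseline
```

A real random forest additionally samples a random subset of features at every split and grows much deeper trees, but the bootstrap-and-average structure is the same.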
Hyperparameter tuning is the process of defining and searching hyperparameter values, with the aim of improving the performance of the model further. We are going to search for the best set of hyperparameters with random search and k-fold cross-validation.

Random search tries random combinations of the defined hyperparameter values and compares the defined score (mean squared error, for this problem) in each iteration. It is fast and run-time-efficient, but you may not always find the optimal set of hyperparameters, because only a random subset of the possible combinations is tried.

We are going to search the following hyperparameters of the random forest regressor with the hyperparameter_grid dictionary:

n_estimators: number of trees to be used in the model; the default is 100.
min_samples_split: minimum number of samples required to split an internal node; the default value is 2.
min_samples_leaf: minimum number of samples required to be at a leaf node; the default value is 1.
max_features: number of features to consider when looking for the best split; the default value is auto.

# define search parameters
n_estimators = [100, 200, 300, 500, 1000]
min_samples_split = [2, 4, 6, 10]
min_samples_leaf = [1, 2, 4, 6, 8]
max_features = ['auto', 'sqrt', 'log2', None]

# define the grid of hyperparameters to search
hyperparameter_grid = {
    "n_estimators": n_estimators,
    "min_samples_split": min_samples_split,
    "min_samples_leaf": min_samples_leaf,
    "max_features": max_features}

K-fold cross-validation is a method used to assess the performance of the model on the complete training dataset. Rather than splitting the dataset into two static subsets (a training and a validation set), the dataset is divided into K equal parts. The model is then trained on K-1 subsets and tested on the Kth subset, iteratively. This process makes the model more robust to overfitting (more on that at the end of the article).
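To see why random search is cheaper than an exhaustive grid search here, we can count the combinations the grid above defines (a quick back-of-the-envelope sketch; RandomizedSearchCV's actual sampling logic differs in its details):

```python
from itertools import product
import random

hyperparameter_grid = {
    "n_estimators": [100, 200, 300, 500, 1000],
    "min_samples_split": [2, 4, 6, 10],
    "min_samples_leaf": [1, 2, 4, 6, 8],
    "max_features": ["auto", "sqrt", "log2", None],
}

# An exhaustive grid search would evaluate every combination...
all_combos = list(product(*hyperparameter_grid.values()))
print(len(all_combos))  # 5 * 4 * 5 * 4 = 400

# ...while random search with n_iter=25 only evaluates a random sample
random.seed(42)
sampled = random.sample(all_combos, 25)
print(len(sampled))  # 25
```

With 4-fold cross-validation, that is 25 * 4 = 100 model fits instead of 400 * 4 = 1600.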
To perform hyperparameter tuning with random search and k-fold cross-validation, we are going to add the training and validation datasets back together and continue with one training set from now on.

# add dataframes back together to perform random search and cross-validation
X = pd.concat([X_train, X_valid])
y = pd.concat([y_train, y_valid])
X_array = convert_features_to_array(X)
y_array = convert_target_to_array(y)

Since we have fewer than 10,000 rows in the training dataset, we are going to perform 4-fold cross-validation to have a sufficient number of data points in each fold. We are going to consolidate random search and k-fold cross-validation in the RandomizedSearchCV object:

rf_random_cv = RandomizedSearchCV(
    estimator=rf,
    param_distributions=hyperparameter_grid,
    cv=4,
    n_iter=25,
    scoring='neg_mean_squared_error',
    n_jobs=-1,
    verbose=1,
    return_train_score=True,
    random_state=42)

rf_random_cv.fit(X_array, y_array)

With the fit method, we initiate the search over random combinations of the values defined for each hyperparameter in the hyperparameter_grid. At the same time, "neg_mean_squared_error" is calculated for each fold in each iteration. We can observe the best set of hyperparameters found by reading the best_estimator_ attribute:

After hyperparameter tuning, the best set of hyperparameters is determined as:

n_estimators: 200
min_samples_split: 4
min_samples_leaf: 2
max_features: 'sqrt'

Let's see if those hyperparameters help us improve the MSE of the random forest regressor further.

rf_random_cv_model = rf_random_cv.best_estimator_

The MSE decreased from 5.41 to 4.99, and the tuned model's run time came out at 1.12 seconds, which is lower than the run time of the initial random forest model (1.89 seconds). Hyperparameter tuning has not only improved the evaluation metric but also lowered the run time in our case.
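The 4-fold splitting that cross-validation performs internally can be sketched as follows (an illustrative reimplementation for intuition, not scikit-learn's own code, which also supports shuffling and stratification):

```python
import numpy as np


def kfold_indices(n_samples, k):
    """Partition sample indices into k folds and yield one
    (train_idx, valid_idx) pair per fold."""
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        valid = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, valid


# With 12 samples and k=4, each fold trains on 9 and validates on 3
splits = list(kfold_indices(12, 4))
for train, valid in splits:
    print(len(train), len(valid))  # 9 3
```

Every sample ends up in the validation set exactly once, so the averaged score reflects the whole training dataset.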
Given the sample dataset, the determined set of features and the tuned random forest regressor, we have successfully built a good wine rating predictor without falling into the underfitting and overfitting areas! Although this topic is another article by itself, I think it is important to address it here, since we mentioned these concepts.

Overfitting happens when the ML model perfectly fits (or memorizes) the training dataset rather than grasping the common patterns between the features and the target. An overfit model has high variance; I find a resemblance between this situation and a fragile glass home. It is perfectly built for its current conditions, but it is less likely to survive if the conditions change. Do you remember that cross-validation makes our model more robust to overfitting? The reason is that our model sees a different dataset in each fold during training, like a house that has already encountered different weather conditions. This improves the generalization performance, which makes the model more resistant to overfitting.

Underfitting happens when the ML model fails to grasp the relationships between the features and the target due to not having enough data points or features. It may perform as badly as a random guess. An underfit model has high bias; I think of this situation as a half-built shed. It has to undergo some construction to serve its purpose. Do you remember the linear regression model that only showed a slight improvement on the baseline MSE? That was an example of underfitting: the linear regression model resulted in almost the same MSE as the baseline and failed to understand the relationship between the features and the target.

All in all, both overfitting and underfitting decrease the generalization performance of machine learning models and result in unsatisfactory levels of the evaluation metric.
Although we did not specifically test the tuned random forest regressor for overfitting, the 45% improvement shows that we are at the fine line between underfitting and overfitting. Before moving on to the conclusions, let's save our model to the directory so that we can pick up where we leave off in the next article and notebook.

filename = 'random_forests_model.sav'
pickle.dump(rf_random_cv_model, open(filename, 'wb'))

In this article, we completed the intermediate steps of the machine learning pipeline. After a quick recap of the first article and our objectives, we:

Set the evaluation metric as mean squared error due to its wide use in ML algorithms and its explainability.
Established a baseline aligned with the mean squared error calculation, which resulted in 9.01.
Tried several different ML algorithms and selected the random forest regressor, which reported an MSE of 5.4 at a 1.93-second run time.
Fine-tuned the hyperparameters of the random forest regressor and achieved an 8% performance improvement compared to the initial random forest model (MSE: 4.99 at a 1.12-second run time).

If we consider our initial motivation (building a good wine predictor) and the baseline MSE (9.01), we are on the right path as we approach the end of the pipeline. We have significantly lowered the baseline MSE, from 9.01 to 4.99, which is a 45% improvement!
The third article will start by loading the fine-tuned random forest regressor model and will focus on the evaluation of the model with the test set, as well as its results (steps 7, 8, and 9). The last article will automate this pipeline with Docker and Luigi.

Thanks for reading 😊 For comments or constructive feedback, you can reach out to me in the responses, on Twitter or on LinkedIn! Stay safe and healthy until then 👋
How to use enums in C++?
Enumeration is a user-defined datatype in the C/C++ language. It is used to assign names to integral constants, which makes a program easier to read and maintain. The keyword "enum" is used to declare an enumeration. The following is the syntax of enums.

enum enum_name{const1, const2, ....... };

Here,

enum_name − Any name given by the user.
const1, const2 − The enumeration constants.

The enum keyword is also used to define variables of enum type. Enumerations can be declared as follows −

enum colors{red, black};
enum suit{heart, diamond=8, spade=3, club};

The following is an example of enums.

#include <iostream>
using namespace std;

enum colors{red=5, black};
enum suit{heart, diamond=8, spade=3, club};

int main() {
   cout <<"The value of enum color : "<<red<<","<<black;
   cout <<"\nThe default value of enum suit : "<<heart<<","<<diamond<<","<<spade<<","<<club;
   return 0;
}

The value of enum color : 5,6
The default value of enum suit : 0,8,3,4

In the above program, two enums are declared as colors and suit outside the main() function.

enum colors{red=5, black};
enum suit{heart, diamond=8, spade=3, club};

In the main() function, the values of the enum elements are printed.

cout <<"The value of enum color : "<<red<<","<<black;
cout <<"\nThe default value of enum suit : "<<heart<<","<<diamond<<","<<spade<<","<<club;
Handling POST, PUT, PATCH and DELETE Requests
In this chapter, we will understand how to use the POST method using the requests library and also how to pass parameters to the URL.

For a POST request, the Requests library has the requests.post() method; an example of it is shown below −

import requests
myurl = 'https://postman-echo.com/post'
myparams = {'name': 'ABC', 'email':'xyz@gmail.com'}
res = requests.post(myurl, data=myparams)
print(res.text)

E:\prequests>python makeRequest.py
{"args":{},"data":"","files":{},"form":{"name":"ABC","email":"xyz@gmail.com"},"headers":{"x-forwarded-proto":"https","host":"postman-echo.com","content-length":"30","accept":"*/*","accept-encoding":"gzip,deflate","content-type":"application/x-www-form-urlencoded","user-agent":"python-requests/2.22.0","x-forwarded-port":"443"},"json":{"name":"ABC","email":"xyz@gmail.com"},"url":"https://postman-echo.com/post"}

In the example shown above, you can pass the form data as key-value pairs to the data param inside requests.post(). We will also see how to work with PUT, PATCH and DELETE in the requests module.

For a PUT request, the Requests library has the requests.put() method; an example of it is shown below.

import requests
myurl = 'https://postman-echo.com/put'
myparams = {'name': 'ABC', 'email':'xyz@gmail.com'}
res = requests.put(myurl, data=myparams)
print(res.text)

E:\prequests>python makeRequest.py
{"args":{},"data":"","files":{},"form":{"name":"ABC","email":"xyz@gmail.com"},"headers":{"x-forwarded-proto":"https","host":"postman-echo.com","content-length":"30","accept":"*/*","accept-encoding":"gzip, deflate","content-type":"application/x-www-form-urlencoded","user-agent":"python-requests/2.22.0","x-forwarded-port":"443"},"json":{"name":"ABC","email":"xyz@gmail.com"},"url":"https://postman-echo.com/put"}

For a PATCH request, the Requests library has the requests.patch() method; an example of it is shown below.
import requests
myurl = 'https://postman-echo.com/patch'
res = requests.patch(myurl, data="testing patch")
print(res.text)

E:\prequests>python makeRequest.py
{"args":{},"data":{},"files":{},"form":{},"headers":{"x-forwarded-proto":"https","host":"postman-echo.com","content-length":"13","accept":"*/*","accept-encoding":"gzip, deflate","user-agent":"python-requests/2.22.0","x-forwarded-port":"443"},"json":null,"url":"https://postman-echo.com/patch"}

For a DELETE request, the Requests library has the requests.delete() method; an example of it is shown below.

import requests
myurl = 'https://postman-echo.com/delete'
res = requests.delete(myurl, data="testing delete")
print(res.text)

E:\prequests>python makeRequest.py
{"args":{},"data":{},"files":{},"form":{},"headers":{"x-forwarded-proto":"https","host":"postman-echo.com","content-length":"14","accept":"*/*","accept-encoding":"gzip, deflate","user-agent":"python-requests/2.22.0","x-forwarded-port":"443"},"json":null,"url":"https://postman-echo.com/delete"}
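To inspect exactly what any of these calls would send, without hitting the network, you can build and prepare the request first. This is a small sketch using the library's Request/PreparedRequest objects:

```python
import requests

# Build the request without sending it, to inspect exactly what
# requests.post(...) would transmit over the wire.
prepared = requests.Request(
    "POST",
    "https://postman-echo.com/post",
    data={"name": "ABC", "email": "xyz@gmail.com"},
).prepare()

print(prepared.method)                   # POST
print(prepared.headers["Content-Type"])  # application/x-www-form-urlencoded
print(prepared.body)                     # name=ABC&email=xyz%40gmail.com
```

The same works for "PUT", "PATCH" and "DELETE"; only the verb string changes.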
How to change the color of X and Y axis lines in a JavaFX char?
The javafx.scene.chart package provides classes to create various charts, namely: line chart, area chart, bar chart, pie chart, bubble chart, scatter chart, etc. Except for the pie chart, all other charts are plotted on the XY plane. You can create the required XY chart by instantiating the respective class.

The -fx-border-color class of JavaFX CSS is used to set the color of the border of a node.
The -fx-border-width class of JavaFX CSS is used to set the width of the border of a node.
The setStyle() method of the Node class (the base class of all nodes) accepts a CSS string and sets the specified style on the current chart.

To change the color of the x and y axes (to OrangeRed), set the following CSS on the chart's axes using the setStyle() method −

-fx-border-color: OrangeRed transparent transparent; -fx-border-width:3
-fx-border-color: transparent OrangeRed transparent transparent; -fx-border-width:3

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.stage.Stage;
import javafx.scene.chart.LineChart;
import javafx.scene.chart.NumberAxis;
import javafx.scene.chart.XYChart;
import javafx.scene.layout.StackPane;

public class ChangingAxisColor extends Application {
   public void start(Stage stage) {
      //Defining the x axis
      NumberAxis xAxis = new NumberAxis(1960, 2020, 10);
      xAxis.setLabel("Years");

      //Defining the y axis
      NumberAxis yAxis = new NumberAxis(0, 350, 50);
      yAxis.setLabel("No.of schools");

      //Creating the line chart
      LineChart<Number,Number> linechart = new LineChart<Number,Number>(xAxis,
yAxis);
      XYChart.Series<Number,Number> series = new XYChart.Series<Number,Number>();
      series.setName("No of schools in an year");
      series.getData().add(new XYChart.Data<Number,Number>(1970, 15));
      series.getData().add(new XYChart.Data<Number,Number>(1980, 30));
      series.getData().add(new XYChart.Data<Number,Number>(1990, 60));
      series.getData().add(new XYChart.Data<Number,Number>(2000, 120));
      series.getData().add(new XYChart.Data<Number,Number>(2013, 240));
      series.getData().add(new XYChart.Data<Number,Number>(2014, 300));

      //Setting the data to the line chart
      linechart.getData().add(series);

      //Changing the color of the x and y axis
      linechart.getXAxis().setStyle("-fx-border-color: OrangeRed transparent transparent; -fx-border-width:3");
      linechart.getYAxis().setStyle("-fx-border-color: transparent OrangeRed transparent transparent; -fx-border-width:3");

      //Creating a StackPane object
      StackPane root = new StackPane(linechart);

      //Setting the scene object
      Scene scene = new Scene(root, 595, 300);
      stage.setTitle("Line Chart");
      stage.setScene(scene);
      stage.show();
   }
   public static void main(String args[]){
      launch(args);
   }
}
How to clear the Entry widget after a button is pressed in Tkinter?
Tkinter Entry widgets are used to display a single line of text that is generally taken as user input. We can clear the content of an Entry widget by calling delete(0, END), which removes all the content in the given range. The method can be invoked from a callback attached to a Button object. In this example, we have created an entry widget and a button that can be used to clear all the content of the widget.

#Import the required libraries
from tkinter import *

#Create an instance of tkinter frame
win = Tk()

#Set the geometry of frame
win.geometry("650x250")

#Define a function to clear the Entry Widget Content
def clear_text():
   text.delete(0, END)

#Create an entry widget
text = Entry(win, width=40)
text.pack()

#Create a button to clear the Entry Widget
Button(win, text="Clear", command=clear_text, font=('Helvetica bold',10)).pack(pady=5)

win.mainloop()

Running the above code will display a window that contains an entry widget and a Button which can be used to clear the text written in the entry field. Now click on the "Clear" Button to clear the Entry widget.
Installing OpenCV 3.4.3 on Raspberry Pi 3 Model B+ | by Mike Alatortsev | Towards Data Science
I previously wrote a step-by-step guide showing how to make OpenCV 3.4.1 run on a Raspberry Pi 3 B. That article generated a lot of feedback. I have completed a few installations since then, so here's a new, streamlined process for getting OpenCV 3.4.3 to run on your Raspberry Pi 3 B+. Why 3.4.3? It offers an improved DNN module and many other improvements and bug fixes. Overall, about 250 patches have been integrated and over 200 issues have been closed since OpenCV 3.4.0.

Warning: compiling OpenCV is a CPU-intensive task; all 4 cores will be maxed out for 1-2 hours. To avoid overheating, make sure your Raspberry Pi has radiators and a fan (or place a powerful external fan next to it). No, it won't die from overheating, but it will throttle its CPU performance, potentially increasing build time from 2 hours to 6 hours, and you probably don't want that.

So, here's the new process:

Step 1: make sure your OS is current. (< 5min)

The current OS version is Raspbian Stretch (April 2018). You can either do a clean install from SD (follow the instructions listed here), or upgrade your existing version. To upgrade, open (as sudo) the files /etc/apt/sources.list and /etc/apt/sources.list.d/raspi.list in your favorite editor, and change all occurrences of your current distro name (e.g. "jessie") to "stretch". Then, open a terminal and run the update:

sudo apt-get update
sudo apt-get -y dist-upgrade

If you're already running Stretch, simply update all packages before proceeding:

sudo apt-get update
sudo apt-get upgrade

Step 2: configure SSH and utilities (< 2min)

Make sure SSH is enabled. Change the default password, too!
Some of my favorite utilities on Linux are screen (to keep processes running if your terminal session is dropped) and htop (performance monitoring); these may already be pre-installed:

sudo apt-get install screen
sudo apt-get install htop

Step 3: free up 1GB+ by ditching Wolfram and Libreoffice (< 2min)

It's unlikely that you will need these two packages on your computer vision box, so:

sudo apt-get purge wolfram-engine
sudo apt-get purge libreoffice*
sudo apt-get clean
sudo apt-get autoremove

Step 4: install dependencies (< 10min)

sudo apt-get install build-essential cmake pkg-config
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk2.0-dev libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran

Step 5: install Python 3 (< 2min)

We need this in order to enable Python bindings in OpenCV:

"In OpenCV, all algorithms are implemented in C++. But these algorithms can be used from different languages like Python, Java etc. This is made possible by the bindings generators. These generators create a bridge between C++ and Python which enables users to call C++ functions from Python. To get a complete picture of what is happening in background, a good knowledge of Python/C API is required. A simple example on extending C++ functions to Python can be found in official Python documentation[1]. So extending all functions in OpenCV to Python by writing their wrapper functions manually is a time-consuming task. So OpenCV does it in a more intelligent way. OpenCV generates these wrapper functions automatically from the C++ headers using some Python scripts which are located in modules/python/src2"

sudo apt-get install python3-dev

Step 6: install pip3 (< 2min)

sudo apt-get install python3-pip

Step 7: get the latest (3.4.3) OpenCV source code (< 5min)

I am using version 3.4.3 of OpenCV.
You can check the Releases section of the official site (or Github) to see what the current build is. If your desired version is different, update the commands and paths below accordingly. Download and unzip OpenCV 3.4.3 and its experimental modules (those are stored in the opencv_contrib repository):

wget -O opencv.zip https://github.com/opencv/opencv/archive/3.4.3.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/3.4.3.zip
unzip opencv.zip
unzip opencv_contrib.zip

Step 8: install Numpy, Scipy (< 3min)

sudo pip3 install numpy scipy

Step 9: configure the OpenCV build (< 10min)

Again, I am using version 3.4.3 of OpenCV. If you aren’t — update your paths accordingly:

cd ~/opencv-3.4.3/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.4.3/modules \
    -D BUILD_EXAMPLES=ON ..

Make sure cmake finishes without errors.

Step 10: build OpenCV (90–120min)

Note: this step will take a long time (it took almost 2 hours on my device), and your Raspberry Pi will overheat without proper cooling. The build will work faster if you use all four CPU cores:

make -j4

Once OpenCV builds successfully, continue the installation:

sudo make install
sudo ldconfig
sudo apt-get update

... then reboot the system — and you should be good to go!

sudo reboot

Step 11: test your OpenCV installation (< 1min)

$ python3
Python 3.5.3 (default, September 5 2018, 14:11:04)
[GCC 6.3.0 20170124] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'3.4.3'
>>>

Comments and suggestions are welcome!

Originally published at www.alatortsev.com on September 5, 2018.
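The Step 11 check can also be scripted. Below is a minimal sketch of such a version check; it only assumes that cv2.__version__ is a dotted string like "3.4.3" (which is what the interactive session above shows), and it degrades gracefully on machines where OpenCV is not installed:

```python
# Minimal sketch: verify that an installed OpenCV meets a minimum version.
# The dotted-string comparison is generic; only the "3.4.1" floor comes
# from this guide.

def version_tuple(version):
    """Turn a dotted version string like '3.4.3' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum(installed, required="3.4.1"):
    """True if the installed version is at least the required one."""
    return version_tuple(installed) >= version_tuple(required)

if __name__ == "__main__":
    try:
        import cv2
        status = "OK" if meets_minimum(cv2.__version__) else "too old"
        print(cv2.__version__, status)
    except ImportError:
        print("OpenCV is not installed")
```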
What Does random.seed Do in NumPy | Towards Data Science
Randomness is a fundamental mathematical concept that is usually used in the context of programming as well. Sometimes, we may need to introduce some randomness when creating some toy data or when we need to perform some specific calculations that will be dependent on some random event. In today’s article, we are first going to discuss the concepts of pseudo random numbers and true randomness. Additionally, we will discuss NumPy’s random.seed and how to use it in order to create reproducible results. Finally, we will showcase how to ensure that the same seed is sustained throughout the code. A sequence of pseudo random numbers is one that has been generated using a deterministic process but appears to be statistically random. Even though the properties of sequences generated by pseudo random generators (also known as Deterministic Random Bit Generators) approximate the properties of random number sequences, in reality they are not truly random. This is because the generated sequence is determined by an initial value which is known as the seed. You can view the seed as the actual starting point of the sequence. Pseudo-random number generators generate each number based on some processes and operations performed on the previously generated value. Since the first value to be generated has no previous value on which the generator can perform these operations, the seed acts as the “previous value” for the first number to be generated. In the same way, NumPy’s random number routines generate sequences of pseudo random numbers. This is achieved by creating a sequence with the use of BitGenerators (objects that generate random numbers) and Generators that make use of the created sequences to sample from different probability distributions such as Normal, Uniform or Binomial. Now in order to generate reproducible sequences of pseudo random numbers, the BitGenerator object accepts a seed that is used to set the initial state.
This can be achieved by setting numpy.random.seed as shown below:

import numpy as np
np.random.seed(123)

Creating reproducible results is a common requirement in different use cases. For instance, when testing some piece of functionality, you may need to create reproducible results by configuring the seed to a specific value so that the generated results can be compared against the expected results. Additionally, the creation of reproducible results is common in the wider field of research. For instance, if you work with a model that uses randomness (a random forest for example) and want to publish the results (say in a paper) then you may want (and probably have) to ensure that other people and users can reproduce the results you present. It is also important to mention that the random seed in NumPy also affects other methods, such as numpy.random.permutation, and it also has a local effect. This means that if you specify numpy.random.seed only once but call numpy.random.permutation multiple times, the results that you’ll get won’t be identical (since they won’t depend on the same seed). To showcase the problem, let’s consider the following code:

import numpy as np
np.random.seed(123)
print(np.random.permutation(10))
array([4, 0, 7, 5, 8, 3, 1, 6, 9, 2])
print(np.random.permutation(10))
array([3, 5, 4, 2, 8, 7, 6, 9, 0, 1])

As you can see, the results are not reproducible even though we have set the random seed. This happens because random.seed has only a ‘local effect’. In order to reproduce the results, you’d have to specify the same random seed just before every call of np.random.permutation as shown below.

import numpy as np
np.random.seed(123)
print(np.random.permutation(10))
array([4, 0, 7, 5, 8, 3, 1, 6, 9, 2])
np.random.seed(123)
print(np.random.permutation(10))
array([4, 0, 7, 5, 8, 3, 1, 6, 9, 2])

In today’s article we discussed the concepts of true and pseudo randomness and the purpose of random.seed in NumPy and Python.
Additionally, we showcased how to create reproducible results every time we execute the same piece of code, even when the results are dependent on some (pseudo)randomness. Finally, we explored how to ensure that the effect of random seed will be sustained throughout the code when this is required.
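The seed-as-starting-point idea described above is not NumPy-specific. As an analogy (using only the standard library's random module, not NumPy's own generator), the following sketch shows the same two behaviours: a seed set once fixes one long sequence, and re-seeding restarts that sequence from the top:

```python
import random

# The seed fixes the whole pseudo-random sequence: seeding once and drawing
# twice continues the sequence, while re-seeding restarts it from the top.
random.seed(123)
first = [random.randint(0, 9) for _ in range(5)]
second = [random.randint(0, 9) for _ in range(5)]  # continues the sequence

random.seed(123)  # re-seed: restart the sequence from its starting point
first_again = [random.randint(0, 9) for _ in range(5)]

print(first == first_again)  # True: same seed, same first five draws
```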
How to Avoid Overlapping Labels in ggplot2 in R? - GeeksforGeeks
18 Oct, 2021

In this article, we are going to see how to avoid overlapping labels in ggplot2 in R Programming Language. To avoid overlapping labels in ggplot2, we use guide_axis() within scale_x_discrete().

Syntax: plot + scale_x_discrete(guide = guide_axis(<type>))

In place of <type> we can use the following properties:

n.dodge: It makes overlapping labels shift a step down.
check.overlap: This removes the overlapping labels and displays only those which do not overlap.

R

# Create sample data
set.seed(5642)
sample_data <- data.frame(name = c("Geeksforgeeks1", "Geeksforgeeks2", "Geeksforgeeks3", "Geeksforgeeks4", "Geeksforgeeks5"),
                          value = c(31, 12, 15, 28, 45))

# Load ggplot2 package
library("ggplot2")

# Create bar plot
plot <- ggplot(sample_data, aes(name, value, fill = name)) +
  geom_bar(stat = "identity")
plot

Output:

To avoid overlapping by shifting labels downward we use the n.dodge parameter of the guide_axis() function:

R

# Create sample data
set.seed(5642)
sample_data <- data.frame(name = c("Geeksforgeeks1", "Geeksforgeeks2", "Geeksforgeeks3", "Geeksforgeeks4", "Geeksforgeeks5"),
                          value = c(31, 12, 15, 28, 45))

# Load ggplot2 package
library("ggplot2")

# Create bar plot without overlapping labels
plot <- ggplot(sample_data, aes(name, value, fill = name)) +
  geom_bar(stat = "identity") +
  scale_x_discrete(guide = guide_axis(n.dodge = 2))
plot

Output:

To remove overlapping labels we use the check.overlap parameter of the guide_axis() function:

R

# Create sample data
set.seed(5642)
sample_data <- data.frame(name = c("Geeksforgeeks1", "Geeksforgeeks2", "Geeksforgeeks3", "Geeksforgeeks4", "Geeksforgeeks5"),
                          value = c(31, 12, 15, 28, 45))

# Load ggplot2 package
library("ggplot2")

# Create bar plot without overlapping labels
plot <- ggplot(sample_data, aes(name, value, fill = name)) +
  geom_bar(stat = "identity") +
  scale_x_discrete(guide = guide_axis(check.overlap = TRUE))
plot

Output:
How to style background image to no repeat with JavaScript DOM?
To style the background image to no repeat with JavaScript, use the backgroundRepeat property. It allows you to set whether the background image repeats or not on a page. You can try to run the following code to learn how to style a background image to no repeat with JavaScript −

<!DOCTYPE html>
<html>
   <body>
      <button onclick="display()">Click to Set background image</button>
      <script>
         function display() {
            document.body.style.backgroundImage = "url('https://www.tutorialspoint.com/html5/images/html5-mini-logo.jpg')";
            document.body.style.backgroundRepeat = "no-repeat";
         }
      </script>
   </body>
</html>
A Practical Guide To A/B Tests in Python | by Leihua Ye, PhD | Towards Data Science
Randomized Controlled Trials (aka. A/B tests) are the gold standard of establishing causal inference. RCTs strictly control for the randomization process and ensure equal distributions across covariates before rolling out the treatment. Thus, we can attribute the mean difference between the treatment and control groups to the intervention. A/B tests are effective and only rely on mild assumptions, and the most important assumption is the Stable Unit Treatment Value Assumption, SUTVA. It states that the treatment and control units don’t interact with each other; otherwise, the interference leads to biased estimates. My latest blog post discusses its sources and coping strategies at major tech companies. As a data scientist, I’m thrilled to see the increased adoption of Experimentation and Causal Inference in the industry. Harvard Business Review recently published an article titled “Why Business Schools Need to Teach Experimentation,” infusing the importance of incorporating experimental thinking. Relatedly, they discussed the “Surprising Power of Online Experiments” in another paper (Kohavi and Thomke, 2017). An A/B test can be roughly divided into three stages.

Stage 1 Pre-Test: run a power analysis to decide the sample size.
Stage 2 At-Test: keep an eye on the key metrics. Be aware of sudden drops.
Stage 3 Post-Test: analyze data and reach conclusions.

Today’s post shares a few best practices at each stage and walks through a hypothetical case study with detailed code implementation in Python. TikTok develops a new animal filter and wants to assess its effects on users. They are interested in two key metrics:

1. How does the filter affect user engagement (e.g., time spent on the app)?
2. How does the filter affect user retention (e.g., active)?

There are a few constraints as well. First, TikTok has no prior knowledge of its performance and prefers a small-scale study with minimal exposure.
This is the desired approach because they can end the test promptly if the key metrics plummet (e.g., zero conversation rate in the treatment group). Second, it is an urgent issue in a timely manner, and TikTok wants an answer within two weeks. Fortunately, TikTok has read my previous post on user interference and addressed the SUTVA assumption violation properly. The company decides to hire a small group of very talented data scientists, and you are the team leader in charge of model selection and research design. After consulting with multiple stakeholders, you propose an A/B test and suggest the following best practices. What is the goal of the test? How to measure success? How long should we run it? As a first step, we want to clarify the goal of the test and relay it back to the team. As mentioned, the study aims to measure user engagement and retention after rolling out the filter. Next, we move to the metrics and decide how to measure the success. As a social networking app, we adopt the time spent on the app to measure user engagement and two boolean variables, metric 1 and metric 2 (described below), indicating if the user is active after 1 day and 7 days, respectively. The remaining question is: how long should we run the test? A common strategy is to stop the experiment once we observe a statistically significant result (e.g., a small p-value). Established data scientists strongly oppose p-hacking as it leads to biased results (Kohavi et al. 2020). On a related note, Airbnb has encountered the same problem when p-hacking leads to false positives (Experiments at Airbnb). Instead, we should run a power analysis and decide a minimum sample size, according to three parameters: The significance Level, also denoted as alpha or Ξ±: the probability of rejecting a null hypothesis when it is true. By rejecting a true null hypothesis, we falsely claim there is an effect when there is no actual effect. 
Thus, it is also called the probability of False Positive. Statistical Power: the probability of correctly identifying the effect when there is indeed an effect. Power = 1 − Type II Error. The Minimum Detectable Effect, MDE: to find a widely agreed upon MDE, our data team sits down with the PM and decides the smallest acceptable difference is 0.1. In other words, the difference between the two groups scaled by the standard deviation needs to be at least 0.1. Otherwise, the release won’t compensate for the business costs incurred (e.g., engineers’ time, product lifecycle, etc.). For example, it won’t make any sense to roll out a new design if it only brings in a 0.000001% lift, even if it is statistically significant.
Here is the relationship between these three parameters and the required sample size:

Significance Level decreases → Larger Sample Size
Statistical Power increases → Larger Sample Size
The Minimum Detectable Effect decreases → Larger Sample Size

Typically, we set the significance level at 5% (or alpha = 5%) and statistical power at 80%. Thus, the sample size per group is calculated by the following formula: n = 2σ²(z_{1−α/2} + z_{1−β})² / δ², where: σ²: sample variance. δ: the difference between the treatment and control groups (in percentage). To obtain the sample variance (σ²), we typically run an A/A test that follows the same design thinking as an A/B test except assigning the same treatment to both groups. What is an A/A test? Splitting the users into two groups and then assigning the same treatment to both. Here is the code to calculate sample size in Python.

from statsmodels.stats.power import TTestIndPower

# parameters for power analysis
# effect_size has to be positive
effect = 0.1
alpha = 0.05
power = 0.8

# perform power analysis
analysis = TTestIndPower()
result = analysis.solve_power(effect, power = power, nobs1 = None, ratio = 1.0, alpha = alpha)
print('Sample Size: %.3f' % round(result))

1571.000

We need 1571 for each variant. In terms of how long we should run the test, it depends on how much traffic the app receives. Then, we divide the daily traffic equally into these two variants and wait until collecting a sufficiently large sample size (≥1571). As noted, TikTok is a tremendously popular app and has millions of DAUs. However, we are specifically targeting users who try out the new filters. Furthermore, the minimal exposure approach may take a few days to collect enough observations for the experiment.

Understand the goal of the experiment and how to measure the success.
Run an A/A test to estimate the variance of the metric. Check out my latest post on how to run and interpret A/A tests in Python.
Run a power analysis to obtain the minimum sample size.
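As a cross-check on the power analysis, the standard normal-approximation formula for a two-sample test can be evaluated with the standard library alone. This is a sketch, not statsmodels' exact computation — statsmodels solves the t-test version, which is why its answer comes out one unit larger:

```python
from statistics import NormalDist

# Normal-approximation sample size per group:
# n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / effect^2,
# where effect is the standardized minimum detectable effect (delta / sigma).
def sample_size(effect, alpha=0.05, power=0.8):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_beta = NormalDist().inv_cdf(power)           # ~0.84
    return 2 * (z_alpha + z_beta) ** 2 / effect ** 2

n = sample_size(0.1)
print(round(n))  # ~1570, in line with the t-test answer of 1571
```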
We roll out the test and initiate the data collection process. Here, we simulate the Data Generation Process (DGP) and artificially create variables that follow specific distributions. The true parameters are known to us, which comes in handy when comparing the estimated treatment effect to the true effects. In other words, we can evaluate the effectiveness of A/B tests and check to what extent they lead to unbiased results. There are five variables to be simulated in our case study:

1. userid
2. version
3. minutes of plays
4. user engagement after 1 day (metric_1)
5. user engagement after 7 days (metric_2)

# Variables 1 and 2: userid and version

We intentionally create 1600 control units and 1749 treated units to signal a potential Sample Ratio Mismatch, SRM.

# variable 1: userid
user_id_control = list(range(1, 1601))  # 1600 control
user_id_treatment = list(range(1601, 3350))  # 1749 treated

# variable 2: version
import numpy as np
control_status = ['control'] * 1600
treatment_status = ['treatment'] * 1749

# Variable 3: minutes of plays

We simulate variable 3 (“minutes of plays”) as a normal distribution with a μ of 30 minutes and a σ of 10. Specifically, the mean for the control group is 30 minutes, and the standard deviation is 10. To recap, the effect parameter (the MDE) is calculated as the difference between the two groups divided by the standard deviation, (μ_2 − μ_1)/σ = 0.1. According to the formula, we obtain μ_2 = 31. The standard deviation is also 10.
# for control group
μ_1 = 30
σ_1 = 10
np.random.seed(123)
minutes_control = np.random.normal(loc = μ_1, scale = σ_1, size = 1600)

# for treatment group, which increases the user engagement:
# according to the formula (μ_2 − μ_1)/σ = 0.1, we obtain μ_2 = 31
μ_2 = 31
σ_2 = 10
np.random.seed(123)
minutes_treat = np.random.normal(loc = μ_2, scale = σ_2, size = 1749)

# variable 4: user engagement after 1 day, metric_1

Our simulation shows that the control group has 30% active (True) and 70% inactive (False) users after 1 day (metric_1), while the treatment has 35% active and 65% inactive users, respectively.

Active_status = [True, False]

# control
day_1_control = np.random.choice(Active_status, 1600, p=[0.3, 0.7])
# treatment
day_1_treatment = np.random.choice(Active_status, 1749, p=[0.35, 0.65])

# variable 5: user engagement after 7 days, metric_2

The simulation data shows the control group has a 35% active user rate, while the treatment has 25% after 7 days.

# control
day_7_control = np.random.choice(Active_status, 1600, p=[0.35, 0.65])
# treatment
day_7_treatment = np.random.choice(Active_status, 1749, p=[0.25, 0.75])

The true data contains a reversed pattern: the treatment performs better in the short term but the control group comes back and stands out after one week. Let’s check if the A/B test picks up the reversed signal.

final_data.head()

For the complete simulation process, please refer to my Github. Don’t end your A/B tests prematurely after witnessing some initial positive effects. No early stopping! No p-hacking! Instead, end it when you have reached the minimum sample size.
# calculate the number of users in each version
final_data.groupby('version')['user_id'].count()

It appears to be a suspicious variant split: 1600 control units but 1749 treatment units. The treatment assignment process looks suspicious at face value, as more users are assigned to the treatment than the control. To formally check for the SRM, we conduct a chi-square test between the actual split and the expected split of the treated and control units (Kohavi et al. 2020).

from scipy.stats import chisquare
chisquare([1600, 1749], f_exp = [1675, 1675])
Power_divergenceResult(statistic=6.627462686567164, pvalue=0.010041820594939122)

We set the alpha level at 0.001 to test SRM. Since the p-value (0.01) is larger than this threshold, we fail to reject the null hypothesis and conclude there is no evidence of SRM. In contrast to our intuition, the statistical test concludes that the treatment assignment works as expected. Since the variable minutes_play is a float, we have to round it to the nearest integer before grouping.

%matplotlib inline
final_data['minutes_play_integers'] = round(final_data['minutes_play'])
plot_df = final_data.groupby('minutes_play_integers')['user_id'].count()

# Plot the distribution of players that played 0 to 50 minutes
ax = plot_df.head(n=50).plot(x="minutes_play_integers", y="user_id", kind="hist")
ax.set_xlabel("Duration of Video Played in Minutes")
ax.set_ylabel("User Count")

# 1-day retention
final_data['day_1_active'].mean()
0.3248730964467005

After 1 day, the overall active user rate, on average, hovers around 32.5%.

# 1-day retention by group
final_data.groupby('version')['day_1_active'].mean()

After taking a closer look, the control group has 29.7% active users, and the treatment has 35%. Naturally, we are interested in the following questions: Is the higher retention rate in the treatment group statistically significant? What is its variability? If we repeat the process 10,000 times, how often do we observe at least as extreme values?
Bootstrap can answer these questions. It is a resample strategy that repeatedly samples from the original data with replacement. According to the Central Limit Theorem, the distribution of the resample means is approximately normally distributed (check my other posts on Bootstrap, in R or Python).

# solution: bootstrap
boot_means = []

# run the simulation for 10k times
for i in range(10000):
    # set frac=1 → sample all rows
    boot_sample = final_data.sample(frac=1, replace=True).groupby('version')['day_1_active'].mean()
    boot_means.append(boot_sample)

# a Pandas DataFrame
boot_means = pd.DataFrame(boot_means)

# kernel density estimate
boot_means.plot(kind = 'kde')

# create a new column, diff, which is the difference between the two variants, scaled by the control group
boot_means['diff'] = (boot_means['treatment'] - boot_means['control']) / boot_means['control'] * 100
boot_means['diff']

# plot the bootstrap sample difference
ax = boot_means['diff'].plot(kind = 'kde')
ax.set_xlabel("% diff in means")

boot_means[boot_means['diff'] > 0]

# p value
p = (boot_means['diff'] > 0).sum() / len(boot_means)
p
0.9996

After bootstrapping 10,000 times, the treatment has a higher 1-day retention rate 99.96% of the time. Awesome! The test result is consistent with our original simulated data. We apply the same analysis to the 7-day metric.
boot_7d = []
for i in range(10000):
    boot_mean = final_data.sample(frac=1, replace=True).groupby('version')['day_7_active'].mean()
    boot_7d.append(boot_mean)

boot_7d = pd.DataFrame(boot_7d)
boot_7d['diff'] = (boot_7d['treatment'] - boot_7d['control']) / boot_7d['control'] * 100

# Plotting the bootstrap % difference
ax = boot_7d['diff'].plot(kind = 'kde')
ax.set_xlabel("% diff in means")

# Calculating the probability that 7-day retention is greater in the treatment
p = (boot_7d['diff'] > 0).sum() / len(boot_7d)
1 - p
0.9983

On the 7-day metric, the control obviously has a better user retention rate 99.83% of the time, also consistent with the original data. The reversed pattern between 1-day and 7-day metrics supports the novelty effect, as users become activated and intrigued by the new design, not because the change actually improves engagement. The novelty effect is common in consumer-side A/B tests.

SRM is a real concern. We apply a chi-square test to formally test for the SRM. If the p-value is smaller than the threshold (α = 0.001), the randomization process does not work as expected. An SRM introduces selection bias that invalidates any test results.
Three fundamental statistical concepts to master: SRM, chi-square test, and bootstrap.
Compare short-term and long-term metrics to evaluate the novelty effect.

For the complete simulation process, please refer to my Github. An A/B test requires extensive statistical knowledge and careful attention to detail. There are thousands of ways of ruining your test results but only one way to do it correctly. Follow the best practices described before, during, and after the experiments and set up your experiments for success.
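The SRM chi-square check from the case study can be reproduced without scipy, using the one-degree-of-freedom identity P(χ²₁ > x) = 2(1 − Φ(√x)). The expected counts below are the exact halves of the 3,349 total, so the result differs only negligibly from the rounded [1675, 1675] used with scipy:

```python
from statistics import NormalDist

# Chi-square goodness-of-fit statistic for the observed traffic split.
def chi_square_stat(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

total = 1600 + 1749
stat = chi_square_stat([1600, 1749], [total / 2, total / 2])

# For 1 degree of freedom: P(chi2 > x) = 2 * (1 - Phi(sqrt(x)))
p_value = 2 * (1 - NormalDist().cdf(stat ** 0.5))
print(round(stat, 2), round(p_value, 3))  # statistic ~6.63, p ~0.01
```

Since the p-value is above the 0.001 threshold, the split is consistent with a 50/50 assignment, matching the scipy result.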
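To make the bootstrap idea from Stage 3 concrete outside pandas, here is a minimal pure-Python sketch. The retention rates and group sizes are illustrative stand-ins matching the article's summary statistics (29.7% vs. 35% on 1600 and 1749 users), not the exact simulated draws:

```python
import random

random.seed(42)

# Illustrative 1-day activity flags: ~29.7% active in control (1600 users),
# ~35% active in treatment (1749 users).
control = [1] * 475 + [0] * 1125
treatment = [1] * 612 + [0] * 1137

def boot_mean(data):
    # resample with replacement, same size as the original group
    sample = random.choices(data, k=len(data))
    return sum(sample) / len(sample)

# difference in resampled retention rates, repeated 2000 times
diffs = [boot_mean(treatment) - boot_mean(control) for _ in range(2000)]
share_positive = sum(d > 0 for d in diffs) / len(diffs)
print(round(share_positive, 2))  # close to 1: treatment wins in nearly every resample
```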
MySQL Select where value exists more than once
For this, you can use GROUP BY HAVING along with the COUNT(*) function. Let us first create a table −

mysql> create table DemoTable
   -> (
   -> Value int
   -> );
Query OK, 0 rows affected (0.47 sec)

Insert some records in the table using the insert command −

mysql> insert into DemoTable values(20);
Query OK, 1 row affected (0.18 sec)
mysql> insert into DemoTable values(10);
Query OK, 1 row affected (0.08 sec)
mysql> insert into DemoTable values(30);
Query OK, 1 row affected (0.12 sec)
mysql> insert into DemoTable values(10);
Query OK, 1 row affected (0.16 sec)
mysql> insert into DemoTable values(30);
Query OK, 1 row affected (0.12 sec)
mysql> insert into DemoTable values(40);
Query OK, 1 row affected (0.12 sec)
mysql> insert into DemoTable values(50);
Query OK, 1 row affected (0.27 sec)

Display all records from the table using the select statement −

mysql> select *from DemoTable;

This will produce the following output −

+-------+
| Value |
+-------+
|    20 |
|    10 |
|    30 |
|    10 |
|    30 |
|    40 |
|    50 |
+-------+
7 rows in set (0.00 sec)

Following is the query to select where a value exists more than once −

mysql> select *from DemoTable
   -> group by Value
   -> having count(*) > 1;

This will produce the following output −

+-------+
| Value |
+-------+
|    10 |
|    30 |
+-------+
2 rows in set (0.38 sec)
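The GROUP BY ... HAVING COUNT(*) > 1 pattern is the same in SQLite, so the example can be reproduced end-to-end with Python's built-in sqlite3 module (an illustration in a different engine, not MySQL itself):

```python
import sqlite3

# In-memory database with the same table and values as the MySQL example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DemoTable (Value INTEGER)")
conn.executemany(
    "INSERT INTO DemoTable VALUES (?)",
    [(20,), (10,), (30,), (10,), (30,), (40,), (50,)],
)

# Keep only the values that appear more than once.
rows = conn.execute(
    "SELECT Value FROM DemoTable GROUP BY Value HAVING COUNT(*) > 1 ORDER BY Value"
).fetchall()
print(rows)  # [(10,), (30,)]
conn.close()
```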
How to Find Factorial of Number Using Recursion in Python?
Factorial of a number is the product of all numbers from 1 to that number. A function is called a recursive function if it calls itself. In the following program, the factorial() function accepts one argument and keeps calling itself, reducing the value by one, till it reaches 1.

def factorial(x):
    if x <= 1:
        # base case: 1! = 1 (using <= also makes factorial(0) return 1
        # instead of recursing forever)
        return 1
    else:
        return x * factorial(x - 1)

f = factorial(5)
print("factorial of 5 is", f)

The result is

factorial of 5 is 120
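For comparison, the same product can be computed with a plain loop, which avoids Python's recursion depth limit for large inputs; math.factorial from the standard library serves as a cross-check:

```python
import math

# Iterative factorial: multiply 2..n together (empty product = 1 covers 0 and 1).
def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Cross-check against the built-in implementation.
for n in (0, 1, 5, 10):
    assert factorial_iterative(n) == math.factorial(n)

print(factorial_iterative(5))  # 120
```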
Bootstrap checkbox class
When building a form, use the checkbox class if you want the user to select any number of options from a list. Use the .checkbox-inline class on a series of checkboxes to make the controls appear on the same line. You can try to run the following code to implement the Bootstrap checkbox class −

<!DOCTYPE html>
<html>
   <head>
      <title>Try v1.2 Bootstrap Online</title>
      <link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet">
      <script src = "/scripts/jquery.min.js"></script>
      <script src = "/bootstrap/js/bootstrap.min.js"></script>
   </head>
   <body>
      <label for = "name">Favourite Live Streaming</label>
      <div class = "checkbox">
         <label>
            <input type = "checkbox" value = "">Amazon Prime
         </label>
      </div>
      <div class = "checkbox">
         <label>
            <input type = "checkbox" value = "">Hotstar
         </label>
      </div>
   </body>
</html>
time.LoadLocation() Function in Golang With Examples - GeeksforGeeks
21 Apr, 2020

In Go language, the time package supplies functionality for determining as well as viewing time. The LoadLocation() function in Go language is used to find a location with the stated name. So, if the stated name is “UTC” then it returns UTC, and if the stated name is “Local” then it returns Local. Otherwise, the name is assumed to be a location corresponding to a file in the IANA Time Zone database, which is present only on Unix systems. Moreover, this function is defined under the time package. Here, you need to import the “time” package in order to use these functions.

Syntax: func LoadLocation(name string) (*Location, error)

Here, “name” is the name of the location to be used, and *Location is the pointer to the Location, where “Location” forms the set of time offsets in use. And “error” is a panic error.

Return Value: It returns the location with the stated name.

Example 1:

// Golang program to illustrate the usage of
// LoadLocation() function

// Including main package
package main

// Importing fmt and time
import (
    "fmt"
    "time"
)

// Calling main
func main() {

    // Calling LoadLocation
    // method with its parameter
    locat, error := time.LoadLocation("Asia/Kolkata")

    // If error not equal to nil then
    // return panic error
    if error != nil {
        panic(error)
    }

    // Prints location
    fmt.Println(locat)
}

Output:

Asia/Kolkata

Here, the IANA time zone of India is returned as there is no error.
Example 2:

// Golang program to illustrate the usage of
// LoadLocation() function

// Including main package
package main

// Importing fmt and time
import (
    "fmt"
    "time"
)

// Calling main
func main() {

    // Calling LoadLocation
    // method with its parameter
    locat, error := time.LoadLocation("Asia/Kolkata")

    // If error not
    // equal to nil then
    // return panic error
    if error != nil {
        panic(error)
    }

    // Calling Date() method
    // with its parameter
    tm := time.Date(2020, 4, 7, 16, 7, 0, 0, time.UTC)

    // Prints the time and date
    // of the stated location
    fmt.Println(tm.In(locat))
}

Output:

2020-04-07 21:37:00 +0530 IST

Here, at first the LoadLocation() method is called, then the Date() method is called with its parameters, i.e. the date and time, and the date and time in the stated location are returned.
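As a cross-language aside (an addition of this note, not part of the original article), the same IANA database lookup exists in Python 3.9+ through the standard-library zoneinfo module; like Go's LoadLocation, it relies on the IANA Time Zone database being available on the system (Unix systems, or the tzdata package):

```python
# Sketch: looking up an IANA time zone in Python (3.9+),
# roughly analogous to Go's time.LoadLocation.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Raises ZoneInfoNotFoundError if the name is unknown,
# much like LoadLocation returning a non-nil error
locat = ZoneInfo("Asia/Kolkata")
print(locat)  # Asia/Kolkata

# Convert a UTC instant into that zone, like tm.In(locat) in Go
tm = datetime(2020, 4, 7, 16, 7, tzinfo=timezone.utc)
print(tm.astimezone(locat))  # 2020-04-07 21:37:00+05:30
```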
PyQt5 - Adding action to Radio Button - GeeksforGeeks
22 Apr, 2020 In this article we will see how we can set an action on a radio button. Setting an action on a radio button means adding a method to it which gets called, and performs some task, when the radio button gets checked or unchecked. In order to add an action to a radio button we will use the toggled.connect method.

Syntax: radio_button.toggled.connect(method_name)

Argument: It takes a method name as its argument.

Action performed: It will call the method associated with it when the radio button is toggled.

Below is the implementation.

# importing libraries
from PyQt5.QtWidgets import *
from PyQt5 import QtCore, QtGui
from PyQt5.QtGui import *
from PyQt5.QtCore import *
import sys

class Window(QMainWindow):

    def __init__(self):
        super().__init__()

        # setting title
        self.setWindowTitle("Python ")

        # setting geometry
        self.setGeometry(100, 100, 600, 400)

        # calling method
        self.UiComponents()

        # showing all the widgets
        self.show()

    # method for widgets
    def UiComponents(self):

        # creating a radio button
        self.radio_button = QRadioButton(self)

        # setting geometry of radio button
        self.radio_button.setGeometry(200, 150, 120, 40)

        # setting text to radio button
        self.radio_button.setText("GEEK ?")

        # creating label to display if it is checked or not
        self.label = QLabel("", self)

        # setting geometry of label
        self.label.setGeometry(200, 200, 150, 40)

        # adding action to radio button
        self.radio_button.toggled.connect(self.action)

    # method called by radio button
    def action(self):

        # changing the content of label
        self.label.setText("Action performed")

# create pyqt5 app
App = QApplication(sys.argv)

# create the instance of our Window
window = Window()

# start the app
sys.exit(App.exec())

Output:
JavaScript Nullish Coalescing Operator - GeeksforGeeks
11 Sep, 2020 Below is an example of the Nullish Coalescing Operator.

Example:

<script>
function foo(bar) {
    bar = bar ?? 55;
    document.write(bar);
    document.write("</br>");
}
foo();   // 55
foo(22); // 22
</script>

Output:

55
22

Nullish Coalescing Operator: It is a new feature introduced in an ECMA proposal that has now been adopted into the official JavaScript specification. This operator returns the right-hand value if the left-hand value is null or undefined; otherwise it returns the left-hand value. Before it existed, setting default values for undefined and null variables required the use of an if statement or the Logical OR operator "||" as shown below:

Program:

<script>
function foo(bar) {
    bar = bar || 42;
    console.log(bar);
}

// Output: 42
foo();
</script>

Output:

42

When the passed parameter is less than the number of parameters defined in the function prototype, it is assigned the value of undefined. To set default values for the parameters not passed during the function call, or to set default values for fields not present in a JSON object, the above method is popular. However, it misbehaves for falsy arguments:

Program:

<script>
function foo(bar) {
    bar = bar || 42;
    console.log(bar);
}

// Output: 42
foo(0);
</script>

Output:

42

There are values in JavaScript, like 0 and the empty string, that are logically false by nature. These values may change the expected behavior of programs written in JavaScript. These recurring problems led to the development of the Nullish Coalescing Operator. The Nullish Coalescing Operator is written as two adjacent question marks ?? and its use is as follows:

Syntax:

variable ?? default_value

Examples: If the passed variable is either null or undefined, and only for those two values, the default value is returned. In all other cases, including 0, an empty string, or false, the value of the variable is returned and not the default value.

Program 1:

<script>
function foo(bar) {
    bar = bar ?? 42;
    console.log(bar);
}

foo();  // 42
foo(0); // 0
</script>

Output:

42
0

Program 2: The more common use case is to set default values for JSON objects as follows.

<script>
const foo = {
    bar: 0
}

const valueBar = foo.bar ?? 42;
const valueBaz = foo.baz ?? 42;

// Value of bar: 0
console.log("Value of bar: ", valueBar);

// Value of baz: 42
console.log("Value of baz: ", valueBaz);
</script>

Output:

Value of bar: 0
Value of baz: 42

Supported Browsers: The browsers that support the JavaScript Nullish Coalescing Operator are listed below:

Google Chrome 80
Firefox 72
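The same falsy-value pitfall appears in other languages too. As an illustrative aside (an addition of this note, not part of the original article, with hypothetical helper names), here is a Python sketch of the distinction: `or` behaves like JavaScript's `||`, while an explicit `is None` check behaves like `??`:

```python
# Sketch: Python analog of || vs ?? (helper names are illustrative).
def with_or(bar=None):
    # like `bar || 42`: treats 0, "" and False as missing too
    return bar or 42

def with_nullish(bar=None):
    # like `bar ?? 42`: only None (null/undefined) triggers the default
    return bar if bar is not None else 42

print(with_or(0))       # 42 -- 0 is falsy, so the default wins
print(with_nullish(0))  # 0  -- 0 is kept, like the ?? operator
print(with_nullish())   # 42 -- only the missing value gets the default
```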
HTML autocapitalize Attribute - GeeksforGeeks
14 Nov, 2021 The HTML autocapitalize attribute is used to define whether the text entered inside an HTML element should be automatically capitalized or not. It is a global attribute, which means it can be applied to all HTML elements.

Features:

It specifies how the text will be automatically capitalized.
It indicates that the first letter of a word or sentence should be capital.
It has no effect on <input> tags with type URL, Email, or Password.
It is a global attribute.

Syntax:

<tag_name autocapitalize="off | none | on | sentences | words | characters" />

Attribute Values:

off/none: The text will not be capitalized.
on/sentences: The first letter of each sentence will be capital.
words: The first letter of each word will be capital.
characters: The whole text will be capitalized.

Example: The HTML code below uses the autocapitalize attribute with the <input> tag.

<!DOCTYPE html>
<html>

<head>
    <title>
        HTML | autocapitalize Attribute <input>
    </title>
</head>

<body style="text-align:center">
    <h1 style="color: green;">
        GeeksforGeeks
    </h1>

    <h2>
        HTML | autocapitalize Attribute with <input> Tag
    </h2>

    Name: <input type="text" autocapitalize="words" autofocus>
    <br><br>

    <!-- Assign id to the Button. -->
    <button id="GFG">
        Submit
    </button>
    <br>
</body>

</html>

Output:

Important Note: This attribute works with virtual keyboards, such as those on mobile devices, and with voice input. It does not work with physical keyboards.

Supported Browsers:

Google Chrome 43.0
Apple Safari 5.0
OSI, TCP/IP and Hybrid models - GeeksforGeeks
03 Nov, 2021 OSI model: Developed by the International Organization for Standardization (ISO), the Open Systems Interconnection model, or OSI model, is a critical building block in networking. It helps in troubleshooting and understanding networks because of the layered approach that it follows; the layers, from bottom to top, are:

(i) Physical layer
(ii) Data Link layer
(iii) Network layer
(iv) Transport layer
(v) Session layer
(vi) Presentation layer
(vii) Application layer

Advantages:

Provides standards and interoperability.
Split development (a person working in layer 3 need not be concerned with layer 7).
Quicker development (as each layer is independent of the others, development in an OSI model is faster as compared to the old proprietary models).

TCP/IP model: The OSI model was used for connectionless protocols like CLNS and CLNP; but with the advent of TCP (a connection-oriented protocol) a new model, i.e. the TCP/IP model, came into play. In this model, the Application, Presentation and Session layers of the OSI model were combined to form the Application layer of the TCP/IP model; the Data Link and Physical layers of the OSI model were combined to form the Network Access layer of the TCP/IP model; and the Internet layer of the TCP/IP model is the equivalent of the Network layer of the OSI model.

Advantages:

TCP/IP supports various network routing protocols.
It is scalable and based on client-server architecture.
It is an open protocol suite, i.e. it is not proprietary, so anyone can use it.
TCP/IP works independently of the OS.

Hybrid model: In the real world, we use a mix of both the OSI model and the TCP/IP model, called the Hybrid model. In the Hybrid model, the Application layer is a combination of layer 7, layer 6 and layer 5 of the OSI model (similar to the TCP/IP model). The remaining layers (layers 1, 2, 3 and 4) are the same as in the OSI model.
How to Add Dividers in Android RecyclerView? - GeeksforGeeks
27 Sep, 2021 In the article Android RecyclerView in Kotlin, it has been demonstrated how to implement the RecyclerView in Android. But for a good user experience, the items need to be distinguished with dividers and proper padding and margins on each item. This is where RecyclerView ItemDecoration comes into the picture. So, in this article it is demonstrated how to actually use RecyclerView ItemDecoration to draw default and custom dividers between RecyclerView items. Have a look at the following image to get an overview of the entire discussion.

Create an empty activity project

To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio.

Add Required Dependency

Include the Google material design components dependency in the build.gradle file. After adding the dependencies don't forget to click on the "Sync Now" button present at the top right corner.

implementation "androidx.recyclerview:recyclerview:1.2.1"

Note that while syncing the project you need to be connected to the network, and make sure that you are adding the dependency to the app-level Gradle file as shown below.

Step 1: Working with the activity_main.xml file

The main layout of the project contains one RecyclerView for demonstration purposes. To implement the same, invoke the following code inside the activity_main.xml file.

XML

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <androidx.recyclerview.widget.RecyclerView
        android:id="@+id/recyclerView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

Before going to the output we need to populate the RecyclerView with the data.
So we need to now work with RecyclerView Adapter and a custom view for the RecyclerView. Step 2: Create a custom view for RecyclerView The custom view for the RecyclerView contains one simple icon at the left and two TextViews. To implement the same create a file named recycler_data_view.xml inside the layout folder and invoke the following code. XML <?xml version="1.0" encoding="utf-8"?><androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="wrap_content" android:padding="16dp"> <ImageView android:id="@+id/imageView" android:layout_width="54dp" android:layout_height="54dp" android:src="@drawable/ic_android" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> <TextView android:id="@+id/tvNumber" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginStart="16dp" android:textSize="24sp" android:textStyle="bold" app:layout_constraintBottom_toTopOf="@+id/tvNumbersInText" app:layout_constraintStart_toEndOf="@+id/imageView" app:layout_constraintTop_toTopOf="parent" tools:text="1" /> <TextView android:id="@+id/tvNumbersInText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:textSize="16sp" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintStart_toStartOf="@+id/tvNumber" app:layout_constraintTop_toBottomOf="@+id/tvNumber" tools:text="One" /> </androidx.constraintlayout.widget.ConstraintLayout> The above custom view produces the following output for each item in the list: Step 3: Creating Data Class for RecyclerView Now creating the data for the above custom view by creating a Data Class, using the following code. 
Kotlin data class RecyclerViewData( val text1: String, val text2: String) Step 4: Creating the RecyclerView Adapter The following code needs to invoke in a separate class by creating the class named as MyRecyclerAdapter. Kotlin import android.view.LayoutInflaterimport android.view.Viewimport android.view.ViewGroupimport android.widget.TextViewimport androidx.recyclerview.widget.RecyclerView class MyRecyclerViewAdapter(private val items: List<RecyclerViewData>) : RecyclerView.Adapter<MyRecyclerViewAdapter.MyRecyclerViewDataHolder>() { inner class MyRecyclerViewDataHolder(itemView: View) : RecyclerView.ViewHolder(itemView) override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MyRecyclerViewDataHolder { val view: View = LayoutInflater.from(parent.context).inflate(R.layout.recycler_data_view, parent, false) return MyRecyclerViewDataHolder(view) } override fun onBindViewHolder(holder: MyRecyclerViewDataHolder, position: Int) { val currentItem: RecyclerViewData = items[position] val tvNumber: TextView = holder.itemView.findViewById(R.id.tvNumber) tvNumber.text = currentItem.text1 val tvNumbersInText: TextView = holder.itemView.findViewById(R.id.tvNumbersInText) tvNumbersInText.text = currentItem.text2 } override fun getItemCount(): Int { return items.size }} Step 5: Working with the MainActivity.kt file In this class, we have to create some sample data for the RecyclerView in the form of a list. To implement the same invoke the following code inside the MainActivity.kt file (comments are added for better understanding). Kotlin package com.adityamshidlyali.gfgautohidefab import android.os.Bundleimport androidx.appcompat.app.AppCompatActivityimport androidx.recyclerview.widget.LinearLayoutManagerimport androidx.recyclerview.widget.RecyclerView class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) 
{ super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) // create instance of RecyclerView // and register with the appropriate ID val recyclerView: RecyclerView = findViewById(R.id.recyclerView) // create list of RecyclerViewData var recyclerViewData = listOf<RecyclerViewData>() recyclerViewData = recyclerViewData + RecyclerViewData("1", "One") recyclerViewData = recyclerViewData + RecyclerViewData("2", "Two") recyclerViewData = recyclerViewData + RecyclerViewData("3", "Three") recyclerViewData = recyclerViewData + RecyclerViewData("4", "Four") recyclerViewData = recyclerViewData + RecyclerViewData("5", "Five") recyclerViewData = recyclerViewData + RecyclerViewData("6", "Six") recyclerViewData = recyclerViewData + RecyclerViewData("7", "Seven") recyclerViewData = recyclerViewData + RecyclerViewData("8", "Eight") recyclerViewData = recyclerViewData + RecyclerViewData("9", "Nine") recyclerViewData = recyclerViewData + RecyclerViewData("10", "Ten") recyclerViewData = recyclerViewData + RecyclerViewData("11", "Eleven") recyclerViewData = recyclerViewData + RecyclerViewData("12", "Twelve") recyclerViewData = recyclerViewData + RecyclerViewData("13", "Thirteen") recyclerViewData = recyclerViewData + RecyclerViewData("14", "Fourteen") recyclerViewData = recyclerViewData + RecyclerViewData("15", "Fifteen") // create a vertical layout manager val layoutManager = LinearLayoutManager(this, LinearLayoutManager.VERTICAL, false) // create instance of MyRecyclerViewAdapter val myRecyclerViewAdapter = MyRecyclerViewAdapter(recyclerViewData) // attach the adapter to the recycler view recyclerView.adapter = myRecyclerViewAdapter // also attach the layout manager recyclerView.layoutManager = layoutManager // make the adapter the data set // changed for the recycler view myRecyclerViewAdapter.notifyDataSetChanged() }} Output: Creating the default divider for the items in RecyclerView We have to create a default Divider using addItemDecoration() method with the 
RecyclerView instance, we need to pass the ItemDecoration(in this case it is DividerItemDecoration()) instance and the orientation of the LayoutManager(in this case it is vertical) of the recycler view. To implement the same invoke the following code inside the MainActivity.kt file(Comments are added for better understanding). Kotlin import android.os.Bundleimport androidx.appcompat.app.AppCompatActivityimport androidx.recyclerview.widget.DividerItemDecorationimport androidx.recyclerview.widget.LinearLayoutManagerimport androidx.recyclerview.widget.RecyclerView class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) // create instance of RecyclerView and register with the appropriate ID val recyclerView: RecyclerView = findViewById(R.id.recyclerView) // create list of RecyclerViewData var recyclerViewData = listOf<RecyclerViewData>() recyclerViewData = recyclerViewData + RecyclerViewData("1", "One") recyclerViewData = recyclerViewData + RecyclerViewData("2", "Two") recyclerViewData = recyclerViewData + RecyclerViewData("3", "Three") recyclerViewData = recyclerViewData + RecyclerViewData("4", "Four") recyclerViewData = recyclerViewData + RecyclerViewData("5", "Five") recyclerViewData = recyclerViewData + RecyclerViewData("6", "Six") recyclerViewData = recyclerViewData + RecyclerViewData("7", "Seven") recyclerViewData = recyclerViewData + RecyclerViewData("8", "Eight") recyclerViewData = recyclerViewData + RecyclerViewData("9", "Nine") recyclerViewData = recyclerViewData + RecyclerViewData("10", "Ten") recyclerViewData = recyclerViewData + RecyclerViewData("11", "Eleven") recyclerViewData = recyclerViewData + RecyclerViewData("12", "Twelve") recyclerViewData = recyclerViewData + RecyclerViewData("13", "Thirteen") recyclerViewData = recyclerViewData + RecyclerViewData("14", "Fourteen") recyclerViewData = recyclerViewData + RecyclerViewData("15", "Fifteen") 
// create a vertical layout manager val layoutManager = LinearLayoutManager(this, LinearLayoutManager.VERTICAL, false) // create instance of MyRecyclerViewAdapter val myRecyclerViewAdapter = MyRecyclerViewAdapter(recyclerViewData) // attach the adapter to the recycler view recyclerView.adapter = myRecyclerViewAdapter // also attach the layout manager recyclerView.layoutManager = layoutManager // call the method addItemDecoration with the // recyclerView instance and add default Item Divider recyclerView.addItemDecoration( DividerItemDecoration( baseContext, layoutManager.orientation ) ) // make the adapter the data set // changed for the recycler view myRecyclerViewAdapter.notifyDataSetChanged() }} Output: Now creating custom divider for RecyclerView items If the divider needs to be custom then there is a need to create our own shape in the drawable folder. So here the shape is a rectangle with the height of 2dp and the color, green. To implement the same shape invoke the following code inside the divider.xml file and create this file inside the drawable folder. XML <?xml version="1.0" encoding="utf-8"?><shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="rectangle"> <size android:height="2dp" /> <solid android:color="@color/green_500" /></shape> Now we need to create our own ItemDecoration class and override the onDraw() method. This method is called once and this method decides where the divider needs to be drawn and how to be drawn. One main important thing here is to do not add the divider for the first and last items. To implement the same invoke the following code inside the RecyclerViewItemDecoration.kt file(comments are added for better understanding). 
Kotlin import android.content.Contextimport android.graphics.Canvasimport android.graphics.drawable.Drawableimport android.view.Viewimport androidx.core.content.ContextCompatimport androidx.recyclerview.widget.RecyclerView class RecyclerViewItemDecoration( context: Context, resId: Int) : RecyclerView.ItemDecoration() { private var mDivider: Drawable = ContextCompat.getDrawable(context, resId)!! override fun onDraw(c: Canvas, parent: RecyclerView, state: RecyclerView.State) { super.onDraw(c, parent, state) // left margin for the divider val dividerLeft: Int = 32 // right margin for the divider with // reference to the parent width val dividerRight: Int = parent.width - 32 // this loop creates the top and bottom // divider for each items in the RV // as each items are different for (i in 0 until parent.childCount) { // this condition is because the last // and the first items in the RV have // no dividers in the list if (i != parent.childCount - 1) { val child: View = parent.getChildAt(i) val params = child.layoutParams as RecyclerView.LayoutParams // calculating the distance of the // divider to be drawn from the top val dividerTop: Int = child.bottom + params.bottomMargin val dividerBottom: Int = dividerTop + mDivider.intrinsicHeight mDivider.setBounds(dividerLeft, dividerTop, dividerRight, dividerBottom) mDivider.draw(c) } } }} Now, this custom ItemDecoration needs to be attached to the recycler view. So now working with the MainActivity.kt file, where we need to pass the instance of custom ItemDecoration class(in this case RecyclerViewItemDecoration). To implement the same invoke the following code inside the MainActivity.kt file. Kotlin import android.os.Bundleimport androidx.appcompat.app.AppCompatActivityimport androidx.recyclerview.widget.DividerItemDecorationimport androidx.recyclerview.widget.LinearLayoutManagerimport androidx.recyclerview.widget.RecyclerView class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) 
{ super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) // create instance of RecyclerView and register with the appropriate ID val recyclerView: RecyclerView = findViewById(R.id.recyclerView) // create list of RecyclerViewData var recyclerViewData = listOf<RecyclerViewData>() recyclerViewData = recyclerViewData + RecyclerViewData("1", "One") recyclerViewData = recyclerViewData + RecyclerViewData("2", "Two") recyclerViewData = recyclerViewData + RecyclerViewData("3", "Three") recyclerViewData = recyclerViewData + RecyclerViewData("4", "Four") recyclerViewData = recyclerViewData + RecyclerViewData("5", "Five") recyclerViewData = recyclerViewData + RecyclerViewData("6", "Six") recyclerViewData = recyclerViewData + RecyclerViewData("7", "Seven") recyclerViewData = recyclerViewData + RecyclerViewData("8", "Eight") recyclerViewData = recyclerViewData + RecyclerViewData("9", "Nine") recyclerViewData = recyclerViewData + RecyclerViewData("10", "Ten") recyclerViewData = recyclerViewData + RecyclerViewData("11", "Eleven") recyclerViewData = recyclerViewData + RecyclerViewData("12", "Twelve") recyclerViewData = recyclerViewData + RecyclerViewData("13", "Thirteen") recyclerViewData = recyclerViewData + RecyclerViewData("14", "Fourteen") recyclerViewData = recyclerViewData + RecyclerViewData("15", "Fifteen") // create a vertical layout manager val layoutManager = LinearLayoutManager(this, LinearLayoutManager.VERTICAL, false) // create instance of MyRecyclerViewAdapter val myRecyclerViewAdapter = MyRecyclerViewAdapter(recyclerViewData) // attach the adapter to the recycler view recyclerView.adapter = myRecyclerViewAdapter // also attach the layout manager recyclerView.layoutManager = layoutManager // call the method addItemDecoration with the // recyclerView instance and pass custom ItemDecoration instance recyclerView.addItemDecoration(RecyclerViewItemDecoration(this, R.drawable.divider)) // make the adapter the data set changed // for the recycler view 
    myRecyclerViewAdapter.notifyDataSetChanged() }} Output:
Find the K-th Permutation Sequence of first N natural numbers - GeeksforGeeks
19 Jan, 2021 Given two integers N and K, find the Kth permutation sequence of the numbers from 1 to N without using an STL function. Note: Assume that the inputs are such that the Kth permutation of N numbers is always possible.

Examples:

Input: N = 3, K = 4
Output: 231
Explanation: The ordered list of permutation sequences of the integers 1 to 3 is: 123, 132, 213, 231, 312, 321. So, the 4th permutation sequence is "231".

Input: N = 2, K = 1
Output: 12
Explanation: For n = 2, only 2 permutations are possible: 12, 21. So, the 1st permutation sequence is "12".

Naive Approach: To solve the problem, the simple approach is to generate all permutation sequences and output the kth of them. But this method is not efficient and takes more time, hence it can be optimized.

Efficient Approach: To optimize the method mentioned above, observe that the value of k can be directly used to find the number at each index of the sequence. The first position of an n-length sequence is occupied by each of the numbers from 1 to n exactly n! / n, that is (n-1)!, times, and in ascending order. So the first position of the kth sequence will be occupied by the number present at index = k / (n-1)! (according to 1-based indexing). The number just found cannot occur again, so it is removed from the original n numbers, and now the problem reduces to finding the (k % (n-1)!)th permutation sequence of the remaining n-1 numbers. This process can be repeated until we have only one number left, which is placed in the first position of the last 1-length sequence. The factorial values involved here can be very large compared to k. So, the trick used to avoid the full computation of such large factorials is that as soon as the product n * (n-1) * ...
becomes greater than k, we no longer need to find the actual factorial value because: k / n_actual_factorial_value = 0 and k / n_partial_factorial_value = 0 when partial_factorial_value > k Below is the implementation of the above approach: C++ Java Python3 C# // C++ program to Find the kth Permutation// Sequence of first n natural numbers #include <bits/stdc++.h>using namespace std; // Function to find the index of number// at first position of// kth sequence of set of size nint findFirstNumIndex(int& k, int n){ if (n == 1) return 0; n--; int first_num_index; // n_actual_fact = n! int n_partial_fact = n; while (k >= n_partial_fact && n > 1) { n_partial_fact = n_partial_fact * (n - 1); n--; } // First position of the // kth sequence will be // occupied by the number present // at index = k / (n-1)! first_num_index = k / n_partial_fact; k = k % n_partial_fact; return first_num_index;} // Function to find the// kth permutation of n numbersstring findKthPermutation(int n, int k){ // Store final answer string ans = ""; set<int> s; // Insert all natural number // upto n in set for (int i = 1; i <= n; i++) s.insert(i); set<int>::iterator itr; // Mark the first position itr = s.begin(); // subtract 1 to get 0 based indexing k = k - 1; for (int i = 0; i < n; i++) { int index = findFirstNumIndex(k, n - i); advance(itr, index); // itr now points to the // number at index in set s ans += (to_string(*itr)); // remove current number from the set s.erase(itr); itr = s.begin(); } return ans;} // Driver codeint main(){ int n = 3, k = 4; string kth_perm_seq = findKthPermutation(n, k); cout << kth_perm_seq << endl; return 0;} // Java program to Find// the kth Permutation// Sequence of first// n natural numbersimport java.util.*;class GFG{ // Function to find the index of// number at first position of// kth sequence of set of size nstatic int findFirstNumIndex(int k, int n){ if (n == 1) return 0; n--; int first_num_index; // n_actual_fact = n! 
int n_partial_fact = n; while (k >= n_partial_fact && n > 1) { n_partial_fact = n_partial_fact * (n - 1); n--; } // First position of the // kth sequence will be // occupied by the number present // at index = k / (n-1)! first_num_index = k / n_partial_fact; k = k % n_partial_fact; return first_num_index;} // Function to find the// kth permutation of n numbersstatic String findKthPermutation(int n, int k){ // Store final answer String ans = ""; HashSet<Integer> s = new HashSet<>(); // Insert all natural number // upto n in set for (int i = 1; i <= n; i++) s.add(i); Vector<Integer> v = new Vector<>(); v.addAll(s); // Mark the first position int itr = v.elementAt(0); // Subtract 1 to // get 0 based // indexing k = k - 1; for (int i = 0; i < n; i++) { int index = findFirstNumIndex(k, n - i); // itr now points to the // number at index in set s if(index < v.size()) { ans += ((v.elementAt(index).toString())); v.remove(index); } else ans += String.valueOf(itr + 2); // Remove current number // from the set itr = v.elementAt(0); } return ans;} // Driver codepublic static void main(String[] args){ int n = 3, k = 4; String kth_perm_seq = findKthPermutation(n, k); System.out.print(kth_perm_seq + "\n");}} // This code is contributed by Rajput-Ji # Python3 program to find the kth permutation# Sequence of first n natural numbers # Function to find the index of number# at first position of kth sequence of# set of size ndef findFirstNumIndex(k, n): if (n == 1): return 0, k n -= 1 first_num_index = 0 # n_actual_fact = n! n_partial_fact = n while (k >= n_partial_fact and n > 1): n_partial_fact = n_partial_fact * (n - 1) n -= 1 # First position of the kth sequence # will be occupied by the number present # at index = k / (n-1)! 
first_num_index = k // n_partial_fact k = k % n_partial_fact return first_num_index, k # Function to find the# kth permutation of n numbersdef findKthPermutation(n, k): # Store final answer ans = "" s = set() # Insert all natural number # upto n in set for i in range(1, n + 1): s.add(i) # Subtract 1 to get 0 based indexing k = k - 1 for i in range(n): # Mark the first position itr = list(s) index, k = findFirstNumIndex(k, n - i) # itr now points to the # number at index in set s ans += str(itr[index]) # remove current number from the set itr.pop(index) s = set(itr) return ans # Driver codeif __name__=='__main__': n = 3 k = 4 kth_perm_seq = findKthPermutation(n, k) print(kth_perm_seq) # This code is contributed by rutvik_56 // C# program to Find// the kth Permutation// Sequence of first// n natural numbersusing System;using System.Collections.Generic;class GFG{ // Function to find the index of// number at first position of// kth sequence of set of size nstatic int findFirstNumIndex(int k, int n){ if (n == 1) return 0; n--; int first_num_index; // n_actual_fact = n! int n_partial_fact = n; while (k >= n_partial_fact && n > 1) { n_partial_fact = n_partial_fact * (n - 1); n--; } // First position of the // kth sequence will be // occupied by the number present // at index = k / (n-1)! 
first_num_index = k / n_partial_fact; k = k % n_partial_fact; return first_num_index;} // Function to find the// kth permutation of n numbersstatic String findKthPermutation(int n, int k){ // Store readonly answer String ans = ""; HashSet<int> s = new HashSet<int>(); // Insert all natural number // upto n in set for (int i = 1; i <= n; i++) s.Add(i); List<int> v = new List<int>(s); // Mark the first position int itr = v[0]; // Subtract 1 to // get 0 based // indexing k = k - 1; for (int i = 0; i < n; i++) { int index = findFirstNumIndex(k, n - i); // itr now points to the // number at index in set s if(index < v.Count) { ans += ((v[index].ToString())); v.RemoveAt(index); } else ans += String.Join("", itr + 2); // Remove current number // from the set itr = v[0]; } return ans;} // Driver codepublic static void Main(String[] args){ int n = 3, k = 4; String kth_perm_seq = findKthPermutation(n, k); Console.Write(kth_perm_seq + "\n");}} // This code is contributed by Rajput-Ji

Output:
231
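The efficient approach above can be sketched compactly in Python. The function name kth_permutation is illustrative (it is not from the original listings), and for brevity this sketch computes full factorials with math.factorial, whereas the article's code deliberately stops the partial product once it exceeds k to avoid overflow in fixed-width languages.

```python
from math import factorial

def kth_permutation(n: int, k: int) -> str:
    """Return the k-th (1-based) permutation of 1..n in lexicographic order."""
    nums = list(range(1, n + 1))   # remaining candidates, kept in ascending order
    k -= 1                         # switch to 0-based indexing
    result = []
    for i in range(n, 0, -1):
        f = factorial(i - 1)       # permutations fixed by choosing one leading number
        index, k = divmod(k, f)    # which remaining candidate leads this block
        result.append(str(nums.pop(index)))
    return "".join(result)

print(kth_permutation(3, 4))  # "231", matching the first example above
print(kth_permutation(2, 1))  # "12", matching the second example above
```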
gdb command in Linux with examples - GeeksforGeeks
20 May, 2019

gdb is the acronym for GNU Debugger. This tool helps to debug programs written in C, C++, Ada, Fortran, etc. The console can be opened using the gdb command on the terminal.

Syntax:
gdb [-help] [-nx] [-q] [-batch] [-cd=dir] [-f] [-b bps] [-tty=dev] [-s symfile] [-e prog] [-se prog] [-c core] [-x cmds] [-d dir] [prog[core|procID]]

Example: The program to be debugged should be compiled with the -g option. The C++ file below, saved as gfg.cpp, is used throughout this article.

#include <iostream>
#include <stdlib.h>
#include <string.h>
using namespace std;

int findSquare(int a)
{
    return a * a;
}

int main(int n, char** args)
{
    for (int i = 1; i < n; i++) {
        int a = atoi(args[i]);
        cout << findSquare(a) << endl;
    }
    return 0;
}

Compile the above C++ program using the command:
g++ -g -o gfg gfg.cpp

To start the debugger on the gfg executable, enter the command gdb gfg. It opens the gdb console of the current program, after printing the version information.

run [args] : This command runs the current executable file. For example, the program can be executed twice, once with the command line argument 10 and once with the command line argument 1, printing the corresponding outputs.

quit or q : To quit the gdb console, either quit or q can be used.

help : It launches the manual of gdb along with the list of classes of individual commands.

break : The command break [function name] helps to pause the program during execution when it starts to execute the function, which helps to debug the program at that point. Multiple breakpoints can be inserted by executing the command wherever necessary. The command b findSquare makes the gfg executable pause when the debugger starts to execute the findSquare function. The general forms are:

b [function name]
break [function name]
break [file name]:[line number]
break [line number]
break *[address]
break ***any of the above arguments*** if [condition]
b ***any of the above arguments***

In the above example, the program that was being executed (run 10 100) paused when it encountered the findSquare function call. The program pauses whenever the function is called. Once the command is successful, it prints the breakpoint number, information of the program counter, file name, and the line number. As it encounters any breakpoint during execution, it prints the breakpoint number, function name with the values of the arguments, file name, and line number. The breakpoint can be set either with the address of the instruction (in hexadecimal form preceded with *0x) or the line number, and it can be combined with an if condition (if the condition fails, the breakpoint will not be set). For example, break findSquare if a == 10.

continue : This command helps to resume the current executable after it is paused by a breakpoint. It executes the program until it encounters a breakpoint, a runtime error, or the end of the program. If there is an integer argument (repeat count), it will execute the continue command "repeat count" number of times.

continue [repeat count]
c [repeat count]

next or n : This command helps to execute the next instruction after a breakpoint is hit. Whenever this command is invoked, it executes the next instruction of the executable, printing the line in execution.

delete : This command helps to delete breakpoints and checkpoints. If the delete command is executed without any arguments, it deletes all the breakpoints without modifying any of the checkpoints. Similarly, if the checkpoint of the parent process is deleted, all the child checkpoints are automatically deleted.

d
delete
delete [breakpoint number 1] [breakpoint number 2] ...
delete checkpoint [checkpoint number 1] [checkpoint number 2] ...

In the above example, two breakpoints were defined, one at main and the other at findSquare. Using the above command, the findSquare breakpoint was deleted. If there is no argument after the command, the command deletes all the breakpoints.

clear : This command deletes the breakpoint at the function with the name FUNCTION_NAME. If the argument is a number, then it deletes the breakpoint that lies on that particular line.

clear [line number]
clear [FUNCTION_NAME]

In the above example, once the clear command is executed, the breakpoint is deleted after printing the breakpoint number.

disable [breakpoint number 1] [breakpoint number 2] ... : Instead of deleting or clearing the breakpoints, they can be disabled and enabled whenever necessary.

enable [breakpoint number 1] [breakpoint number 2] ... : To enable the disabled breakpoints, this command is used.

info : When info breakpoints is invoked, the breakpoint number, type, display, status, address, and location will be displayed. If the breakpoint number is specified, only the information about that particular breakpoint will be displayed. Similarly, when info checkpoints is invoked, the checkpoint number, the process id, program counter, file name, and line number are displayed.

info breakpoints [breakpoint number 1] [breakpoint number 2] ...
info checkpoints [checkpoint number 1] [checkpoint number 2] ...

checkpoint and restart : The checkpoint command creates a new process, keeps that process in suspended mode, and prints the created process's process id. For example, in the above execution, the breakpoint is kept at function findSquare and the program was executed with the arguments "1 10 100". When the function is called initially with a = 1, the breakpoint happens. Now we create a checkpoint and hence gdb returns a process id (4272), keeps it in suspended mode, and resumes the original thread once the continue command is invoked. Now the breakpoint happens with a = 10 and another checkpoint (pid = 4278) is created. In the info checkpoints output, the asterisk marks the process that will run if gdb encounters a continue. To resume a specific process, the restart command is used with an argument that specifies the serial number of the process. If all the processes have finished executing, the info checkpoints command returns nothing.

set args [arg1] [arg2] ... : This command creates the argument list and passes the specified arguments as the command line arguments whenever the run command is invoked without any argument. If the run command is executed with arguments after set args, the arguments are updated. Whenever the run command is run without arguments, the arguments set earlier are used by default.

show args : The show args command prints the default arguments that will be passed if the run command is executed. If either set args or the run command is executed with arguments, the default arguments will get updated, and can be viewed again using show args.

display [/format specifier] [expression] and undisplay [display id1] [display id2] ... : These commands enable automatic displaying of expressions each time the execution encounters a breakpoint or the n command. The undisplay command is used to remove display expressions. Valid format specifiers are as follows:

o - octal
x - hexadecimal
d - decimal
u - unsigned decimal
t - binary
f - floating point
a - address
c - char
s - string
i - instruction

In the above example, the breakpoint is set at line 12 and the program is run with the arguments 1 10 100. Once the breakpoint is encountered, the display command is executed to print the value of i in hexadecimal form and the value of args[i] in string form.
After that, whenever the command n or a breakpoint is encountered, the values are displayed again until they are disabled using the undisplay command.

print : This command prints the value of a given expression. The display command prints all the previously displayed values whenever it encounters a breakpoint or the next command, whereas the print command saves all the previously displayed values and prints them whenever it is called.

print [Expression]
print $[Previous value number]
print {[Type]}[Address]
print [First element]@[Element count]
print /[Format] [Expression]

file : The gdb console can be opened using the gdb command. To debug an executable from the console, the file [executable filename] command is used.
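The commands above can also be driven non-interactively using gdb's -batch and -ex options. The session below is a sketch, not the article's own (its screenshots are not reproduced here); it assumes g++ and gdb are installed and reuses the gfg.cpp program from the start of the article.

```shell
# Recreate the article's sample program in the current directory.
cat > gfg.cpp <<'EOF'
#include <iostream>
#include <stdlib.h>
using namespace std;

int findSquare(int a) { return a * a; }

int main(int n, char** args)
{
    for (int i = 1; i < n; i++)
        cout << findSquare(atoi(args[i])) << endl;
    return 0;
}
EOF

# Compile with debug symbols (-g), as gdb requires.
g++ -g -o gfg gfg.cpp

# Set a breakpoint in findSquare, run with the argument 10, inspect the
# parameter at the breakpoint, then resume until the program exits.
gdb -batch \
    -ex 'break findSquare' \
    -ex 'run 10' \
    -ex 'print a' \
    -ex 'continue' \
    ./gfg
```

At the breakpoint, print a shows the parameter value (10); after continue, the program prints 100 and exits.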
Find a permutation that causes worst case of Merge Sort - GeeksforGeeks
18 Jan, 2022

Given a set of elements, find which permutation of these elements would result in the worst case of Merge Sort.

Asymptotically, merge sort always takes O(n log n) time, but the cases that require more comparisons generally take more time in practice. We basically need to find a permutation of the input elements that leads to the maximum number of comparisons when sorted using a typical Merge Sort algorithm.

Example: Consider the below set of elements
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
The below permutation of the set causes 153 comparisons.
{1, 9, 5, 13, 3, 11, 7, 15, 2, 10, 6, 14, 4, 12, 8, 16}
And an already sorted permutation causes 30 comparisons.

Now how to get the worst case input for merge sort for an input set? Let us try to build the array in a bottom-up manner. Let the sorted array be {1,2,3,4,5,6,7,8}. In order to generate the worst case of merge sort, the merge operation that resulted in the above sorted array should result in maximum comparisons. In order to do so, the left and right sub-arrays involved in the merge operation should store alternate elements of the sorted array, i.e. the left sub-array should be {1,3,5,7} and the right sub-array should be {2,4,6,8}. Now every element of the array will be compared at least once, and that will result in maximum comparisons. We apply the same logic to the left and right sub-arrays as well. For the array {1,3,5,7}, the worst case will be when its left and right sub-arrays are {1,5} and {3,7} respectively, and for the array {2,4,6,8} the worst case will occur for {2,4} and {6,8}.

Complete Algorithm – GenerateWorstCase(arr[]):
1. Create two auxiliary arrays left and right and store alternate array elements in them.
2. Call GenerateWorstCase for the left subarray: GenerateWorstCase(left)
3. Call GenerateWorstCase for the right subarray: GenerateWorstCase(right)
4. Copy all elements of the left and right subarrays back to the original array.
Below is the implementation of the idea C++ C Java C# Javascript Python3 // C++ program to generate Worst Case// of Merge Sort#include <bits/stdc++.h>using namespace std; // Function to print an arrayvoid printArray(int A[], int size){ for(int i = 0; i < size; i++) { cout << A[i] << " "; } cout << endl;} // Function to join left and right subarrayint join(int arr[], int left[], int right[], int l, int m, int r){ int i; for(i = 0; i <= m - l; i++) arr[i] = left[i]; for(int j = 0; j < r - m; j++) { arr[i + j] = right[j]; }} // Function to store alternate elements in// left and right subarrayint split(int arr[], int left[], int right[], int l, int m, int r){ for(int i = 0; i <= m - l; i++) left[i] = arr[i * 2]; for(int i = 0; i < r - m; i++) right[i] = arr[i * 2 + 1];} // Function to generate Worst Case// of Merge Sortint generateWorstCase(int arr[], int l, int r){ if (l < r) { int m = l + (r - l) / 2; // Create two auxiliary arrays int left[m - l + 1]; int right[r - m]; // Store alternate array elements // in left and right subarray split(arr, left, right, l, m, r); // Recurse first and second halves generateWorstCase(left, l, m); generateWorstCase(right, m + 1, r); // Join left and right subarray join(arr, left, right, l, m, r); }} // Driver codeint main(){ // Sorted array int arr[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 }; int n = sizeof(arr) / sizeof(arr[0]); cout << "Sorted array is \n"; printArray(arr, n); // Generate Worst Case of Merge Sort generateWorstCase(arr, 0, n - 1); cout << "\nInput array that will result " << "in worst case of merge sort is \n"; printArray(arr, n); return 0;} // This code is contributed by Mayank Tyagi // C program to generate
Worst Case of Merge Sort

Given a set of elements, find which permutation of these elements would result in the worst case of Merge Sort. Asymptotically, merge sort always takes O(n log n) time, but the inputs that require the most comparisons take noticeably longer in practice. The idea is to reverse the merge step: starting from the sorted array, distribute alternate elements into the left and right halves, recurse on each half, and then concatenate them back.

C

// C program to generate Worst Case of Merge Sort
#include <stdlib.h>
#include <stdio.h>

// Function to print an array
void printArray(int A[], int size)
{
    for (int i = 0; i < size; i++)
        printf("%d ", A[i]);
    printf("\n");
}

// Function to join left and right subarray
void join(int arr[], int left[], int right[],
          int l, int m, int r)
{
    int i; // Used in second loop
    for (i = 0; i <= m - l; i++)
        arr[i] = left[i];

    for (int j = 0; j < r - m; j++)
        arr[i + j] = right[j];
}

// Function to store alternate elements in left
// and right subarray
void split(int arr[], int left[], int right[],
           int l, int m, int r)
{
    for (int i = 0; i <= m - l; i++)
        left[i] = arr[i * 2];

    for (int i = 0; i < r - m; i++)
        right[i] = arr[i * 2 + 1];
}

// Function to generate Worst Case of Merge Sort
void generateWorstCase(int arr[], int l, int r)
{
    if (l < r) {
        int m = l + (r - l) / 2;

        // create two auxiliary arrays
        int left[m - l + 1];
        int right[r - m];

        // Store alternate array elements in left
        // and right subarray
        split(arr, left, right, l, m, r);

        // Recurse first and second halves
        generateWorstCase(left, l, m);
        generateWorstCase(right, m + 1, r);

        // join left and right subarray
        join(arr, left, right, l, m, r);
    }
}

// Driver code
int main()
{
    // Sorted array
    int arr[] = { 1, 2, 3, 4, 5, 6, 7, 8,
                  9, 10, 11, 12, 13, 14, 15, 16 };
    int n = sizeof(arr) / sizeof(arr[0]);

    printf("Sorted array is \n");
    printArray(arr, n);

    // generate Worst Case of Merge Sort
    generateWorstCase(arr, 0, n - 1);

    printf("\nInput array that will result in "
           "worst case of merge sort is \n");
    printArray(arr, n);

    return 0;
}

Java

// Java program to generate Worst Case of Merge Sort
import java.util.Arrays;

class GFG {

    // Function to join left and right subarray
    static void join(int arr[], int left[], int right[],
                     int l, int m, int r)
    {
        int i;
        for (i = 0; i <= m - l; i++)
            arr[i] = left[i];

        for (int j = 0; j < r - m; j++)
            arr[i + j] = right[j];
    }

    // Function to store alternate elements in left
    // and right subarray
    static void split(int arr[], int left[], int right[],
                      int l, int m, int r)
    {
        for (int i = 0; i <= m - l; i++)
            left[i] = arr[i * 2];

        for (int i = 0; i < r - m; i++)
            right[i] = arr[i * 2 + 1];
    }

    // Function to generate Worst Case of Merge Sort
    static void generateWorstCase(int arr[], int l, int r)
    {
        if (l < r) {
            int m = l + (r - l) / 2;

            // create two auxiliary arrays
            int[] left = new int[m - l + 1];
            int[] right = new int[r - m];

            // Store alternate array elements in left
            // and right subarray
            split(arr, left, right, l, m, r);

            // Recurse first and second halves
            generateWorstCase(left, l, m);
            generateWorstCase(right, m + 1, r);

            // join left and right subarray
            join(arr, left, right, l, m, r);
        }
    }

    // driver program
    public static void main(String[] args)
    {
        // sorted array
        int arr[] = { 1, 2, 3, 4, 5, 6, 7, 8,
                      9, 10, 11, 12, 13, 14, 15, 16 };
        int n = arr.length;

        System.out.println("Sorted array is");
        System.out.println(Arrays.toString(arr));

        // generate Worst Case of Merge Sort
        generateWorstCase(arr, 0, n - 1);

        System.out.println("\nInput array that will result in\n"
                           + "worst case of merge sort is");
        System.out.println(Arrays.toString(arr));
    }
}
// Contributed by Pramod Kumar

C#

// C# program to generate Worst Case of
// Merge Sort
using System;

class GFG {

    // Function to join left and right subarray
    static void join(int[] arr, int[] left, int[] right,
                     int l, int m, int r)
    {
        int i;
        for (i = 0; i <= m - l; i++)
            arr[i] = left[i];

        for (int j = 0; j < r - m; j++)
            arr[i + j] = right[j];
    }

    // Function to store alternate elements in
    // left and right subarray
    static void split(int[] arr, int[] left, int[] right,
                      int l, int m, int r)
    {
        for (int i = 0; i <= m - l; i++)
            left[i] = arr[i * 2];

        for (int i = 0; i < r - m; i++)
            right[i] = arr[i * 2 + 1];
    }

    // Function to generate Worst Case of
    // Merge Sort
    static void generateWorstCase(int[] arr, int l, int r)
    {
        if (l < r) {
            int m = l + (r - l) / 2;

            // create two auxiliary arrays
            int[] left = new int[m - l + 1];
            int[] right = new int[r - m];

            // Store alternate array elements
            // in left and right subarray
            split(arr, left, right, l, m, r);

            // Recurse first and second halves
            generateWorstCase(left, l, m);
            generateWorstCase(right, m + 1, r);

            // join left and right subarray
            join(arr, left, right, l, m, r);
        }
    }

    // driver program
    public static void Main()
    {
        // sorted array
        int[] arr = { 1, 2, 3, 4, 5, 6, 7, 8,
                      9, 10, 11, 12, 13, 14, 15, 16 };
        int n = arr.Length;

        Console.Write("Sorted array is\n");
        for (int i = 0; i < n; i++)
            Console.Write(arr[i] + " ");

        // generate Worst Case of Merge Sort
        generateWorstCase(arr, 0, n - 1);

        Console.Write("\nInput array that will "
                      + "result in \nworst case of"
                      + " merge sort is \n");
        for (int i = 0; i < n; i++)
            Console.Write(arr[i] + " ");
    }
}
// This code is contributed by Smitha

Javascript

<script>
// javascript program to generate Worst Case
// of Merge Sort

// Function to print an array
function printArray(A, size)
{
    for (let i = 0; i < size; i++) {
        document.write(A[i] + " ");
    }
}

// Function to join left and right subarray
function join(arr, left, right, l, m, r)
{
    let i;
    for (i = 0; i <= m - l; i++)
        arr[i] = left[i];

    for (let j = 0; j < r - m; j++) {
        arr[i + j] = right[j];
    }
}

// Function to store alternate elements in
// left and right subarray
function split(arr, left, right, l, m, r)
{
    for (let i = 0; i <= m - l; i++)
        left[i] = arr[i * 2];

    for (let i = 0; i < r - m; i++)
        right[i] = arr[i * 2 + 1];
}

// Function to generate Worst Case
// of Merge Sort
function generateWorstCase(arr, l, r)
{
    if (l < r) {
        let m = l + parseInt((r - l) / 2, 10);

        // Create two auxiliary arrays
        let left = new Array(m - l + 1);
        let right = new Array(r - m);
        left.fill(0);
        right.fill(0);

        // Store alternate array elements
        // in left and right subarray
        split(arr, left, right, l, m, r);

        // Recurse first and second halves
        generateWorstCase(left, l, m);
        generateWorstCase(right, m + 1, r);

        // Join left and right subarray
        join(arr, left, right, l, m, r);
    }
}

// Driver code
let arr = [ 1, 2, 3, 4, 5, 6, 7, 8,
            9, 10, 11, 12, 13, 14, 15, 16 ];
let n = arr.length;

document.write("Sorted array is" + "</br>");
printArray(arr, n);

// Generate Worst Case of Merge Sort
generateWorstCase(arr, 0, n - 1);

document.write("</br>" + "Input array that will result "
               + "in worst case of merge sort is" + "</br>");
printArray(arr, n);

// This code is contributed by vaibhavrabadiya117.
</script>

Python3

# Python program to generate Worst Case of Merge Sort

# Function to join left and right subarray
def join(arr, left, right, l, m, r):
    i = 0
    for i in range(m - l + 1):
        arr[i] = left[i]
        i += 1
    for j in range(r - m):
        arr[i + j] = right[j]

# Function to store alternate elements in left
# and right subarray
def split(arr, left, right, l, m, r):
    for i in range(m - l + 1):
        left[i] = arr[i * 2]
    for i in range(r - m):
        right[i] = arr[i * 2 + 1]

# Function to generate Worst Case of Merge Sort
def generateWorstCase(arr, l, r):
    if (l < r):
        m = l + (r - l) // 2

        # create two auxiliary arrays
        left = [0 for i in range(m - l + 1)]
        right = [0 for i in range(r - m)]

        # Store alternate array elements in left
        # and right subarray
        split(arr, left, right, l, m, r)

        # Recurse first and second halves
        generateWorstCase(left, l, m)
        generateWorstCase(right, m + 1, r)

        # join left and right subarray
        join(arr, left, right, l, m, r)

# driver program
if __name__ == '__main__':

    # sorted array
    arr = [1, 2, 3, 4, 5, 6, 7, 8,
           9, 10, 11, 12, 13, 14, 15, 16]
    n = len(arr)

    print("Sorted array is")
    print(arr)

    # generate Worst Case of Merge Sort
    generateWorstCase(arr, 0, n - 1)

    print("\nInput array that will result in\n"
          + "worst case of merge sort is")
    print(arr)

# This code contributed by shikhasingrajput

Output:
Sorted array is
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Input array that will result in worst case of merge sort is
1 9 5 13 3 11 7 15 2 10 6 14 4 12 8 16

References – Stack Overflow
This article is contributed by Aditya Goel.
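To see that the generated permutation really is bad for merge sort, we can count comparisons directly. The sketch below is not part of the original article; the helper names (`generate_worst_case`, `merge_sort_count`) are our own, and the generator is a functional rephrasing of the split/join logic above.

```python
# Count element comparisons merge sort performs on the generated
# worst-case input versus an already-sorted input of the same size.

def generate_worst_case(arr):
    # Same idea as split/join above: alternate elements go to the
    # left and right halves, recurse, then concatenate.
    if len(arr) <= 1:
        return arr
    left = generate_worst_case(arr[::2])
    right = generate_worst_case(arr[1::2])
    return left + right

def merge_sort_count(a):
    # Returns (sorted list, number of element comparisons made in merges).
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_count(a[:mid])
    right, cr = merge_sort_count(a[mid:])
    merged, i, j, comps = [], 0, 0, cl + cr
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comps

worst = generate_worst_case(list(range(1, 17)))
print(worst)
# -> [1, 9, 5, 13, 3, 11, 7, 15, 2, 10, 6, 14, 4, 12, 8, 16]
# (matches the article's output)

print(merge_sort_count(worst)[1])                 # 49 comparisons (worst case)
print(merge_sort_count(list(range(1, 17)))[1])    # 32 comparisons (sorted input)
```

Every merge of two runs of size k then needs the maximum 2k - 1 comparisons, which for n = 16 gives 8 + 12 + 14 + 15 = 49, versus 32 for the sorted input.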
Implement Stack using Queues - GeeksforGeeks
04 Oct, 2021

The problem is the opposite of this post. We are given a Queue data structure that supports standard operations like enqueue() and dequeue(). We need to implement a Stack data structure using only instances of Queue and the queue operations allowed on those instances.

A stack can be implemented using two queues. Let the stack to be implemented be 's' and the queues used to implement it be 'q1' and 'q2'. Stack 's' can be implemented in two ways:

Method 1 (By making push operation costly)

This method makes sure that the newly entered element is always at the front of 'q1', so that the pop operation just dequeues from 'q1'. 'q2' is used to put every new element at the front of 'q1'.

push(s, x) operation's steps are described below:
1. Enqueue x to q2.
2. One by one, dequeue everything from q1 and enqueue it to q2.
3. Swap the names of q1 and q2.

pop(s) operation's function is described below:
1. Dequeue an item from q1 and return it.

Below is the implementation of the above approach:

C++

/* Program to implement a stack using
two queues */
#include <bits/stdc++.h>
using namespace std;

class Stack {
    // Two inbuilt queues
    queue<int> q1, q2;

    // To maintain current number of
    // elements
    int curr_size;

public:
    Stack()
    {
        curr_size = 0;
    }

    void push(int x)
    {
        curr_size++;

        // Push x first in empty q2
        q2.push(x);

        // Push all the remaining
        // elements in q1 to q2.
        while (!q1.empty()) {
            q2.push(q1.front());
            q1.pop();
        }

        // swap the names of two queues
        queue<int> q = q1;
        q1 = q2;
        q2 = q;
    }

    void pop()
    {
        // if no elements are there in q1
        if (q1.empty())
            return;
        q1.pop();
        curr_size--;
    }

    int top()
    {
        if (q1.empty())
            return -1;
        return q1.front();
    }

    int size()
    {
        return curr_size;
    }
};

// Driver code
int main()
{
    Stack s;
    s.push(1);
    s.push(2);
    s.push(3);

    cout << "current size: " << s.size() << endl;
    cout << s.top() << endl;
    s.pop();
    cout << s.top() << endl;
    s.pop();
    cout << s.top() << endl;

    cout << "current size: " << s.size() << endl;
    return 0;
}
// This code is contributed by Chhavi

Java

/* Java Program to implement a stack using
two queues */
import java.util.*;

class GfG {

    static class Stack {
        // Two inbuilt queues
        static Queue<Integer> q1 = new LinkedList<Integer>();
        static Queue<Integer> q2 = new LinkedList<Integer>();

        // To maintain current number of
        // elements
        static int curr_size;

        Stack()
        {
            curr_size = 0;
        }

        static void push(int x)
        {
            curr_size++;

            // Push x first in empty q2
            q2.add(x);

            // Push all the remaining
            // elements in q1 to q2.
            while (!q1.isEmpty()) {
                q2.add(q1.peek());
                q1.remove();
            }

            // swap the names of two queues
            Queue<Integer> q = q1;
            q1 = q2;
            q2 = q;
        }

        static void pop()
        {
            // if no elements are there in q1
            if (q1.isEmpty())
                return;
            q1.remove();
            curr_size--;
        }

        static int top()
        {
            if (q1.isEmpty())
                return -1;
            return q1.peek();
        }

        static int size()
        {
            return curr_size;
        }
    }

    // driver code
    public static void main(String[] args)
    {
        Stack s = new Stack();
        s.push(1);
        s.push(2);
        s.push(3);

        System.out.println("current size: " + s.size());
        System.out.println(s.top());
        s.pop();
        System.out.println(s.top());
        s.pop();
        System.out.println(s.top());

        System.out.println("current size: " + s.size());
    }
}
// This code is contributed by Prerna

Python3

# Program to implement a stack using
# two queues
from queue import Queue

class Stack:

    def __init__(self):

        # Two inbuilt queues
        self.q1 = Queue()
        self.q2 = Queue()

        # To maintain current number
        # of elements
        self.curr_size = 0

    def push(self, x):
        self.curr_size += 1

        # Push x first in empty q2
        self.q2.put(x)

        # Push all the remaining
        # elements in q1 to q2.
        while (not self.q1.empty()):
            self.q2.put(self.q1.queue[0])
            self.q1.get()

        # swap the names of two queues
        self.q = self.q1
        self.q1 = self.q2
        self.q2 = self.q

    def pop(self):

        # if no elements are there in q1
        if (self.q1.empty()):
            return
        self.q1.get()
        self.curr_size -= 1

    def top(self):
        if (self.q1.empty()):
            return -1
        return self.q1.queue[0]

    def size(self):
        return self.curr_size

# Driver Code
if __name__ == '__main__':
    s = Stack()
    s.push(1)
    s.push(2)
    s.push(3)

    print("current size: ", s.size())
    print(s.top())
    s.pop()
    print(s.top())
    s.pop()
    print(s.top())
    print("current size: ", s.size())

# This code is contributed by PranchalK

C#

/* C# Program to implement a stack using
two queues */
using System;
using System.Collections;

class GfG {

    public class Stack {
        // Two inbuilt queues
        public Queue q1 = new Queue();
        public Queue q2 = new Queue();

        // To maintain current number of
        // elements
        public int curr_size;

        public Stack()
        {
            curr_size = 0;
        }

        public void push(int x)
        {
            curr_size++;

            // Push x first in empty q2
            q2.Enqueue(x);

            // Push all the remaining
            // elements in q1 to q2.
            while (q1.Count > 0) {
                q2.Enqueue(q1.Peek());
                q1.Dequeue();
            }

            // swap the names of two queues
            Queue q = q1;
            q1 = q2;
            q2 = q;
        }

        public void pop()
        {
            // if no elements are there in q1
            if (q1.Count == 0)
                return;
            q1.Dequeue();
            curr_size--;
        }

        public int top()
        {
            if (q1.Count == 0)
                return -1;
            return (int)q1.Peek();
        }

        public int size()
        {
            return curr_size;
        }
    };

    // Driver code
    public static void Main(String[] args)
    {
        Stack s = new Stack();
        s.push(1);
        s.push(2);
        s.push(3);

        Console.WriteLine("current size: " + s.size());
        Console.WriteLine(s.top());
        s.pop();
        Console.WriteLine(s.top());
        s.pop();
        Console.WriteLine(s.top());
        Console.WriteLine("current size: " + s.size());
    }
}
// This code is contributed by Arnab Kundu

Output:
current size: 3
3
2
1
current size: 1

Method 2 (By making pop operation costly)

In push operation, the new element is always enqueued to q1.
In pop() operation, all the elements except the last are moved from q1 to q2 (q2 starts out empty); finally, the last element is dequeued from q1 and returned.

push(s, x) operation:
1. Enqueue x to q1 (assuming the size of q1 is unlimited).

pop(s) operation:
1. One by one, dequeue everything except the last element from q1 and enqueue it to q2.
2. Dequeue the last item of q1; the dequeued item is the result, so store it.
3. Swap the names of q1 and q2.
4. Return the item stored in step 2.

C++

/* Program to implement a stack
using two queues */
#include <bits/stdc++.h>
using namespace std;

class Stack {
    queue<int> q1, q2;
    int curr_size;

public:
    Stack()
    {
        curr_size = 0;
    }

    void pop()
    {
        if (q1.empty())
            return;

        // Leave one element in q1 and
        // push others in q2.
        while (q1.size() != 1) {
            q2.push(q1.front());
            q1.pop();
        }

        // Pop the only left element
        // from q1
        q1.pop();
        curr_size--;

        // swap the names of two queues
        queue<int> q = q1;
        q1 = q2;
        q2 = q;
    }

    void push(int x)
    {
        q1.push(x);
        curr_size++;
    }

    int top()
    {
        if (q1.empty())
            return -1;

        while (q1.size() != 1) {
            q2.push(q1.front());
            q1.pop();
        }

        // last pushed element
        int temp = q1.front();

        // to empty the auxiliary queue after
        // last operation
        q1.pop();

        // push last element to q2
        q2.push(temp);

        // swap the two queues names
        queue<int> q = q1;
        q1 = q2;
        q2 = q;
        return temp;
    }

    int size()
    {
        return curr_size;
    }
};

// Driver code
int main()
{
    Stack s;
    s.push(1);
    s.push(2);
    s.push(3);
    s.push(4);

    cout << "current size: " << s.size() << endl;
    cout << s.top() << endl;
    s.pop();
    cout << s.top() << endl;
    s.pop();
    cout << s.top() << endl;

    cout << "current size: " << s.size() << endl;
    return 0;
}
// This code is contributed by Chhavi

Java

/* Java Program to implement a stack
using two queues */
import java.util.*;

class Stack {
    Queue<Integer> q1 = new LinkedList<>(),
                   q2 = new LinkedList<>();
    int curr_size;

    public Stack()
    {
        curr_size = 0;
    }

    void remove()
    {
        if (q1.isEmpty())
            return;

        // Leave one element in q1 and
        // push others in q2.
        while (q1.size() != 1) {
            q2.add(q1.peek());
            q1.remove();
        }

        // Pop the only left element
        // from q1
        q1.remove();
        curr_size--;

        // swap the names of two queues
        Queue<Integer> q = q1;
        q1 = q2;
        q2 = q;
    }

    void add(int x)
    {
        q1.add(x);
        curr_size++;
    }

    int top()
    {
        if (q1.isEmpty())
            return -1;

        while (q1.size() != 1) {
            q2.add(q1.peek());
            q1.remove();
        }

        // last pushed element
        int temp = q1.peek();

        // to empty the auxiliary queue after
        // last operation
        q1.remove();

        // push last element to q2
        q2.add(temp);

        // swap the two queues names
        Queue<Integer> q = q1;
        q1 = q2;
        q2 = q;
        return temp;
    }

    int size()
    {
        return curr_size;
    }

    // Driver code
    public static void main(String[] args)
    {
        Stack s = new Stack();
        s.add(1);
        s.add(2);
        s.add(3);
        s.add(4);

        System.out.println("current size: " + s.size());
        System.out.println(s.top());
        s.remove();
        System.out.println(s.top());
        s.remove();
        System.out.println(s.top());

        System.out.println("current size: " + s.size());
    }
}
// This code is contributed by Princi Singh

Python3

# Program to implement a stack using
# two queues
from queue import Queue

class Stack:

    def __init__(self):

        # Two inbuilt queues
        self.q1 = Queue()
        self.q2 = Queue()

        # To maintain current number
        # of elements
        self.curr_size = 0

    def push(self, x):
        self.q1.put(x)
        self.curr_size += 1

    def pop(self):

        # if no elements are there in q1
        if (self.q1.empty()):
            return

        # Leave one element in q1 and push others in q2
        while (self.q1.qsize() != 1):
            self.q2.put(self.q1.get())

        # Pop the only left element from q1
        popped = self.q1.get()
        self.curr_size -= 1

        # swap the names of two queues
        self.q = self.q1
        self.q1 = self.q2
        self.q2 = self.q

    def top(self):

        # if no elements are there in q1
        if (self.q1.empty()):
            return

        # Leave one element in q1 and push others in q2
        while (self.q1.qsize() != 1):
            self.q2.put(self.q1.get())

        # Move the only left element from q1 to q2
        top = self.q1.queue[0]
        self.q2.put(self.q1.get())

        # swap the names of two queues
        self.q = self.q1
        self.q1 = self.q2
        self.q2 = self.q

        return top

    def size(self):
        return self.curr_size

# Driver Code
if __name__ == '__main__':
    s = Stack()
    s.push(1)
    s.push(2)
    s.push(3)
    s.push(4)

    print("current size: ", s.size())
    print(s.top())
    s.pop()
    print(s.top())
    s.pop()
    print(s.top())
    print("current size: ", s.size())

# This code is contributed by jainlovely450

C#

using System;
using System.Collections;

class GfG {

    public class Stack {
        public Queue q1 = new Queue();
        public Queue q2 = new Queue();

        // Just enqueue the new element to q1
        public void Push(int x) => q1.Enqueue(x);

        // Move all elements from q1 to q2 except the rear of q1.
        // Store the rear of q1, swap q1 and q2,
        // and return the stored result.
        public int Pop()
        {
            if (q1.Count == 0)
                return -1;
            while (q1.Count > 1) {
                q2.Enqueue(q1.Dequeue());
            }
            int res = (int)q1.Dequeue();
            Queue temp = q1;
            q1 = q2;
            q2 = temp;
            return res;
        }

        public int Size() => q1.Count;

        public int Top()
        {
            if (q1.Count == 0)
                return -1;
            while (q1.Count > 1) {
                q2.Enqueue(q1.Dequeue());
            }
            int res = (int)q1.Dequeue();
            q2.Enqueue(res);
            Queue temp = q1;
            q1 = q2;
            q2 = temp;
            return res;
        }
    };

    public static void Main(String[] args)
    {
        Stack s = new Stack();
        s.Push(1);
        Console.WriteLine("Size of Stack: " + s.Size() + "\tTop : " + s.Top());
        s.Push(7);
        Console.WriteLine("Size of Stack: " + s.Size() + "\tTop : " + s.Top());
        s.Push(9);
        Console.WriteLine("Size of Stack: " + s.Size() + "\tTop : " + s.Top());
        s.Pop();
        Console.WriteLine("Size of Stack: " + s.Size() + "\tTop : " + s.Top());
        s.Pop();
        Console.WriteLine("Size of Stack: " + s.Size() + "\tTop : " + s.Top());
        s.Push(5);
        Console.WriteLine("Size of Stack: " + s.Size() + "\tTop : " + s.Top());
    }
}
// Submitted by Sakti Prasad

// Size of Stack: 1    Top : 1
// Size of Stack: 2    Top : 7
// Size of Stack: 3    Top : 9
// Size of Stack: 2    Top : 7
// Size of Stack: 1    Top : 1
// Size of Stack: 2    Top : 5

Output:
current size: 4
4
3
2
current size: 2

References: Implement Stack using Two Queues
This article is compiled by Sumit Jain and reviewed by the GeeksforGeeks team.
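For completeness, there is also a well-known single-queue variant that the original article does not cover: after enqueueing a new element, rotate all the older elements behind it. The sketch below is our own; it uses `collections.deque` but restricts itself to queue operations only (`append` as enqueue, `popleft` as dequeue).

```python
from collections import deque

class QueueStack:
    """Stack built on ONE queue: push is O(n), pop/top are O(1).
    Only queue-style operations are used: append (enqueue),
    popleft (dequeue), and peeking at the front."""

    def __init__(self):
        self.q = deque()

    def push(self, x):
        self.q.append(x)
        # Rotate the older elements behind x so x ends up at the front.
        for _ in range(len(self.q) - 1):
            self.q.append(self.q.popleft())

    def pop(self):
        # Front of the queue is the top of the stack.
        return self.q.popleft()

    def top(self):
        return self.q[0]

    def size(self):
        return len(self.q)

s = QueueStack()
for v in (1, 2, 3):
    s.push(v)
print(s.pop(), s.pop(), s.top())  # 3 2 1
```

This trades the two-queue swap for an in-place rotation, giving the same costly-push behavior as Method 1 with half the storage.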
GATE | GATE-CS-2017 (Set 1) | Question 32 - GeeksforGeeks
28 Jun, 2021

The n-bit fixed-point representation of an unsigned real number X uses f bits for the fraction part. Let i = n - f. The range of decimal values for X in this representation is

(A) 2^(-f)
(B) 2^(-f) to (2^i - 2^(-f))
(C) 0 to 2^(-i)
(D) 0 to (2^i - 2^(-f))

Answer: (D)

Explanation: Since the given number is in unsigned representation, its decimal value starts from 0. There are i bits in the integral part, so the integral value ranges from 0 to 2^i - 1. Each fraction bit contributes a term of the form (1/0) * 2^(-f), so with f bits the maximum fractional value is the sum of the GP series with a = 1/2 and r = 1/2:

f_max = (1/2) * (1 - (1/2)^f) / (1 - 1/2) = 1 - 2^(-f)

Thus the maximum value possible is 2^i - 1 + (1 - 2^(-f)) = 2^i - 2^(-f), and the range of X is 0 to 2^i - 2^(-f).

This solution is contributed by Abhishek Kumar.
Quiz of this Question
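The range can be sanity-checked by brute force for small n: enumerate every n-bit pattern, interpret it as an unsigned fixed-point value, and look at the extremes. The helper name `fixed_point_range` below is our own.

```python
# Enumerate every n-bit unsigned fixed-point value (f fraction bits)
# and confirm the range 0 .. 2^i - 2^-f, where i = n - f.
def fixed_point_range(n, f):
    i = n - f
    # k is the raw bit pattern; its value is k * 2^-f
    values = [k / (2 ** f) for k in range(2 ** n)]
    return min(values), max(values)

lo, hi = fixed_point_range(n=6, f=2)  # i = 4 integer bits, 2 fraction bits
print(lo, hi)                         # 0.0 15.75, i.e. 0 .. 2^4 - 2^-2
```

With n = 6 and f = 2 the maximum is 63/4 = 15.75 = 2^4 - 2^-2, matching option (D).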
How to change SVG color ? - GeeksforGeeks
19 May, 2020

What is the SVG element? SVG stands for Scalable Vector Graphics, a vector image format for two-dimensional graphics that can be used to create animations. The <svg> element is a container that defines a new coordinate system. An SVG document is defined in the XML format.

Significance of SVG documents:

In today's world, SVG has made browser animations easier and handy.
It is used for making 2D animations and graphics.
An SVG document can be used to create 2D games in an HTML document.
It has different methods to draw lines and shapes such as circles, rectangles, and paths.
It is resolution-independent and also supports event handling in the document.

Syntax: <svg></svg>

Example 1: In this example, we will use the SVG element to draw a rectangle and color it.

<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>
        How to change SVG color?
    </title>
</head>

<body style="text-align: center;">
    <h1 style="color: green;">
        GeeksforGeeks
    </h1>
    <div>
        <svg>
            <rect height="300" width="500" style="fill:#060" />
        </svg>
    </div>
</body>

</html>

Output:

Example 2: In this example, we will use the SVG element to draw a circle and color it.

<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>
        How to change SVG color?
    </title>
</head>

<body style="text-align: center;">
    <h1 style="color: green;">
        GeeksforGeeks
    </h1>
    <div>
        <svg height="1000" width="500">
            <circle cx="250" cy="120" r="80" stroke="#000"
                    stroke-width="5" style="fill:#060" />
        </svg>
    </div>
</body>

</html>

Output:
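Besides the inline `style` attribute used in the examples above, an SVG shape's color can also be changed from a stylesheet via the `fill` property. The sketch below is our own addition; the class name `gfg-green` is an arbitrary choice, not part of the original examples.

```html
<!DOCTYPE html>
<html>

<head>
    <style>
        /* Any class name works; .gfg-green is our own choice */
        .gfg-green { fill: #060; }

        /* fill participates in CSS like any other property,
           so the color can change on hover */
        .gfg-green:hover { fill: #090; }
    </style>
</head>

<body>
    <svg height="200" width="300">
        <circle class="gfg-green" cx="150" cy="100" r="80" />
    </svg>
</body>

</html>
```

This approach keeps the color out of the markup, so one rule can recolor many shapes at once.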
SELECT Operation in Relational Algebra - GeeksforGeeks
27 May, 2020

Prerequisite – Relational Algebra

The SELECT operation chooses the subset of tuples from a relation that satisfy the condition given in its syntax. The selection operation is also known as horizontal partitioning, since it partitions the table (relation) horizontally.

Notation: σ c(R)

where c is the selection condition, a boolean expression; σ (sigma) denotes the select operator; and R is a relational algebra expression whose result is a relation. The condition can be a single comparison like Roll = 3 or a combination of conditions like X > 2 AND Y < 1.

The boolean expression specified in condition c can be written in the following form:

<attribute name> <comparison operator> <constant value>
or
<attribute name> <comparison operator> <attribute name>

where <attribute name> is the name of an attribute of the relation, <comparison operator> is any of the operators {<, >, =, <=, >=, !=}, and <constant value> is a constant value taken from the domain of the attribute.

Example-1:

σ Place = 'Mumbai' OR Salary >= 1000000 (Citizen)

σ Department = 'Analytics'(σ Location = 'NewYork'(Manager))

The second query above is called a nested expression. Here, as usual, we evaluate the inner expression first (which results in a relation, say Manager1), then apply the outer expression to Manager1; the result is again a relation, an instance of the same kind of object we started with.

Example-2: Given a relation Student(Roll, Name, Class, Fees, Team) with the following tuples:

Select all the students of Team A:

σ Team = 'A' (Student)

Select all the students of department ECE whose fees are greater than or equal to 10000 and who belong to a team other than A:

σ Fees >= 10000 (σ Class = 'ECE' (σ Team != 'A' (Student)))

Important points about the SELECT operation:

The select operator is unary, meaning it is applied to a single relation only.

The selection operation is commutative, that is, σ c1(σ c2(R)) = σ c2(σ c1(R)).

The degree (number of attributes) of the relation resulting from a selection is the same as the degree of the given relation.

The cardinality (number of tuples) of the relation resulting from a selection satisfies 0 <= |σ c(R)| <= |R|.
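The commutativity property can be illustrated with a small sketch (not part of the original article) that models a relation as a list of dicts; the helper name `select` and the sample data are our own.

```python
# Selection over a relation modelled as a list of dicts.
def select(condition, relation):
    # sigma_condition(relation): keep tuples satisfying the condition
    return [t for t in relation if condition(t)]

student = [
    {"Roll": 1, "Name": "A", "Class": "ECE", "Fees": 12000, "Team": "B"},
    {"Roll": 2, "Name": "B", "Class": "ECE", "Fees": 9000,  "Team": "A"},
    {"Roll": 3, "Name": "C", "Class": "CSE", "Fees": 15000, "Team": "A"},
]

c1 = lambda t: t["Fees"] >= 10000   # sigma Fees >= 10000
c2 = lambda t: t["Team"] != "A"     # sigma Team != 'A'

# Commutativity: sigma_c1(sigma_c2(R)) == sigma_c2(sigma_c1(R))
assert select(c1, select(c2, student)) == select(c2, select(c1, student))

print(select(c1, select(c2, student)))  # only Roll 1 satisfies both
```

Either nesting order filters down to the same subset, which is exactly what the commutativity rule states.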
Big Data: Its Benefits, Challenges, and Future | by Benedict Neo | Towards Data Science
Full series
Part 1 - What is Data Science, Big Data and the Data Science process
Part 2 - The origin of R, why use R, R vs Python and resources to learn
Part 3 - Version Control, Git & GitHub and best practices for sharing code
Part 4 - The 6 types of data analysis
Part 5 - The ability to design experiments to answer your DS questions
Part 6 - P-value & P-hacking
Part 7 - Big Data, its benefits, challenges, and future

This series is based on the Data Science Specialization offered by Johns Hopkins University on Coursera. The articles in this series are notes based on the course, with additional research and topics for my own learning purposes. For the first course, the Data Scientist's Toolbox, the notes will be separated into 7 parts. Notes on the series can also be found here.

Before the internet, information was in some ways restricted and more centralized. The only mediums of information were books, newspapers, word of mouth, and the like. But with the advent of the internet and improvements to computer technology (Moore's Law), information and data skyrocketed, and the internet became an open system where information can be distributed to people without any kind of limit.

As the internet became more accessible and worldwide, social mobile applications and websites gradually grew to become platforms for sharing data. Data, along with many other things, grows in value as it grows in size. This value is applied in many ways, but mostly for analytics and decision making. Here's more about Big Data.

Big Data can be defined as large amounts of data, both structured and unstructured, usually stored in the cloud or in data centers, which is then utilized by companies, organizations, startups, and even governments for different purposes. Utilizing data means cleaning it and then analyzing it, finding patterns, connections, trends, and correlations to produce insights. This is what's called Big Data analytics.
Big Data is also commonly described by its qualities, known as the 4 Vs:

Volume: insurmountable amounts of data due to improvements in technology and data storage (cloud storage, better processes, etc.)
Velocity: data is generated at astonishing rates, related to computers' increasing speed and capability (Moore's Law)
Variety: a wide range of data of different formats and types, easily collected in an era of social media and the internet
Veracity: inconsistencies and uncertainty in data (unstructured data: images, social media, video, etc.)

A brief explainer on structured and unstructured data:

Structured data is traditional data: tables, spreadsheets, databases with columns and rows, CSV and Excel, etc. This is rarely how data looks today; it is much messier, and the job is to extract information and corral it into something tidy and structured.

Unstructured data comes from the proliferation of digital interactions: email, social media, text, customer habits, smartphones, GPS, websites, activity, video, facial recognition, and so on. Big Data brings new tools and approaches to utilize this new data, including cleaning and analysis of unstructured data.

There are a few popular tools commonly associated with Big Data analytics:

Hadoop
Apache Spark
Apache Hive
SAS

Most of these tools are open-source frameworks for handling huge amounts of data efficiently, with helpful features for doing so.

R
Python
Scala

These languages are very popular in the data science world and can be used for handling large amounts of data through specific libraries and packages.

Big Data can be seen in many places today. One prevalent example is online retailers. Companies like Amazon are centered on building accurate recommender systems that tailor to their customers: the better the system, the more products their customers will be interested in, which then translates to more sales. To do this, Amazon needs tons of data: information like purchasing behavior, browsing and cart history, demographics, etc.
Recommender systems that build a profile of users are also seen in social media, streaming services, and many more.

Big Data is also applied in many sectors: healthcare, manufacturing, the public sector, media & entertainment, etc.

Benefits, one per V:

Volume — some questions benefit from huge amounts of data; the sheer volume of data negates small messiness or inaccuracies.
Velocity — real-time information lets you make swift decisions based on updated and informed predictions.
Variety — the ability to ask new questions and form new connections, questions that were previously inaccessible.
Veracity — messy and unstructured data give rise to the possibility of hidden correlations.

Perhaps the most promising benefit of more data is identifying hidden correlations.

Examples: A popular language model that uses Deep Learning has 175 billion parameters; it was built by eating up data from the internet to discover patterns and correlations, and it is capable of writing snippets of code.

The concept of Big Data can be applied to the pandemic situation as well: by collecting data on the whereabouts of people (interactions and visited locations) with contact tracing, analytics can be done to predict the spread of the virus and help contain it.

Having lots of data has its benefits, but it doesn't come without challenges:

Lots of raw data to store and analyze
Expensive, and requires good computing investment
Data is constantly changing and fluctuating; systems built to handle it have to be adaptive
Difficult to determine which sources of data are useful
Not able to quickly analyze; need to clean the data first

Big Data is commonly associated with other buzzwords like Machine Learning, Data Science, AI, Deep Learning, etc. Since these fields require data, Big Data will continue to play a huge role in improving the models we have now and will allow for advancements in research.
Take Tesla, for example: each Tesla car with self-driving capability is at the same time training Tesla's AI model, continually improving it with each mistake. This huge siphoning of data, along with a team of talented engineers, is what makes Tesla the best at the self-driving game.

As data continues to expand and grow, cloud storage providers like AWS, Microsoft Azure and Google Cloud will rule in storing big data. This allows room for scalability and efficiency for companies. It also means more and more people will be hired to handle these data, which translates to more job opportunities for "data officers" to manage the database of a company.

The future of Big Data also has its dark sides. As you know, many tech companies are facing heat from governments and the public due to issues of privacy and data. Laws that govern the rights of the people to their data will make data collection more restricted, albeit more honest. In the same vein, the proliferation of data online also exposes us to cyberattacks, and data security will be incredibly important.

Many big tech companies today are receiving tons of data from their users, and when it comes down to profit and power versus the greater good of society, it's human nature to go for the former, especially if you're in a position to choose. We live in times where our attention is being capitalized on constantly. We must live smarter and act rationally to prevent surrendering our lives to these short bursts of dopamine and expedient and trivial acts.

We can only hope that as we progress into the upcoming decades, the decisions made by the people in control of these companies will be for the betterment of society and civilization as a whole, and that our data will be used for building systems that serve us and make us more productive — products that, instead of looking for ways to grab our attention, provide value and meaning to our lives.
A quick summary of everything so far:

Big data — a large dataset of diverse types that is generated rapidly. There is a transition in data from structured to unstructured.

Unstructured Data — Data that isn't clearly defined and requires more work to process (images, audio, social media likes, etc.)

Structured Data — Known as traditional data, as it is rare in real life; basically data that is clearly defined and easy to process. It is the data scientist's job to clean messy data and form tidy, structured data.

Pros & Cons in terms of the 4Vs:

Volume — Pro: slightly messy data negates small errors. Con: a lot to store and analyze.
Variety — Pro: answer unconventional questions. Con: the burden of choosing the type.
Velocity — Pro: real-time analysis and decision making. Con: constantly updating.
Veracity — Pro: hidden correlations. Con: analyzing messy data.

One important lesson you should take away is that even with huge sums of data, you still need the right ones with the right variables to correctly answer your question. A quote by John Tukey, a famous American mathematician, puts that lesson nicely:

"The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data" — John Tukey, 1986

And another quote by Atul Butte of Stanford on the hidden capabilities of data:

"Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world."

Thanks for reading, and that concludes the series. I hope you learned something from the articles. Do leave comments about how I can improve, and if you have any suggestions on what I should write next. Stay safe and God bless.

If you want to be updated with my latest articles, follow me on Medium. Follow my other social profiles too! Be on the lookout for my next article, and remember to stay safe!
CSS Padding
Padding is used to create space around an element's content, inside of any defined borders.

The CSS padding properties are used to generate space around an element's content, inside of any defined borders. With CSS, you have full control over the padding. There are properties for setting the padding for each side of an element (top, right, bottom, and left).

CSS has properties for specifying the padding for each side of an element:

padding-top
padding-right
padding-bottom
padding-left

All the padding properties can have the following values:

length - specifies a padding in px, pt, cm, etc.
% - specifies a padding in % of the width of the containing element
inherit - specifies that the padding should be inherited from the parent element

Note: Negative values are not allowed.

Example: Set different padding for all four sides of a <div> element.

To shorten the code, it is possible to specify all the padding properties in one property. The padding property is a shorthand property for the individual padding-top, padding-right, padding-bottom, and padding-left properties.

So, here is how it works:

If the padding property has four values:

padding: 25px 50px 75px 100px;

top padding is 25px
right padding is 50px
bottom padding is 75px
left padding is 100px

Use the padding shorthand property with four values.

If the padding property has three values:

padding: 25px 50px 75px;

top padding is 25px
right and left paddings are 50px
bottom padding is 75px

Use the padding shorthand property with three values.

If the padding property has two values:

padding: 25px 50px;

top and bottom paddings are 25px
right and left paddings are 50px

Use the padding shorthand property with two values.

If the padding property has one value:

padding: 25px;

all four paddings are 25px

Use the padding shorthand property with one value.

The CSS width property specifies the width of the element's content area. The content area is the portion inside the padding, border, and margin of an element (the box model).

So, if an element has a specified width, the padding added to that element will be added to the total width of the element. This is often an undesirable result.

Here, the <div> element is given a width of 300px. However, the actual width of the <div> element will be 350px (300px + 25px of left padding + 25px of right padding).

To keep the width at 300px, no matter the amount of padding, you can use the box-sizing property. This causes the element to maintain its actual width; if you increase the padding, the available content space will decrease.

Use the box-sizing property to keep the width at 300px, no matter the amount of padding.

More examples:

Set the left padding - This example demonstrates how to set the left padding of a <p> element.
Set the right padding - This example demonstrates how to set the right padding of a <p> element.
Set the top padding - This example demonstrates how to set the top padding of a <p> element.
Set the bottom padding - This example demonstrates how to set the bottom padding of a <p> element.
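As an illustrative aside (not part of the original tutorial), the shorthand expansion rules above can be captured in a few lines of Python — the function name and structure are my own:

```python
def expand_padding(shorthand):
    """Expand a CSS padding shorthand into (top, right, bottom, left)."""
    values = shorthand.split()
    if len(values) == 1:        # padding: 25px;
        top = right = bottom = left = values[0]
    elif len(values) == 2:      # padding: 25px 50px;
        top, right = values
        bottom, left = top, right
    elif len(values) == 3:      # padding: 25px 50px 75px;
        top, right, bottom = values
        left = right            # left mirrors right
    elif len(values) == 4:      # padding: 25px 50px 75px 100px;
        top, right, bottom, left = values
    else:
        raise ValueError("padding accepts 1 to 4 values")
    return (top, right, bottom, left)

print(expand_padding("25px 50px 75px"))  # ('25px', '50px', '75px', '50px')
```

The margin shorthand follows exactly the same expansion order (top, right, bottom, left, clockwise from the top).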
Write a Python program to trim the minimum and maximum threshold value in a dataframe
Assume you have a dataframe; the results for trimming at the minimum and the maximum threshold values are:

minimum threshold:
   Column1  Column2
0       30       30
1       34       30
2       56       30
3       78       50
4       30       90

maximum threshold:
   Column1  Column2
0       12       23
1       34       30
2       50       25
3       50       50
4       28       50

clipped dataframe is:
   Column1  Column2
0       30       30
1       34       30
2       50       30
3       50       50
4       30       50

To solve this, we will follow the steps given below −

Define a dataframe.

Apply the df.clip function with lower=30 to calculate the minimum threshold values:

df.clip(lower=30)

Apply the df.clip function with upper=50 to calculate the maximum threshold values:

df.clip(upper=50)

Apply both the min and max threshold limits to get the clipped dataframe:

df.clip(lower=30, upper=50)

Let's check the following code to get a better understanding −

import pandas as pd

data = {"Column1": [12, 34, 56, 78, 28],
        "Column2": [23, 30, 25, 50, 90]}
df = pd.DataFrame(data)
print("DataFrame is:\n", df)
print("minimum threshold:\n", df.clip(lower=30))
print("maximum threshold:\n", df.clip(upper=50))
print("clipped dataframe is:\n", df.clip(lower=30, upper=50))

Output:

DataFrame is:
   Column1  Column2
0       12       23
1       34       30
2       56       25
3       78       50
4       28       90
minimum threshold:
   Column1  Column2
0       30       30
1       34       30
2       56       30
3       78       50
4       30       90
maximum threshold:
   Column1  Column2
0       12       23
1       34       30
2       50       25
3       50       50
4       28       50
clipped dataframe is:
   Column1  Column2
0       30       30
1       34       30
2       50       30
3       50       50
4       30       50
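As a small extension of the above (the exact bounds here are chosen purely for illustration), clip() also accepts array-like thresholds, so each column can get its own limits via the axis argument:

```python
import pandas as pd

data = {"Column1": [12, 34, 56, 78, 28],
        "Column2": [23, 30, 25, 50, 90]}
df = pd.DataFrame(data)

# One (lower, upper) pair per column, aligned on the columns axis
lower = pd.Series({"Column1": 20, "Column2": 30})
upper = pd.Series({"Column1": 60, "Column2": 50})

clipped = df.clip(lower=lower, upper=upper, axis=1)
print(clipped)
```

Here Column1 is clipped to [20, 60] while Column2 is clipped to [30, 50] in a single call.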
How can we fetch alternate odd numbered records from MySQL table?
To understand this concept, we are using the data from the table 'Information' as follows −

mysql> Select * from Information;
+----+---------+
| id | Name    |
+----+---------+
|  1 | Gaurav  |
|  2 | Ram     |
|  3 | Rahul   |
|  4 | Aarav   |
|  5 | Aryan   |
|  6 | Krishan |
+----+---------+
6 rows in set (0.00 sec)

Now, the query below will fetch the alternate odd-numbered records from the above table 'Information' −

mysql> Select id,Name from information group by id having mod(id,2) = 1;
+----+--------+
| id | Name   |
+----+--------+
|  1 | Gaurav |
|  3 | Rahul  |
|  5 | Aryan  |
+----+--------+
3 rows in set (0.09 sec)
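The same filtering idea carries over to other engines. As a hedged sketch (my own, using an equivalent WHERE form rather than GROUP BY/HAVING), here it is reproduced with Python's built-in sqlite3 module — note that SQLite spells the modulo operation with the % operator instead of MOD():

```python
import sqlite3

# In-memory database mirroring the 'Information' table above
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Information (id INTEGER, Name TEXT)")
cur.executemany(
    "INSERT INTO Information VALUES (?, ?)",
    [(1, "Gaurav"), (2, "Ram"), (3, "Rahul"),
     (4, "Aarav"), (5, "Aryan"), (6, "Krishan")],
)

# id % 2 = 1 keeps only the odd-numbered records
cur.execute("SELECT id, Name FROM Information WHERE id % 2 = 1 ORDER BY id")
rows = cur.fetchall()
print(rows)  # [(1, 'Gaurav'), (3, 'Rahul'), (5, 'Aryan')]
conn.close()
```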
Jackson - JsonGenerator Class
JsonGenerator is the base class that defines the public API for writing JSON content. Instances are created using factory methods of a JsonFactory instance.

Following is the declaration for the com.fasterxml.jackson.core.JsonGenerator class:

public abstract class JsonGenerator
   extends Object
      implements Closeable, Flushable, Versioned

Field:

protected PrettyPrinter _cfgPrettyPrinter - Object that handles pretty-printing (usually additional white space to make results more human-readable) during output.

This class inherits methods from the following class:

java.lang.Object
Java Equivalent of C++’s upper_bound() Method - GeeksforGeeks
28 Jan, 2022

In this article, we will discuss Java's equivalent implementation of the upper_bound() method of C++. This method is provided with a key value which is searched for in the array. It returns the index of the first element in the array whose value is greater than the key, or the position past the last element if no such element is found. The implementations below will find the upper bound value and its index; otherwise, they will print that the upper bound does not exist.

Illustrations:

Input  : 10 20 30 30 40 50
Output : upper_bound for element 30 is 40 at index 4

Input  : 10 20 30 40 50
Output : upper_bound for element 45 is 50 at index 4

Input  : 10 20 30 40 50
Output : upper_bound for element 60 does not exist

Now let us discuss the methods for using the upper bound idea to get the index of the next greater value:

1. Naive approach (linear search)
2. Iterative binary search
3. Recursive binary search
4. binarySearch() method of the Arrays utility class

Let us discuss each of the above methods in detail, with clean Java programs for each.

Method 1: Using linear search

To find the upper bound using linear search, we will iterate over the array starting from the 0th index until we find a value greater than the key.
Example (Java):

// Java program for finding upper bound
// using linear search

// Importing Arrays utility class
import java.util.Arrays;

// Main class
class GFG {

    // Method 1
    // To find upper bound of given key
    static void upper_bound(int arr[], int key)
    {
        int upperBound = 0;
        while (upperBound < arr.length) {
            // If current value is lesser than or equal to key
            if (arr[upperBound] <= key)
                upperBound++;
            // This value is just greater than key
            else {
                System.out.print("The upper bound of " + key
                                 + " is " + arr[upperBound]
                                 + " at index " + upperBound);
                return;
            }
        }
        System.out.print("The upper bound of " + key
                         + " does not exist.");
    }

    // Method 2
    // Main driver method
    public static void main(String[] args)
    {
        // Custom array input over which upper bound is to
        // be operated by passing a key
        int array[] = { 10, 20, 30, 30, 40, 50 };
        int key = 30;

        // Sort the array using Arrays.sort() method
        Arrays.sort(array);

        // Printing the upper bound
        upper_bound(array, key);
    }
}

Output:

The upper bound of 30 is 40 at index 4

Time Complexity: O(N), where N is the number of elements in the array.
Auxiliary Space: O(1)

To find the upper bound of a key, we search for the key in the array. We can use the more efficient approach of binary search to search the key in the sorted array in O(log2 N), as proposed in the examples below.

Method 2: Using binary search iteratively

Procedure:

1. Sort the array before applying binary search.
2. Initialize low as 0 and high as N.
3. Find the index of the middle element (mid).
4. Compare the key with the middle element (arr[mid]): if the middle element is less than or equal to the key, update low as mid+1; else update high as mid.
5. Repeat steps 3 and 4 while low is less than high.
6. After all the above steps, low is the upper_bound of the key.

Example (Java):

// Java program to Find upper bound
// Using Binary Search Iteratively

// Importing Arrays utility class
import java.util.Arrays;

// Main class
public class GFG {

    // Iterative approach to find upper bound
    // using binary search technique
    static void upper_bound(int arr[], int key)
    {
        int mid, N = arr.length;

        // Initialise starting index and ending index
        int low = 0;
        int high = N;

        // Till low is less than high
        while (low < high && low != N) {

            // Find the index of the middle element
            mid = low + (high - low) / 2;

            // If key is greater than or equal to arr[mid],
            // then find in right subarray
            if (key >= arr[mid]) {
                low = mid + 1;
            }

            // If key is less than arr[mid],
            // then find in left subarray
            else {
                high = mid;
            }
        }

        // If key is greater than the last element, which is
        // array[n-1], then the upper bound
        // does not exist in the array
        if (low == N) {
            System.out.print("The upper bound of " + key
                             + " does not exist.");
            return;
        }

        // Print the upper_bound index
        System.out.print("The upper bound of " + key + " is "
                         + arr[low] + " at index " + low);
    }

    // Driver main method
    public static void main(String[] args)
    {
        int array[] = { 10, 20, 30, 30, 40, 50 };
        int key = 30;

        // Sort the array using Arrays.sort() method
        Arrays.sort(array);

        // Printing the upper bound
        upper_bound(array, key);
    }
}

Output:

The upper bound of 30 is 40 at index 4

A recursive approach following the same procedure is discussed below.

Method 3: Recursive binary search

Example (Java):

// Java program to Find Upper Bound
// Using Binary Search Recursively

// Importing Arrays utility class
import java.util.Arrays;

// Main class
public class GFG {

    // Recursive approach to find upper bound
    // using binary search technique
    static int recursive_upper_bound(int arr[], int low,
                                     int high, int key)
    {
        // Base Case
        if (low > high || low == arr.length)
            return low;

        // Find the value of middle index
        int mid = low + (high - low) / 2;

        // If key is greater than or equal to array[mid],
        // then find in right subarray
        if (key >= arr[mid]) {
            return recursive_upper_bound(arr, mid + 1, high,
                                         key);
        }

        // If key is less than array[mid],
        // then find in left subarray
        return recursive_upper_bound(arr, low, mid - 1, key);
    }

    // Method to find upper bound
    static void upper_bound(int arr[], int key)
    {
        // Initialize starting index and ending index
        int low = 0;
        int high = arr.length;

        // Call recursive upper bound method
        int upperBound
            = recursive_upper_bound(arr, low, high, key);

        if (upperBound == arr.length)
            // Upper bound of the key does not exist
            System.out.print("The upper bound of " + key
                             + " does not exist.");
        else
            System.out.print("The upper bound of " + key
                             + " is " + arr[upperBound]
                             + " at index " + upperBound);
    }

    // Main driver method
    public static void main(String[] args)
    {
        int array[] = { 10, 20, 30, 30, 40, 50 };
        int key = 30;

        // Sorting the array using Arrays.sort() method
        Arrays.sort(array);

        // Printing the upper bound
        upper_bound(array, key);
    }
}

Output:

The upper bound of 30 is 40 at index 4

Method 4: Using the binarySearch() method of the Arrays utility class

We can also use the built-in binary search method of the Arrays utility class (or the Collections utility class). This function returns an index of the key in the array. If the key is present in the array, it will return its index (not guaranteed to be the first index); otherwise, based on the sorted order, it will return the expected position of the key, i.e. (-(insertion point) - 1).
Approach:

1. Sort the array before applying binary search.
2. Search for the index of the key in the sorted array. If it is present, its index is returned as a positive value; otherwise, a negative value is returned which specifies the position at which the key should be inserted in the sorted array.
3. If the key is present in the array, we move rightwards to find the next greater value.
4. Print the upper bound index if present.

Example (Java):

// Java program to find upper bound
// Using binarySearch() method of Arrays class

// Importing Arrays utility class
import java.util.Arrays;

// Main class
public class GFG {

    // Method 1
    // To find upper bound using binary search
    // implementation of Arrays utility class
    static void upper_bound(int arr[], int key)
    {
        int index = Arrays.binarySearch(arr, key);
        int n = arr.length;

        // If key is not present in the array
        if (index < 0) {

            // Index specifies the position of the key
            // when inserted in the sorted array,
            // so the element currently present at
            // this position will be the upper bound
            int upperBound = Math.abs(index) - 1;

            if (upperBound < n)
                System.out.print("The upper bound of " + key
                                 + " is " + arr[upperBound]
                                 + " at index " + upperBound);
            else
                System.out.print("The upper bound of " + key
                                 + " does not exist.");
            return;
        }

        // If key is present in the array
        // we move rightwards to find next greater value
        else {

            // Increment the index until value is equal to key
            while (index < n) {

                // If current value is same
                if (arr[index] == key)
                    index++;

                // Current value is different, which means
                // it is greater than the key
                else {
                    System.out.print("The upper bound of "
                                     + key + " is "
                                     + arr[index]
                                     + " at index " + index);
                    return;
                }
            }
            System.out.print("The upper bound of " + key
                             + " does not exist.");
        }
    }

    // Method 2
    // Main driver method
    public static void main(String[] args)
    {
        int array[] = { 10, 20, 30, 30, 40, 50 };
        int key = 30;

        // Sort the array before applying binary search
        Arrays.sort(array);

        // Printing the upper bound
        upper_bound(array, key);
    }
}

Output:

The upper bound of 30 is 40 at index 4

Note: We can find the index of the middle element via either of the following:

int mid = (high + low) / 2;
int mid = (low + high) >>> 1;

The second form is safer for very large arrays: the unsigned right shift treats the intermediate sum as unsigned, so the midpoint is still correct even if low + high overflows int.
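As a cross-language sanity check (my addition, not part of the original article): Python ships the same upper-bound semantics in its standard library as bisect.bisect_right, which returns the insertion point after any existing entries equal to the key — exactly the index the Java implementations above compute:

```python
from bisect import bisect_right

array = [10, 20, 30, 30, 40, 50]
key = 30

# bisect_right returns the first index whose value is greater than key
idx = bisect_right(array, key)
if idx < len(array):
    print(f"The upper bound of {key} is {array[idx]} at index {idx}")
else:
    print(f"The upper bound of {key} does not exist.")
# -> The upper bound of 30 is 40 at index 4
```

By contrast, bisect_left corresponds to C++'s lower_bound(): it returns the first index whose value is greater than or equal to the key.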
What are JSTL formatting tags in JSP?
The JSTL formatting tags are used to format and display text, the date, the time, and numbers for internationalized websites. Following is the syntax to include the formatting library in your JSP −

<%@ taglib prefix = "fmt" uri = "http://java.sun.com/jsp/jstl/fmt" %>

The following table lists out the formatting JSTL tags −
Introduction to backtesting trading strategies | by Eryk Lewinson | Towards Data Science
In this article, I would like to continue the series on quantitative finance. In the first article, I described the stylized facts of asset returns. Now I would like to introduce the concept of backtesting trading strategies and how to do it using existing frameworks in Python.

Let's start with a trading strategy. It can be defined as a method (based on predefined rules) of buying and/or selling assets in markets. These rules can be based on, for example, technical analysis or machine learning models.

Backtesting is basically evaluating the performance of a trading strategy on historical data — if we had used a given strategy on a set of assets in the past, how well or badly would it have performed? Of course, there is no guarantee that past performance is indicative of the future one, but we can still investigate!

There are a few available frameworks for backtesting in Python; in this article, I decided to use zipline. Some of the nice features offered by the zipline environment include:

ease of use — there is a clear structure of how to build a backtest and what outcome we can expect, so the majority of the time can be spent on developing state-of-the-art trading strategies :)
realistic — includes transaction costs, slippage, order delays, etc.
stream-based — processes each event individually, thus avoiding look-ahead bias
it comes with many easily accessible statistical measures, such as moving average, linear regression, etc. — no need to code them from scratch
integration with the PyData ecosystem — zipline uses Pandas DataFrames for storing input data, as well as performance metrics
it is easy to integrate other libraries, such as matplotlib, scipy, statsmodels and sklearn, into the workflow of building and evaluating strategies
developed and updated by Quantopian, which provides a web interface for zipline, historical data and even live-trading capabilities

I believe these arguments speak for themselves. Let's start coding!
The most convenient way to install zipline is to use a virtual environment. In this article, I use conda to do so. I create a new environment with Python 3.5 (I experienced issues with using 3.6 or 3.7) and then install zipline. You can also pip install it.

# create new virtual environment
conda create -n env_zipline python=3.5
# activate it
conda activate env_zipline
# install zipline
conda install -c Quantopian zipline

For everything to be working properly, you should also install jupyter and other packages used in this article (see the watermark printout below).

First, we need to load the IPython extensions using the %load_ext magic.

%load_ext watermark
%load_ext zipline

Then we import the rest of the libraries:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import zipline
from yahoofinancials import YahooFinancials
import warnings

plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = [16, 9]
plt.rcParams['figure.dpi'] = 200
warnings.simplefilter(action='ignore', category=FutureWarning)

Below you can see the list of libraries used in this article, together with their versions.

zipline comes ready with data downloaded from Quandl (the WIKI database). You can always inspect the already-ingested data by running:

!zipline bundles

The issue with this approach is that in mid-2018 the data was discontinued, so there is no data for the last year. To overcome this, I show how to manually ingest data from any source. To do so, I use the yahoofinancials library. In order to be loaded into zipline, the data must be in a CSV file and in a predefined format — like the one in the preview of the DataFrame. We then need to save the data as a CSV file in a folder called 'daily' (or another folder of your choice).

df.to_csv('daily/aapl.csv', header=True, index=False)

In the next step, we need to modify the extension.py file located in the zipline directory.
After the installation of zipline, it is empty and we need to add the following:

We can also define and provide a custom calendar to the data-ingesting script — for example, when working with European stocks. For details on how to do it, please look at the documentation.

In contrast to the data-downloading function, we need to pass the exact range of dates of the downloaded data. In this example, we start with 2017-01-03, as this is the first day for which we have pricing data.

Lastly, we run the following command:

!zipline ingest -b apple-prices-2017-2019

We can verify that the bundle was successfully ingested:

!zipline bundles

There is a known issue with downloading the benchmark data, so — for now — we stick to historical data in the default bundle. However, you now know how to ingest data using a custom CSV file. For detailed information on how to load custom data using the csvdir bundle, please refer to this article, in which I show how to import European stocks data and run basic strategies on their basis.

We start with the most basic strategy — Buy and Hold. The idea is that we buy a certain asset and do not do anything for the entire duration of the investment horizon. This simple strategy can also be considered a benchmark for more advanced ones, because there is no point in using a very complex strategy that generates less money (for example, due to transaction costs) than buying and doing nothing.

In this example, we consider Apple's stock and select the years 2016–2017 as the duration of the backtest. We start with a capital of 1050 USD. I selected this number as I know roughly how much we need for the initial purchase, and I like to keep this number as small as possible, because we are only buying 10 shares, so there is no need for a starting balance of a couple of thousand. We assume the default transaction costs (0.001$ per share, without a minimum cost per trade).
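Before looking at the execution details, here is a rough, framework-free sketch of the Buy and Hold logic. This is plain Python with a stubbed order function — it only mimics the shape of a zipline algorithm (the real one is run via zipline and uses attribute access on context), not the actual backtest cell:

```python
# Stub standing in for zipline's order(), just to make the shape runnable
placed_orders = []

def order(asset, amount):
    placed_orders.append((asset, amount))

def initialize(context):
    # called once, before the backtest starts
    context["has_ordered"] = False

def handle_data(context, data):
    # called once per trading bar
    if not context["has_ordered"]:
        order("AAPL", 10)            # buy 10 shares on the first bar...
        context["has_ordered"] = True  # ...then hold for the rest

# simulate a few trading bars
context = {}
initialize(context)
for _ in range(5):
    handle_data(context, data=None)

print(placed_orders)  # a single order: [('AAPL', 10)]
```

The point of the sketch is the control flow: one order on the first bar, then nothing, which is exactly what makes it a Buy and Hold strategy.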
There are two approaches to using zipline — using the command line or a Jupyter Notebook. To use the latter, we have to write the algorithm within a Notebook cell and indicate that zipline is supposed to run it. This is done via the %%zipline IPython magic command. This magic takes the same arguments as the CLI mentioned above. Also, one important thing: all imports required for the algorithm to run (such as numpy, sklearn, etc.) must be specified in the algorithm cell, even if they were previously imported elsewhere.

Congrats, we have written our first backtest. So what actually happened?

Each zipline algorithm contains (at least) two functions we have to define:

initialize(context)
handle_data(context, data)

Before the algorithm starts, the initialize() function is called and a context variable is passed. context is a global variable in which we can store additional variables we need to access from one iteration of the algorithm to the next.

After the initialization of the algorithm, the handle_data() function is called once for each event. At every call, it passes the same context variable and an event frame called data. It contains the current trading bar with open, high, low, and close (OHLC) prices together with the volume.

We create an order by using order(asset, number_of_units), where we specify what to buy and how many shares/units. A positive number indicates buying that many shares, 0 means selling everything we have, and a negative number is used for short-selling. Another useful type of order is order_target, which orders as many shares as needed to achieve the desired number in the portfolio.

In our Buy and Hold strategy, we check if we have already placed an order. If not, we order a given amount of shares and then do nothing for the rest of the backtest.

Let's analyze the performance of the strategy. First, we need to load the performance DataFrame from the pickle file.
# read the performance summary dataframe
buy_and_hold_results = pd.read_pickle('buy_and_hold.pkl')

And now we can plot some of the stored metrics. From the first look, we see that the portfolio generated money over the investment horizon and was very much following the price of Apple (which makes sense, as it is the only asset in the portfolio).

To view the transactions, we need to transform the transactions column from the performance DataFrame.

pd.DataFrame.from_records([x[0] for x in buy_and_hold_results.transactions.values if x != []])

By inspecting the columns of the performance DataFrame, we can see all the available metrics.

buy_and_hold_results.columns

Some of the noteworthy ones:

* starting/ending cash β€” inspecting the cash holdings on a given day
* starting/ending value β€” inspecting the assets' value on a given day
* orders β€” used for inspecting orders; there are different events for creating an order when the trading strategy generates a signal, and a separate one when it is actually executed on the next trading day
* pnl β€” daily profit and loss

The second strategy we consider is based on the simple moving average (SMA). The β€˜mechanics’ of the strategy can be summarized by the following:

* when the price crosses the 20-day SMA upwards β€” buy x shares
* when the price crosses the 20-day SMA downwards β€” sell the shares
* we can only have a maximum of x shares at any given time
* there is no short-selling in the strategy (though it can be easily implemented)

The remaining components of the backtest, like the considered asset, investment horizon or the starting capital, are the same as in the Buy and Hold example.

The code for this algorithm is a little bit more complex, but we cover all the new aspects of the code. For simplicity, I marked the points of reference in the code snippet above (sma_strategy.py) and will refer to them by number below.

1. I show how to manually set the commission. In this case, I use the default value for comparison’s sake.

2. The β€œwarm-up period” β€” this is a trick used in order to make sure that the algorithm has enough days to calculate the moving average. If we are using multiple metrics with different window lengths, we should always take the longest one for the warm-up.

3. We access the historical (and current) data-points by using data.history. In this example, we access the last 20 days. The data (in the case of a single asset) is stored as a pandas.Series, indexed by time.

4. The SMA is a very basic measure, so for the calculation, I simply take the mean of the previously accessed data.

5. I encapsulate the logic of the trading strategy in an if statement. To access the previous and current data-points I use price_history[-2] and price_history[-1], respectively. To see if there was a crossover, I compare the current and previous prices to the MA and determine which case I am dealing with (buy/sell signal). In the case where there is no signal, the algorithm does nothing.

6. You can use the analyze(context, perf) statement to carry out extra analysis (like plotting) when the backtest is finished. The perf object is simply the performance DataFrame we also store in a pickle file. But when used within the algorithm, we should refer to it as perf and there is no need for loading it.

As compared to the Buy and Hold strategy, you might have noticed the periods where the portfolio value is flat. That is because when we sell the asset (and before buying again), we only hold cash.

In our case, the Simple Moving Average strategy outperformed the Buy and Hold one. The ending value of the portfolio (including cash) is 1784.12 USD for the SMA strategy, while it is 1714.68 USD in the case of the simpler one.

In this article, I have shown how to use the zipline framework to carry out the backtesting of trading strategies. Once you get familiar with the library, it is easy to test out different strategies.
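To make point 5 concrete, here is a standalone sketch of the crossover check. The function names and the 3-day window are illustrative (the article uses a 20-day SMA inside a zipline algorithm); only the comparison logic mirrors the description above.

```python
# Hypothetical helper mirroring the crossover logic of point 5 above:
# compare the previous and current prices to the moving average.

def sma(values):
    return sum(values) / len(values)

def crossover_signal(price_history, window=3):
    """Return 'buy', 'sell' or None given prices ordered oldest to newest."""
    previous, current = price_history[-2], price_history[-1]
    moving_average = sma(price_history[-window:])
    if previous < moving_average < current:
        return "buy"      # price crossed the SMA upwards
    if previous > moving_average > current:
        return "sell"     # price crossed the SMA downwards
    return None           # no crossover -> do nothing

print(crossover_signal([10, 9, 8, 12]))   # price jumps above the SMA -> buy
print(crossover_signal([10, 11, 12, 8]))  # price drops below the SMA -> sell
```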
In a future article, I will cover using more advanced trading strategies based on technical analysis. As always, any constructive feedback is welcome. You can reach out to me on Twitter or in the comments. You can find the code used for this article on my GitHub.

Liked the article? Become a Medium member to continue learning by reading without limits. If you use this link to become a member, you will support me at no extra cost to you. Thanks in advance and see you around!

Below you can find the next articles in the series:

* importing custom data to use with zipline (link)
* evaluating the performance of trading strategies (link)
* building algorithmic trading strategies based on Technical Analysis (link)
* building algorithmic trading strategies based on the mean-variance analysis (link)

I recently published a book on using Python for solving practical tasks in the financial domain. If you are interested, I posted an article introducing the contents of the book. You can get the book on Amazon or Packt’s website.
Java - Files and I/O
The java.io package contains nearly every class you might ever need to perform input and output (I/O) in Java. All these streams represent an input source and an output destination. The stream in the java.io package supports many kinds of data, such as primitives, objects, localized characters, etc.

A stream can be defined as a sequence of data. There are two kinds of Streams βˆ’

InputStream βˆ’ The InputStream is used to read data from a source.

OutputStream βˆ’ The OutputStream is used for writing data to a destination.

Java provides strong but flexible support for I/O related to files and networks, but this tutorial covers very basic functionality related to streams and I/O. We will see the most commonly used examples one by one βˆ’

Java byte streams are used to perform input and output of 8-bit bytes. Though there are many classes related to byte streams, the most frequently used classes are FileInputStream and FileOutputStream. Following is an example which makes use of these two classes to copy an input file into an output file βˆ’

Example

import java.io.*;

public class CopyFile {
   public static void main(String args[]) throws IOException {
      FileInputStream in = null;
      FileOutputStream out = null;

      try {
         in = new FileInputStream("input.txt");
         out = new FileOutputStream("output.txt");

         int c;
         while ((c = in.read()) != -1) {
            out.write(c);
         }
      } finally {
         if (in != null) {
            in.close();
         }
         if (out != null) {
            out.close();
         }
      }
   }
}

Now let's have a file input.txt with the following content βˆ’

This is test for copy file.

As a next step, compile the above program and execute it, which will result in creating output.txt file with the same content as we have in input.txt.
So let's put the above code in CopyFile.java file and do the following βˆ’

$javac CopyFile.java
$java CopyFile

Java Byte streams are used to perform input and output of 8-bit bytes, whereas Java Character streams are used to perform input and output for 16-bit unicode. Though there are many classes related to character streams, the most frequently used classes are FileReader and FileWriter. Though internally FileReader uses FileInputStream and FileWriter uses FileOutputStream, the major difference is that FileReader reads two bytes at a time and FileWriter writes two bytes at a time.

We can re-write the above example, which makes the use of these two classes to copy an input file (having unicode characters) into an output file βˆ’

Example

import java.io.*;

public class CopyFile {
   public static void main(String args[]) throws IOException {
      FileReader in = null;
      FileWriter out = null;

      try {
         in = new FileReader("input.txt");
         out = new FileWriter("output.txt");

         int c;
         while ((c = in.read()) != -1) {
            out.write(c);
         }
      } finally {
         if (in != null) {
            in.close();
         }
         if (out != null) {
            out.close();
         }
      }
   }
}

Now let's have a file input.txt with the following content βˆ’

This is test for copy file.

As a next step, compile the above program and execute it, which will result in creating output.txt file with the same content as we have in input.txt. So let's put the above code in CopyFile.java file and do the following βˆ’

$javac CopyFile.java
$java CopyFile

All the programming languages provide support for standard I/O where the user's program can take input from a keyboard and then produce an output on the computer screen. If you are aware of C or C++ programming languages, then you must be aware of three standard devices STDIN, STDOUT and STDERR. Similarly, Java provides the following three standard streams βˆ’

Standard Input βˆ’ This is used to feed the data to user's program and usually a keyboard is used as standard input stream and represented as System.in.
Standard Output βˆ’ This is used to output the data produced by the user's program and usually a computer screen is used for standard output stream and represented as System.out.

Standard Error βˆ’ This is used to output the error data produced by the user's program and usually a computer screen is used for standard error stream and represented as System.err.

Following is a simple program, which creates InputStreamReader to read standard input stream until the user types a "q" βˆ’

Example

import java.io.*;

public class ReadConsole {
   public static void main(String args[]) throws IOException {
      InputStreamReader cin = null;

      try {
         cin = new InputStreamReader(System.in);
         System.out.println("Enter characters, 'q' to quit.");
         char c;
         do {
            c = (char) cin.read();
            System.out.print(c);
         } while(c != 'q');
      } finally {
         if (cin != null) {
            cin.close();
         }
      }
   }
}

Let's keep the above code in ReadConsole.java file and try to compile and execute it as shown in the following program. This program continues to read and output the same character until we press 'q' βˆ’

$javac ReadConsole.java
$java ReadConsole
Enter characters, 'q' to quit.
1
1
e
e
q
q

As described earlier, a stream can be defined as a sequence of data. The InputStream is used to read data from a source and the OutputStream is used for writing data to a destination. Here is a hierarchy of classes to deal with Input and Output streams.
The two important streams are FileInputStream and FileOutputStream, which would be discussed in this tutorial.

This stream is used for reading data from the files. Objects can be created using the keyword new and there are several types of constructors available.

Following constructor takes a file name as a string to create an input stream object to read the file βˆ’

InputStream f = new FileInputStream("C:/java/hello");

Following constructor takes a file object to create an input stream object to read the file. First we create a file object using File() method as follows βˆ’

File f = new File("C:/java/hello");
InputStream in = new FileInputStream(f);

Once you have an InputStream object in hand, there is a list of helper methods which can be used to read from the stream or to do other operations on the stream.

public void close() throws IOException{}
This method closes the file input stream. Releases any system resources associated with the file. Throws an IOException.

protected void finalize() throws IOException {}
This method cleans up the connection to the file. Ensures that the close method of this file input stream is called when there are no more references to this stream. Throws an IOException.

public int read(int r) throws IOException{}
This method reads the specified byte of data from the InputStream. Returns an int. Returns the next byte of data and -1 will be returned if it's the end of the file.

public int read(byte[] r) throws IOException{}
This method reads r.length bytes from the input stream into an array. Returns the total number of bytes read. If it is the end of the file, -1 will be returned.

public int available() throws IOException{}
Gives the number of bytes that can be read from this file input stream. Returns an int.
There are other important input streams available, for more detail you can refer to the following links βˆ’

ByteArrayInputStream

DataInputStream

FileOutputStream is used to create a file and write data into it. The stream would create a file, if it doesn't already exist, before opening it for output. Here are two constructors which can be used to create a FileOutputStream object.

Following constructor takes a file name as a string to create an output stream object to write the file βˆ’

OutputStream f = new FileOutputStream("C:/java/hello")

Following constructor takes a file object to create an output stream object to write the file. First, we create a file object using File() method as follows βˆ’

File f = new File("C:/java/hello");
OutputStream out = new FileOutputStream(f);

Once you have an OutputStream object in hand, there is a list of helper methods, which can be used to write to the stream or to do other operations on the stream.

public void close() throws IOException{}
This method closes the file output stream. Releases any system resources associated with the file. Throws an IOException.

protected void finalize() throws IOException {}
This method cleans up the connection to the file. Ensures that the close method of this file output stream is called when there are no more references to this stream. Throws an IOException.

public void write(int w) throws IOException{}
This method writes the specified byte to the output stream.

public void write(byte[] w)
Writes w.length bytes from the mentioned byte array to the OutputStream.
There are other important output streams available, for more detail you can refer to the following links βˆ’

ByteArrayOutputStream

DataOutputStream

Example

Following is the example to demonstrate InputStream and OutputStream βˆ’

import java.io.*;

public class fileStreamTest {
   public static void main(String args[]) {
      try {
         byte bWrite [] = {11,21,3,40,5};
         OutputStream os = new FileOutputStream("test.txt");
         for(int x = 0; x < bWrite.length ; x++) {
            os.write( bWrite[x] );   // writes the bytes
         }
         os.close();

         InputStream is = new FileInputStream("test.txt");
         int size = is.available();

         for(int i = 0; i < size; i++) {
            System.out.print((char)is.read() + " ");
         }
         is.close();
      } catch (IOException e) {
         System.out.print("Exception");
      }
   }
}

The above code would create file test.txt and would write given numbers in binary format. Same would be the output on the stdout screen.

There are several other classes that we would be going through to get to know the basics of File Navigation and I/O.

File Class

FileReader Class

FileWriter Class

A directory is a File which can contain a list of other files and directories. You use File object to create directories, to list down files available in a directory. For complete detail, check a list of all the methods which you can call on File object and what are related to directories.

There are two useful File utility methods, which can be used to create directories βˆ’

The mkdir( ) method creates a directory, returning true on success and false on failure. Failure indicates that the path specified in the File object already exists, or that the directory cannot be created because the entire path does not exist yet.
The mkdirs() method creates both a directory and all the parents of the directory.

Following example creates "/tmp/user/java/bin" directory βˆ’

Example

import java.io.File;

public class CreateDir {
   public static void main(String args[]) {
      String dirname = "/tmp/user/java/bin";
      File d = new File(dirname);
      // Create directory now.
      d.mkdirs();
   }
}

Compile and execute the above code to create "/tmp/user/java/bin".

Note βˆ’ Java automatically takes care of path separators on UNIX and Windows as per conventions. If you use a forward slash (/) on a Windows version of Java, the path will still resolve correctly.

You can use list( ) method provided by File object to list down all the files and directories available in a directory as follows βˆ’

Example

import java.io.File;

public class ReadDir {
   public static void main(String[] args) {
      File file = null;
      String[] paths;

      try {
         // create new file object
         file = new File("/tmp");

         // array of files and directory
         paths = file.list();

         // for each name in the path array
         for(String path:paths) {
            // prints filename and directory name
            System.out.println(path);
         }
      } catch (Exception e) {
         // if any error occurs
         e.printStackTrace();
      }
   }
}

This will produce the following result based on the directories and files available in your /tmp directory βˆ’

Output

test1.txt
test2.txt
ReadDir.java
ReadDir.class
How to catch exceptions in JavaScript?
To catch exceptions in JavaScript, use try...catch...finally. JavaScript implements the try...catch...finally construct as well as the throw operator to handle exceptions. You can catch programmer-generated and runtime exceptions, but you cannot catch JavaScript syntax errors. You can try to run the following code to learn how to catch exceptions in JavaScript βˆ’ Live Demo <html> <head> <script> <!-- function myFunc() { var a = 100; try { alert("Value of variable a is : " + a ); } catch ( e ) { alert("Error: " + e.description ); } finally { alert("Finally block will always execute!" ); } } //--> </script> </head> <body> <p>Click the following to see the result:</p> <form> <input type = "button" value = "Click Me" onclick = "myFunc();" /> </form> </body> </html>
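The example above catches an exception inside a browser page; the throw operator mentioned earlier can also raise your own exceptions. Here is a minimal Node-style sketch (the function and variable names are illustrative) showing throw, catch and the always-executing finally block together:

```javascript
// Minimal sketch of throw with try...catch...finally (illustrative names).
function divide(a, b) {
   if (b === 0) {
      throw new Error("Cannot divide by zero"); // programmer-generated exception
   }
   return a / b;
}

let message = "";
let cleanedUp = false;

try {
   divide(10, 0);
} catch (e) {
   message = e.message;   // the thrown Error is caught here
} finally {
   cleanedUp = true;      // the finally block always executes
}

console.log(message, cleanedUp);   // Cannot divide by zero true
```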
How to Scrape Multiple URLs with Python: Tutorial | by FrancΜ§ois St-Amant | Towards Data Science
In this article, I will show you three ways to scrape data from multiple URLs. More specifically, I will show how to loop over the page number, loop over a manually created list of URLs and finally, loop over a scraped list of URLs. I am assuming in this tutorial that you have some super basic knowledge of web scraping. If you need a quick refresher on how to inspect and scrape a website, check this out. towardsdatascience.com I will be scraping data from hostels in the beautiful city of Barcelona from Hostelworld, the best website to find hostels anywhere in the world. Alright, now let’s begin! This is the simplest, most straightforward way of scraping multiple pages. Let’s begin by looking at the end of the URL we are scraping the hostels from (full URL available at the end of the article): We see that for the first page, we have page=1. For the second page, we would have page=2, and so on. Therefore, all we need to do is create a β€œfor” loop where we change the very last number. For each page, the loop will collect the information we specified. Here is the code to collect the distance from city centre, the price of a dorm bed, the price of a private room and the average rating given by previous customers for all the hostels found in the first 2 pages of the website. I use selenium here because the hostelworld pages are JavaScript rendered, which BeautifulSoup cannot handle. I scraped the β€œprice-title 5” element because this element allows us to know whether the price is for a dorm or a private room. The sleep function is useful to control the rate at which we make requests to the website server (to avoid slowing down the servers), but it’s also useful to make sure selenium has found the information we want before it keeps going. Normally, we would move on to cleaning the data to make it usable, but I will do this at the very end with the last method. Let’s move on to the next method. 
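The page-number loop from the first method boils down to formatting the page number into the URL and visiting each page in turn. A sketch of that idea (the URL pattern here is illustrative, not Hostelworld's real query string, and the real script uses selenium to load each page):

```python
# Sketch of looping over page numbers (illustrative URL pattern).
base_url = "https://www.example.com/search?city=Barcelona&page={}"

page_urls = [base_url.format(page) for page in range(1, 3)]  # pages 1 and 2
for url in page_urls:
    # in the real script: driver.get(url), collect elements, then sleep()
    print(url)
```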
That’s great, but what if the different URLs you want to scrape don’t have the page number you can loop through? Also, what if I want specific information that is only available on the actual page of the hostel?

Well, the first way to do this is to manually create a list of URLs, and loop through that list. Here is the code to create the list of URLs for the first two hostels:

url = ['https://www.hostelworld.com/pwa/hosteldetails.php/Casa-Gracia-Barcelona-Hostel/Barcelona/45620?from=2020-07-03&to=2020-07-05&guests=1',
'https://www.hostelworld.com/pwa/hosteldetails.php/Itaca-Hostel/Barcelona/1279?from=2020-07-03&to=2020-07-05&guests=1']

Then, you could create a new β€œfor” loop that goes over every element of the list and collects the information you want, in exactly the same way as shown in the first method.

That works if you have just a few URLs, but imagine if you have a 100, 1,000 or even 10,000 URLs! Surely, creating a list manually is not what you want to do (unless you got a loooot of free time)! Thankfully, there is a better/smarter way to do things.

Here we are, the last method covered in this tutorial. Let’s look closely at the Hostelworld page we are scraping. We see that every hostel listing has a href attribute, which specifies the link to the individual hostel page. That’s the information we want. The method goes as follows:

1. Create a β€œfor” loop scraping all the href attributes (and so the URLs) for all the pages we want.
2. Clean the data and create a list containing all the URLs collected.
3. Create a new loop that goes over the list of URLs to scrape all the information needed.
4. Clean the data and create the final dataframe.
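The first two steps (collect the href attributes, then clean them into a list of full URLs) can be sketched as below. The /pwa/ prefix and the hostelworld.com domain are the ones the article uses; the scraped hrefs here are made up for illustration.

```python
# Sketch of cleaning a scraped href list. The /pwa/ prefix and the domain
# follow the article; the hrefs themselves are made-up examples.
scraped_hrefs = [
    "/pwa/hosteldetails.php/Some-Hostel/Barcelona/123",
    "/ads/banner",                       # unwanted link to filter out
    "/pwa/hosteldetails.php/Other-Hostel/Barcelona/456",
]

hostel_urls = [
    "https://www.hostelworld.com" + href
    for href in scraped_hrefs
    if href.startswith("/pwa/")
]
print(hostel_urls)
```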
It’s important to point out that if every page scraped has a different structure, the method will not work properly. The URLs need to come from the same website!

For every hostel page, I scraped the name of the hostel, the cheapest price for a bed, the number of reviews and the review score for the 8 categories (location, atmosphere, security, cleanliness, etc.). This makes the first method we saw useless, as with this one, we can get all the same information, and more!

Here is the code to get the clean list of URLs. It’s likely that unwanted links will be present in your list of URLs, as was the case here. Generally, there will almost always be a very distinct pattern to differentiate URLs you want from the other URLs (publicity, etc.). In this case, all links to hostels were starting with /pwa/. I added the string β€˜https://www.hostelworld.com’ to every element of the list. That part was needed for the URLs to work in the coming loop.

Now that we have the list of clean URLs, we can scrape all the information we want on every hostel page by looping through the list. Since every iteration takes about 15–20 seconds, I will only do it for the first 10 hostels here. You could easily change that by modifying the range.

When I scraped the number of reviews, since that information was present twice on every page, I used the [-1] to only get the number of reviews the last time it was found. There generally were many price options (depending on the type of dorm). The last price given was always the cheapest one, which is what I wanted to keep. The try/except loop basically keeps the last price if more than one is found, and keeps the price as is if only one is found. This type of loop is a great way to deal with potential errors!

With all the data collected, here is the code to clean it and put it into a dataframe:

Here is the head of the final dataframe:

There you have it, three different ways of scraping over multiple pages/URLs.
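As a concrete illustration of the price handling described above, here is a small sketch of the try/except idea: keep the last (cheapest) price listed, and fall back gracefully when nothing was scraped. The function name and the fallback to None are my own choices, not the article's exact code.

```python
# Sketch of the try/except price handling (names and fallback are mine).
def cheapest_price(prices):
    """Keep the last listed price; return None if nothing was scraped."""
    try:
        return prices[-1]        # the last price listed was always the cheapest
    except IndexError:
        return None              # no price found on the page

print(cheapest_price([25.0, 19.5, 14.0]))  # several dorm options -> 14.0
print(cheapest_price([]))                  # nothing scraped -> None
```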
I really hope this helped and don’t forget to scrape responsibly. Thanks a lot for reading!
ReactJS | Calculator App ( Styling ) - GeeksforGeeks
29 Jan, 2021

In our previous article, we added functionality to our Calculator app and successfully created a fully functional calculator application using React. But it does not look good despite being fully functional. This is because of the lack of CSS in the code. Let's add CSS to our app to make it look more attractive and beautiful.

Remember we had created a file named β€œindex.css” initially? We will write all our CSS code in this file. But before that, let's include this file in our index.js file so that we can immediately see the effect of the changes we make in our CSS in the browser. Write the below line of code in our index.js file at the top:

import './index.css';

Now, let us begin writing our CSS. The very first thing we will do is set default values for all elements. Write the below code at the top of the index.css file:

CSS

*{ margin:0px; padding:0px; border-radius: 0px; box-sizing: border-box; font-size: 110%;}

#root{ text-align:center;}

The next thing we will do is add style to our CalculatorTitle component. We had given this element the className β€˜calculator-title’, so we will add styles using this class. We will align the title to the center and add padding, margin, width, background color, text color, etc. The below code is used to style the CalculatorTitle component:

CSS

.calculator-title{ font-size:30px; background: #fff; width: 400px; padding: 10px 10px; margin: 0 auto; margin-top: 20px; margin-bottom: 20px; border-radius: 2px; border: 2px solid black; color: #4CAF50;}

The next thing is to style our calculator. Add the below code for the parent element with className β€œmainCalc”.

CSS

.mainCalc{ margin:0px; padding:0px; border-radius: 0px; box-sizing: border-box;}

Now, we will style the input fields of the ScreenRow component. The class name given to this element is β€˜screen-row’. We will add width, background, color, padding, etc. to this element.
The below code is used for this purpose:

CSS

.screen-row input{ width: 400px; background: #ddd; border: 0px; color: #222; padding: 10px; text-align: right;}

The last thing left is to style the buttons. The below code is used to style the buttons of the Calculator app.

CSS

input[type="button"]{ width: 100px; background: #4CAF50; border: 1px solid #222; padding: 10px 20px; color: black; text-align: center; text-decoration: none; display: inline-block; font-size: 16px;}

input[type="button"]:active{ background: #ccc;}

Filename- index.css: After adding all the above pieces of code in the index.css file, the index.css file will look like the below code.

CSS

*{ margin:0px; padding:0px; border-radius: 0px; box-sizing: border-box; font-size: 110%;}

.calculator-title{ font-size:30px; background: #fff; width: 400px; padding: 10px 10px; margin: 0 auto; margin-top: 20px; margin-bottom: 20px; border-radius: 2px; border: 2px solid black; color: #4CAF50;}

.mainCalc{ margin:0px; padding:0px; border-radius: 0px; box-sizing: border-box;}

input[type='button']{ width: 100px; background: #4CAF50; border: 1px solid #222; padding: 10px 20px; color: black; text-align: center; text-decoration: none; display: inline-block; font-size: 16px;}

input[type='button']:active{ background: #ccc;}

#root{ text-align:center;}

.screen-row input{ width: 400px; background: #ddd; border: 0px; color: #222; padding: 10px; text-align: right;}

Output: You can see the change in the app in the browser window. You will now have the exact same app as shown in the very first article with the exact same functionalities. Below is a glimpse of the final ready project:

Reaching this far was not easy. We have learned a lot through this project and there is a lot more left to learn about React, which we will see in the upcoming articles. This was just an introductory project to get your hands ready on React. You can download the complete code of this project from this GitHub repo.
Apache Solr - Faceting
Faceting in Apache Solr refers to the classification of the search results into various categories. In this chapter, we will discuss the types of faceting available in Apache Solr βˆ’

Query faceting βˆ’ It returns the number of documents in the current search results that also match the given query.

Date faceting βˆ’ It returns the number of documents that fall within certain date ranges.

Faceting commands are added to any normal Solr query request, and the faceting counts come back in the same query response. Using the field faceting, we can retrieve the counts for all terms, or just the top terms in any given field.

As an example, let us consider the following books.csv file that contains data about various books.

id,cat,name,price,inStock,author,series_t,sequence_i,genre_s
0553573403,book,A Game of Thrones,5.99,true,George R.R. Martin,"A Song of Ice and Fire",1,fantasy
0553579908,book,A Clash of Kings,10.99,true,George R.R. Martin,"A Song of Ice and Fire",2,fantasy
055357342X,book,A Storm of Swords,7.99,true,George R.R. Martin,"A Song of Ice and Fire",3,fantasy
0553293354,book,Foundation,7.99,true,Isaac Asimov,Foundation Novels,1,scifi
0812521390,book,The Black Company,4.99,false,Glen Cook,The Chronicles of The Black Company,1,fantasy
0812550706,book,Ender's Game,6.99,true,Orson Scott Card,Ender,1,scifi
0441385532,book,Jhereg,7.95,false,Steven Brust,Vlad Taltos,1,fantasy
0380014300,book,Nine Princes In Amber,6.99,true,Roger Zelazny,the Chronicles of Amber,1,fantasy
0805080481,book,The Book of Three,5.99,true,Lloyd Alexander,The Chronicles of Prydain,1,fantasy
080508049X,book,The Black Cauldron,5.99,true,Lloyd Alexander,The Chronicles of Prydain,2,fantasy

Let us post this file into Apache Solr using the post tool.
[Hadoop@localhost bin]$ ./post -c Solr_sample sample.csv

On executing the above command, all the documents mentioned in the given .csv file will be uploaded into Apache Solr.

Now let us execute a faceted query on the field author with 0 rows on the collection/core my_core. Open the web UI of Apache Solr and on the left-hand side of the page, check the checkbox facet, as shown in the following screenshot. On checking the checkbox, you will have three more text fields in order to pass the parameters of the facet search. Now, as parameters of the query, pass the following values.

q = *:*, rows = 0, facet.field = author

Finally, execute the query by clicking the Execute Query button. On executing, it will produce the following result. It categorizes the documents in the index based on author and specifies the number of books contributed by each author.

Following is the Java program to execute the above faceted query against the Apache Solr index. Save this code in a file with the name HitHighlighting.java.

import java.io.IOException;
import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.FacetField.Count;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class HitHighlighting {
   public static void main(String args[]) throws SolrServerException, IOException {
      //Preparing the Solr client
      String urlString = "http://localhost:8983/solr/my_core";
      SolrClient Solr = new HttpSolrClient.Builder(urlString).build();

      //Preparing the Solr document
      SolrInputDocument doc = new SolrInputDocument();

      SolrQuery query = new SolrQuery();

      //Setting the query string
      query.setQuery("*:*");
      // Setting the number of rows
      query.setRows(0);

      // Adding the facet field
      query.addFacetField("author");

      // Creating the query request
      QueryRequest qryReq = new QueryRequest(query);

      // Creating the query response
      QueryResponse resp = qryReq.process(Solr);

      // Retrieving the response fields
      System.out.println(resp.getFacetFields());

      List<FacetField> facetFields = resp.getFacetFields();

      for (int i = 0; i < facetFields.size(); i++) {
         FacetField facetField = facetFields.get(i);
         List<Count> facetInfo = facetField.getValues();

         for (FacetField.Count facetInstance : facetInfo) {
            System.out.println(facetInstance.getName() + " : "
               + facetInstance.getCount() + " [drilldown qry:"
               + facetInstance.getAsFilterQuery());
         }
      }
   }
}

Compile the above code by executing the following commands in the terminal −

[Hadoop@localhost bin]$ javac HitHighlighting.java
[Hadoop@localhost bin]$ java HitHighlighting

On executing the above command, you will get the following output.

[author:[George R.R. Martin (3), Lloyd Alexander (2), Glen Cook (1), Isaac Asimov (1), Orson Scott Card (1), Roger Zelazny (1), Steven Brust (1)]]
Split keys and values into separate objects - JavaScript
Suppose we have an object like this −

const dataset = {
   "diamonds": 77,
   "gold-bars": 28,
   "exciting-stuff": 52,
   "oil": 51,
   "sports-cars": 7,
   "bitcoins": 40
};

We are required to write a JavaScript function that takes one such object and returns an array of objects in which the keys and their values are split apart. Therefore, for the above object, the output should be −

const output = [
   {"asset": "diamonds", "quantity": 77},
   {"asset": "gold-bars", "quantity": 28},
   {"asset": "exciting-stuff", "quantity": 52},
   {"asset": "oil", "quantity": 51},
   {"asset": "sports-cars", "quantity": 7},
   {"asset": "bitcoins", "quantity": 40}
];

Following is the code −

const dataset = {
   "diamonds": 77,
   "gold-bars": 28,
   "exciting-stuff": 52,
   "oil": 51,
   "sports-cars": 7,
   "bitcoins": 40
};

const splitKeyValue = obj => {
   const keys = Object.keys(obj);
   const res = [];
   for (let i = 0; i < keys.length; i++) {
      res.push({
         'asset': keys[i],
         'quantity': obj[keys[i]]
      });
   }
   return res;
};

console.log(splitKeyValue(dataset));

This will produce the following output on console −

[
   { asset: 'diamonds', quantity: 77 },
   { asset: 'gold-bars', quantity: 28 },
   { asset: 'exciting-stuff', quantity: 52 },
   { asset: 'oil', quantity: 51 },
   { asset: 'sports-cars', quantity: 7 },
   { asset: 'bitcoins', quantity: 40 }
]
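The same result can be sketched more compactly with Object.entries and map; the "asset" and "quantity" key names below are just the illustrative names used above −

```javascript
const dataset = {
   "diamonds": 77,
   "gold-bars": 28
};

// Object.entries yields [key, value] pairs; map reshapes each pair
// into an object with the illustrative "asset"/"quantity" keys.
const splitKeyValue = obj =>
   Object.entries(obj).map(([asset, quantity]) => ({ asset, quantity }));

console.log(splitKeyValue(dataset));
// [ { asset: 'diamonds', quantity: 77 }, { asset: 'gold-bars', quantity: 28 } ]
```

This relies on Object.entries preserving the same key order that Object.keys would produce, so both versions return the entries in insertion order.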
CSS | invert() Function - GeeksforGeeks
03 Jun, 2020

The invert() function is an inbuilt function that is used to apply a filter to an image, inverting the colors of the sample image.

Syntax:

invert( amount )

Parameters: This function accepts a single parameter amount which holds the amount of inversion. The value of invert can be set as a number or a percentage. The value 0% represents the original image and 100% represents the fully inverted image.

The invert() function internally uses the following formula to compute the inverse of the image:

amount * (255 - value) + (1 - amount) * value

The amount of inversion is controlled by the variable amount, which lies in the 0–1 (floating-point) range; the passed percentage is converted to a value between 0 and 1. The variable value is the color value of the pixel. (255 - value) gives the color after subtracting the color value from the maximum pixel value, assuming that the value of the pixel is in the range 0–255 (though the input image sample space could be stretched/scaled to meet the specified criteria).

Below example illustrates the invert() function in CSS:

Example:

<!DOCTYPE html>
<html>
<head>
   <title>CSS invert() Function</title>
   <style>
      h1 {
         color: green;
      }
      body {
         text-align: center;
      }
      .invert_effect {
         filter: invert(100%);
      }
   </style>
</head>
<body>
   <h1>GeeksforGeeks</h1>
   <h2>CSS invert() function</h2>
   <img class="invert_effect" src=
"https://media.geeksforgeeks.org/wp-content/cdn-uploads/20190710102234/download3.png"
      alt="GeeksforGeeks logo">
</body>
</html>

Output:

Supported Browsers: The browsers supported by the invert() function are listed below:

Google Chrome
Internet Explorer
Firefox
Safari
Opera
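The formula can be exercised directly. Here is a small JavaScript sketch (the function name and sample channel values are illustrative, not part of CSS) showing how the amount blends the original and the inverted channel value:

```javascript
// Sketch of the invert() formula for a single 0-255 color channel.
// "amount" is the invert percentage expressed as a 0-1 fraction.
function invertChannel(value, amount) {
  return amount * (255 - value) + (1 - amount) * value;
}

console.log(invertChannel(0, 1));     // 255: invert(100%) flips black to white
console.log(invertChannel(200, 0));   // 200: invert(0%) leaves the channel unchanged
console.log(invertChannel(90, 0.5));  // 127.5: invert(50%) always yields mid-gray
```

Note that at 50% every channel lands on 127.5 regardless of its starting value, which is why invert(50%) turns any image into uniform gray.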
How to convert Python Dictionary to a list?
Python's dictionary class has three methods for this purpose. The methods items(), keys() and values() return view objects comprising tuples of key-value pairs, keys only, and values only, respectively. The built-in list() function converts these view objects into list objects.

>>> d1 = {'name': 'Ravi', 'age': 23, 'marks': 56}
>>> d1.items()
dict_items([('name', 'Ravi'), ('age', 23), ('marks', 56)])
>>> l1 = list(d1.items())
>>> l1
[('name', 'Ravi'), ('age', 23), ('marks', 56)]
>>> d1.keys()
dict_keys(['name', 'age', 'marks'])
>>> l2 = list(d1.keys())
>>> l2
['name', 'age', 'marks']
>>> l3 = list(d1.values())
>>> l3
['Ravi', 23, 56]
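The same three conversions can also be written with comprehensions, which is handy when you want to transform the items on the way out. This is a plain-Python sketch using the same sample dictionary:

```python
d1 = {'name': 'Ravi', 'age': 23, 'marks': 56}

pairs = [(k, v) for k, v in d1.items()]   # list of (key, value) tuples
keys = [k for k in d1]                    # iterating a dict yields its keys
values = [d1[k] for k in d1]              # look up each value by key

print(pairs)    # [('name', 'Ravi'), ('age', 23), ('marks', 56)]
print(keys)     # ['name', 'age', 'marks']
print(values)   # ['Ravi', 23, 56]
```

Because dictionaries preserve insertion order, all three lists line up with each other the same way the view-object versions do.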
VBScript - Events
VBScript's interaction with HTML is handled through events that occur when the user or browser manipulates a page. When the page loads, that is an event. When the user clicks a button, that click too is an event. Other examples of events include pressing any key, closing a window, resizing a window, etc.

Developers can use these events to execute VBScript coded responses, which cause buttons to close windows, messages to be displayed to users, data to be validated, and virtually any other type of response imaginable to occur.

Events are a part of the Document Object Model (DOM) and every HTML element has a certain set of events which can trigger VBScript code. Please go through the HTML Event Reference for a better understanding. Here, we will see a few examples to understand the relation between events and VBScript.

This is the most frequently used event type, which occurs when a user clicks the mouse's left button. You can put your validation, warning, etc., against this event type.

<html>
   <head>
      <script language = "vbscript" type = "text/vbscript">
         Function sayHello()
            msgbox "Hello World"
         End Function
      </script>
   </head>
   <body>
      <input type = "button" onclick = "sayHello()" value = "Say Hello"/>
   </body>
</html>

It will produce the following result, and when you click the Hello button, the onclick event will occur and trigger the sayHello() function.

Another important event type is onsubmit. This event occurs when you try to submit a form, so you can put your form validation against this event type. When the form is submitted by clicking on the Submit button, the message box appears.
<html>
   <head>
   </head>
   <body>
      <script language = "VBScript">
         Function fnSubmit()
            Msgbox("Hello Tutorialspoint.Com")
         End Function
      </script>

      <form action = "/cgi-bin/test.cgi" method = "post" name = "form1" onSubmit = "fnSubmit()">
         <input name = "txt1" type = "text"><br>
         <input name = "btnButton1" type = "submit" value = "Submit">
      </form>
   </body>
</html>

These two event types will help you to create nice effects with images or even with text. The onmouseover event occurs when you bring your mouse over an element, and the onmouseout event occurs when you take your mouse away from that element.

<html>
   <head>
   </head>
   <body>
      <script language = "VBScript">
         Function AlertMsg()
            Msgbox("ALERT !")
         End Function

         Function onmouse_over()
            Msgbox("Onmouse Over")
         End Function

         Sub txt2_OnMouseOut()
            Msgbox("Onmouse Out !!!")
         End Sub

         Sub btnButton_OnMouseOut()
            Msgbox("onmouse out on Button !")
         End Sub
      </script>

      <form action = "page.cgi" method = "post" name = "form1">
         <input name = "txt1" type = "text" OnMouseOut = "AlertMsg()"><br>
         <input name = "txt2" type = "text" OnMouseOver = "onmouse_over()">
         <br><input name = "btnButton" type = "button" value = "Submit">
      </form>
   </body>
</html>

It will produce a result when you hover the mouse over the text boxes and also when you move the focus away from the text box and the button.

For each of the standard HTML 4 events, the script attribute indicates a VBScript function to be executed against that event.
Versioning Machine Learning Experiments vs Tracking Them | by Maria Khalusova | Towards Data Science
When working on a machine learning project it is common to run numerous experiments in search of a combination of an algorithm, parameters and data preprocessing steps that would yield the best model for the task at hand. To keep track of these experiments Data Scientists used to log them into Excel sheets due to a lack of a better option. However, being mostly manual, this approach had its downsides. To name a few, it was error-prone, inconvenient, slow, and completely detached from the actual experiments.

Luckily, over the last few years experiment tracking has come a long way and we have seen a number of tools appear on the market that improve the way experiments can be tracked, e.g. Weights&Biases, MLflow, Neptune. Usually such tools offer an API you can call from your code to log the experiment information. It is then stored in a database, and you use a dashboard to compare experiments visually. With that, once you change your code, you no longer have to worry about forgetting to write the results down — that's done automatically for you. The dashboards help with visualization and sharing. This is a great improvement in keeping track of what has been done, but...

Spotting an experiment that has produced the best metrics in a dashboard does not automatically translate into having that model ready for deployment. It's likely that you need to reproduce the best experiment first. However, the tracking dashboards and tables that you directly observe are weakly connected to the experiments themselves. Thus, you still may need to semi-manually trace your steps back to stitch together the exact code, data and pipeline steps to reproduce the experiment. Could this be automated?

In this blog post I'd like to talk about versioning experiments instead of tracking them, and how this can result in easier reproducibility on top of the benefits of experiment tracking.
To achieve this I will be using DVC, an open source tool that is mostly known in the context of Data Versioning (after all it's in the name). However, this tool can actually do a lot more. For instance, you can use DVC to define ML pipelines, run multiple experiments, and compare metrics. You can also version all the moving parts that contribute to an experiment.

To start versioning experiments with DVC you'll need to initialize it from any Git repo as shown below. Note that DVC expects you to have your project structured in a certain logical way, and you may need to reorganize your folders a bit.

$ dvc exp init -i
This command will guide you to set up a default stage in dvc.yaml.
See https://dvc.org/doc/user-guide/project-structure/pipelines-files.
DVC assumes the following workspace structure:
├── data
├── metrics.json
├── models
├── params.yaml
├── plots
└── src
Command to execute: python src/train.py
Path to a code file/directory [src, n to omit]: src/train.py
Path to a data file/directory [data, n to omit]: data/images/
Path to a model file/directory [models, n to omit]:
Path to a parameters file [params.yaml, n to omit]:
Path to a metrics file [metrics.json, n to omit]:
Path to a plots file/directory [plots, n to omit]: logs.csv
────────────────────────────────────────────────────────────
default:
  cmd: python src/train.py
  deps:
    - data/images/
    - src/train.py
  params:
    - model
    - train
  outs:
    - models
  metrics:
    - metrics.json:
        cache: false
  plots:
    - logs.csv:
        cache: false
Do you want to add the above contents to dvc.yaml? [y/n]: y
Created default stage in dvc.yaml. To run, use "dvc exp run".
See https://dvc.org/doc/user-guide/experiment-management/running-experiments.

You may also notice that DVC assumes that you store parameters and metrics in files instead of logging them with an API.
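What that file convention looks like in a training script can be sketched as follows. This is a minimal illustration, not DVC's API: the parameter keys mirror the example params.yaml shown later in the post, the metric values are made up, and a real project would typically parse params.yaml with PyYAML.

```python
import json

# Hypothetical parameters, mirroring the example params.yaml in this post.
# Real code would load them with something like yaml.safe_load(...).
params = {"train": {"epochs": 10}, "model": {"conv_units": 16}}

def train(params):
    # ... actual model training would happen here ...
    return {"loss": 0.25183, "acc": 0.9137}  # made-up metric values

# Write metrics to the JSON file that DVC is configured to read.
metrics = train(params)
with open("metrics.json", "w") as f:
    json.dump(metrics, f)
```

The point is that both the inputs (params.yaml) and the outputs (metrics.json) are plain files, which is what makes them versionable alongside the code.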
This means you'll need to modify your code to read parameters from a YAML file and write metrics to a JSON file. Finally, when initializing, DVC automatically creates a basic pipeline and stores it in a dvc.yaml file. With that, your training code, pipeline, parameters, and metrics now live in files that can be versioned.

When set up this way, your code no longer depends on an experiment tracking API. Instead of inserting tracking API calls in your code to save experiment information in a central database, you save it in readable files. These are always available in your repo, your code stays clean, and you have fewer dependencies. Even without DVC, you can read, save, and version your experiment parameters and metrics with Git, though using plain Git is not the most convenient way to compare ML experiments.

$ git diff HEAD~1 -- params.yaml
diff --git a/params.yaml b/params.yaml
index baad571a2..57d098495 100644
--- a/params.yaml
+++ b/params.yaml
@@ -1,5 +1,5 @@
 train:
   epochs: 10
-model:
-  conv_units: 16
+model:
+  conv_units: 128

Experiment tracking databases do not capture everything you need to reproduce an experiment. One important piece that is often missing is the pipeline to run the experiment end to end. Let's take a look at the `dvc.yaml` file, the pipeline file that has been generated.

$ cat dvc.yaml
stages:
  default:
    cmd: python src/train.py
    deps:
      - data/images
      - src/train.py
    params:
      - model
      - train
    outs:
      - models
    metrics:
      - metrics.json:
          cache: false
    plots:
      - logs.csv:
          cache: false

This pipeline captures the command to run the experiment, parameters and other dependencies, metrics, plots, and other outputs. It has a single `default` stage, but you can add as many stages as you need. When treating all aspects of an experiment as code, including the pipeline, it becomes easier for anyone to reproduce the experiment.

In a dashboard, you can see all of your experiments, and I mean ALL of them.
At a certain point you will have so many experiments that you will have to sort, label, and filter them simply to keep up. With experiment versioning you have more flexibility in what you share and how you organize things. For instance, you can try an experiment in a new Git branch. If something goes wrong or the results are uninspiring, you can choose not to push the branch. This way you can reduce some unnecessary clutter that you would otherwise encounter in an experiment tracking dashboard.

At the same time, if a particular experiment looks promising, you can push it to your repo along with your code so that the results stay in sync with the code and pipeline. The results are shared with the same people, and it's already organized using your existing branch name. You can keep iterating on that branch, start a new one if an experiment diverges too much, or merge into your main branch to make it your primary model.

Even without DVC, you can change your code to read parameters from files and write metrics to other files, and track changes with Git. However, DVC adds a few ML-specific capabilities on top of Git that can simplify comparing and reproducing the experiments.

Large data and models aren't easily tracked in Git, but with DVC you can track them using your own storage, yet they are Git-compatible. When initialized, DVC starts tracking the `models` folder, making Git ignore it yet storing and versioning it so you can back up versions anywhere and check them out alongside your experiment code.

Codifying the entire experiment pipeline is a good first step towards reproducibility, but it still leaves it to the user to execute that pipeline. With DVC you can reproduce the entire experiment with a single command. Not only that, but it will check for cached inputs and outputs and skip recomputing data that's been generated before, which can be a massive time saver at times.
$ dvc exp run
'data/images.dvc' didn't change, skipping
Stage 'default' didn't change, skipping
Reproduced experiment(s): exp-44136
Experiment results have been applied to your workspace.
To promote an experiment to a Git branch run:

    dvc exp branch <exp> <branch>

While Git branching is a flexible way to organize and manage experiments, there are often too many experiments to fit any Git branching workflow. DVC tracks experiments so you don't need to create commits or branches for each one:

$ dvc exp show
┏━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
┃ Experiment              ┃ Created      ┃ loss    ┃ acc    ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
│ workspace               │ -            │ 0.25183 │ 0.9137 │
│ mybranch                │ Oct 23, 2021 │ -       │ -      │
│ ├── 9a4ff1c [exp-333c9] │ 10:40 AM     │ 0.25183 │ 0.9137 │
│ ├── 138e6ea [exp-55e90] │ 10:28 AM     │ 0.25784 │ 0.9084 │
│ ├── 51b0324 [exp-2b728] │ 10:17 AM     │ 0.25829 │ 0.9058 │
└─────────────────────────┴──────────────┴─────────┴────────┘

Once you decide which of these experiments are worth sharing with the team, they can be converted into Git branches:

$ dvc exp branch exp-333c9 conv-units-64
Git branch 'conv-units-64' has been created from experiment 'exp-333c9'.
To switch to the new branch run:

    git checkout conv-units-64

This way you will avoid creating a clutter of branches in your repo, and can focus on comparing only promising experiments.

To summarize, experiment versioning allows you to codify your experiments in such a way that your experiment logs are always connected to the exact data, code, and pipeline that went into the experiment. You have control over which experiments end up shared with your colleagues for comparison, and this can prevent clutter.
Finally, reproducing a versioned experiment becomes as easy as running a single command, and it can even take less time than initially, if some of the pipeline steps have cached outputs that are still relevant.

Thank you for staying with me until the end of the post! To learn more about managing experiments with DVC, check out the docs.
Lodash - merge method
_.merge(object, [sources])

This method is like _.assign except that it recursively merges own and inherited enumerable string keyed properties of source objects into the destination object. Source properties that resolve to undefined are skipped if a destination value exists. Array and plain object properties are merged recursively. Other objects and value types are overridden by assignment. Source objects are applied from left to right. Subsequent sources overwrite property assignments of previous sources.

object (Object) − The destination object.

[sources] (...Object) − The source objects.

(Object) − Returns object.

var _ = require('lodash');

var object = {
   'a': [{ 'b': 2 }, { 'd': 4 }]
};

var other = {
   'a': [{ 'c': 3 }, { 'e': 5 }]
};

console.log(_.merge(object, other));

Save the above program in tester.js. Run the following command to execute this program.

\>node tester.js

{ a: [ { b: 2, c: 3 }, { d: 4, e: 5 } ] }
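To see why the recursive behavior matters, here is a plain-JavaScript contrast that does not require lodash: Object.assign replaces nested objects wholesale, while a minimal deep-merge sketch (plain objects only; arrays and the other _.merge details deliberately left out) combines them the way _.merge does.

```javascript
// Shallow copy: the whole nested object is overwritten by the source.
console.log(Object.assign({}, { a: { b: 1 } }, { a: { c: 2 } }));
// { a: { c: 2 } }

// Minimal deep-merge sketch for plain objects.
function deepMerge(target, source) {
  for (const key of Object.keys(source)) {
    const bothObjects =
      typeof target[key] === 'object' && target[key] !== null &&
      typeof source[key] === 'object' && source[key] !== null;
    if (bothObjects) {
      deepMerge(target[key], source[key]); // recurse into nested objects
    } else {
      target[key] = source[key];           // otherwise assign directly
    }
  }
  return target;
}

console.log(deepMerge({ a: { b: 1 } }, { a: { c: 2 } }));
// { a: { b: 1, c: 2 } }
```

Unlike this sketch, _.merge also merges arrays index by index (as the tester.js output above shows) and skips source properties that resolve to undefined.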
TypeScript - While Loop
The while loop executes the instructions each time the condition specified evaluates to true. In other words, the loop evaluates the condition before the block of code is executed.

while(condition) {
   // statements if the condition is true
}

var num:number = 5;
var factorial:number = 1;

while(num >= 1) {
   factorial = factorial * num;
   num--;
}
console.log("The factorial is " + factorial);

The above code snippet uses a while loop to calculate the factorial of the value in the variable num.

On compiling, it will generate the following JavaScript code −

//Generated by typescript 1.8.10
var num = 5;
var factorial = 1;
while (num >= 1) {
   factorial = factorial * num;
   num--;
}
console.log("The factorial is " + factorial);

It produces the following output −

The factorial is 120
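Because the condition is evaluated before the body, a while loop's body may never run at all. A small sketch with illustrative values:

```typescript
let num: number = 0;
let iterations: number = 0;

// The condition is false on entry, so the body is skipped entirely.
while (num >= 1) {
    iterations++;
    num--;
}
console.log("The loop body ran " + iterations + " times");
// The loop body ran 0 times
```

This is the key difference from a do...while loop, whose body always executes at least once before the condition is checked.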
Last occurrence of some element in a list in Python
In this article, we are going to see different ways to find the last occurrence of an element in a list.

Let's see how to find the last occurrence of an element by reversing the given list. Follow the below steps to write the code.

Initialize the list.
Reverse the list using the reverse method.
Find the index of the element using the index method.
The actual index of the element is len(list) - index - 1.
Print the final index.

Let's see the code.

# initializing the list
words = ['eat', 'sleep', 'drink', 'sleep', 'drink', 'sleep', 'go', 'come']
element = 'sleep'

# reversing the list
words.reverse()

# finding the index of element
index = words.index(element)

# printing the final index
print(len(words) - index - 1)

If you run the above code, then you will get the following result.

5

Another way is to find all the indexes and get the max from them. Let's see the code.

# initializing the list
words = ['eat', 'sleep', 'drink', 'sleep', 'drink', 'sleep', 'go', 'come']
element = 'sleep'

# finding the last occurrence
final_index = max(index for index, item in enumerate(words) if item == element)

# printing the index
print(final_index)

If you run the above code, then you will get the following result.

5

If you have any queries in the article, mention them in the comment section.
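A related sketch avoids mutating the original list: search a reversed copy made with slicing, then map the reversed position back to the original index.

```python
# Non-mutating variant: search a reversed copy, then map the position back.
words = ['eat', 'sleep', 'drink', 'sleep', 'drink', 'sleep', 'go', 'come']
element = 'sleep'

# words[::-1] is a reversed copy; index() finds the first match from the end.
last_index = len(words) - 1 - words[::-1].index(element)
print(last_index)  # 5
```

Unlike the reverse() approach above, words itself is left untouched, which matters when the list is needed again later.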
Creating a Doubly Linked List using Javascript
Let's start by defining a simple class with a constructor that initializes the head and tail to null. We'll also define another structure on the prototype of the class that'll represent each node in the linked list.

class LinkedList {
   constructor() {
      this.head = null;
      this.tail = null;
      this.length = 0;
   }
}

LinkedList.prototype.Node = class {
   constructor(data) {
      this.data = data;
      this.next = null;
      this.prev = null;
   }
};

Let's also create a display function that'll help us see what our list looks like. This function works as follows.

It starts from the head.
It iterates over the list using currNode = currNode.next till currNode becomes null, i.e., till we reach the end.
It prints the data for each of the iterations.

Now let's have a look at how we'll implement this −

display() {
   let currNode = this.head;
   while (currNode != null) {
      console.log(currNode.data + " -> ");
      currNode = currNode.next;
   }
}
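For display to have anything to traverse, the list needs an insertion method. Here is a self-contained sketch combining the ideas above with a minimal append method; append is our illustrative addition (it uses plain node objects rather than the prototype Node class, and this display returns the string instead of logging each node):

```javascript
class LinkedList {
   constructor() {
      this.head = null;
      this.tail = null;
      this.length = 0;
   }
   append(data) {
      const node = { data: data, next: null, prev: null };
      if (this.tail === null) {
         // empty list: the new node is both head and tail
         this.head = node;
         this.tail = node;
      } else {
         // link the new node after the current tail
         node.prev = this.tail;
         this.tail.next = node;
         this.tail = node;
      }
      this.length++;
   }
   display() {
      let currNode = this.head;
      const parts = [];
      while (currNode != null) {
         parts.push(currNode.data);
         currNode = currNode.next;
      }
      return parts.join(" -> ");
   }
}

const list = new LinkedList();
list.append(1);
list.append(2);
list.append(3);
console.log(list.display());   // 1 -> 2 -> 3
```

Note that append maintains both directions of each link (prev and next), which is what makes the list doubly linked rather than singly linked.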