How to create NSIndexPath for TableView in iOS? | An index path is generally a set of two values representing the row and section of a table view. An index path can be created in Objective-C as well as Swift, as both are native languages of iOS development.
indexPathForRow:inSection: is a class method in iOS. To create an index path we need to know the section and row it should point to. Below are the ways of creating an IndexPath.
To create an IndexPath in Objective-C we can use:
NSIndexPath *myIP = [NSIndexPath indexPathForRow:row inSection:section];
Example
NSIndexPath *myIP = [NSIndexPath indexPathForRow:5 inSection:2];
To create an IndexPath in Swift we can use:
IndexPath(row: rowIndex, section: sectionIndex)
Example
IndexPath(row: 2, section: 4)
Both of these are generally used in the cellForRow(at:) method, which can be used as
let cell = tblView.cellForRow(at: IndexPath(row: 5, section: 2))
How to display text Right-to-Left Using HTML? | To display text Right-to-Left (rtl) in HTML, use the CSS property direction. Use it with the style attribute. The style attribute specifies an inline style for an element.
The CSS direction property accepts two values: ltr (the default, left-to-right) and rtl (right-to-left).
Use the style attribute with the CSS property direction to display text Right-to-Left. Just keep in mind, the usage of style attribute overrides any style set globally. It will override any style set in the HTML <style> tag or external style sheet.
You can try to run the following code to display text right-to-left using HTML
Live Demo
<!DOCTYPE html>
<html>
<head>
<title>HTML Text Direction</title>
</head>
<body>
<p style = "direction: rtl;">
Displayed from right to left
</p>
</body>
</html>
HTML | Adding Youtube videos - GeeksforGeeks | 18 Sep, 2018
In the early days, adding a video to a webpage was a real challenge since one had to convert the videos to different formats to make them play in all browsers. Converting videos to different formats can be difficult and time-consuming. Now, adding a video to a webpage has become as easy as copying and pasting, and a very apt solution to add videos to a website is using YouTube. YouTube hosts videos for a user so that they can be further embedded on webpages.
YouTube displays an id like “BGAk3_2zi8k”, whenever a video is saved or played. This id is further used as a referral for the youtube video to be embedded in the webpage.
Upload the video that you want to embed on your webpage on YouTube.
Copy the video id of the video.
Use iframe, object or ’embed’ element in your web page for video definition.
Use the src attribute to point to the URL of the video.
Dimensions of the player can be adjusted using the width and height attributes.
Steps to get the Video Id of a youtube video :
Open the youtube video whose Id you want.
Right click on the video, from the menu select “Stats for nerds”.
The first value in the box is the Video ID.
The video id of this video is : il_t1WVLNxk
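Alternatively, if you have the video's watch URL, the id can be pulled out programmatically — a small Python sketch using only the standard library (the helper name is illustrative):

```python
from urllib.parse import urlparse, parse_qs

def video_id(url):
    # For a standard watch URL the id lives in the "v" query parameter,
    # e.g. https://www.youtube.com/watch?v=il_t1WVLNxk
    query = parse_qs(urlparse(url).query)
    return query["v"][0]

print(video_id("https://www.youtube.com/watch?v=il_t1WVLNxk"))  # il_t1WVLNxk
```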
<!DOCTYPE html>
<html>
<body>
    <iframe height="480" width="500"
            src="https://www.youtube.com/embed/il_t1WVLNxk">
    </iframe>
</body>
</html>
Output :
Enabling YouTube autoplay feature: YouTube’s autoplay feature can be used to automatically play a video when a user visits that page.
There are two types of parameters that can be used :
Value 1 : The video starts playing automatically when the player loads.
Value 0 (default case) : The video does not play automatically when the player loads.
<!DOCTYPE html>
<html>
<body>
    <iframe height="480" width="500"
            src="https://www.youtube.com/embed/il_t1WVLNxk?autoplay=1">
    </iframe>
</body>
</html>
Output :
A playlist of YouTube videos can be created using the comma character, which separates the list of videos to play.
The loop parameter is used to loop the number of playbacks on the videos :
Value 1 : The video will keep on looping again and again.
Value 0 (default case) : The video plays only once.
<!DOCTYPE html>
<html>
<body>
    <iframe height="480" width="500"
            src="https://www.youtube.com/embed/il_t1WVLNxk?playlist=AS_dAPN1Dlk,AfxHGNRtFac&loop=1">
    </iframe>
</body>
</html>
Output :
The YouTube player offers controls like play, pause, volume etc. that can be disabled or enabled using the controls parameter. There are two values available that can be used :
Value 1 (default case) : Player controls are displayed.
Value 0 : Player controls are not displayed.
For Enabling Controls :
<!DOCTYPE html>
<html>
<body>
    <iframe width="440" height="372"
            src="https://www.youtube.com/embed/il_t1WVLNxk?controls=1">
    </iframe>
</body>
</html>
Output :
For Disabling Controls :
<!DOCTYPE html>
<html>
<body>
    <iframe width="440" height="372"
            src="https://www.youtube.com/embed/il_t1WVLNxk?controls=0">
    </iframe>
</body>
</html>
Output :
<!DOCTYPE html>
<html>
<body>
    <object width="480" height="500"
            data="https://www.youtube.com/embed/il_t1WVLNxk">
    </object>
</body>
</html>
Output :
<!DOCTYPE html>
<html>
<body>
    <embed width="480" height="500"
           src="https://www.youtube.com/embed/il_t1WVLNxk">
</body>
</html>
Output :
Note: Nowadays, the object and the embed tag are not appreciated, therefore it is recommended to use the iframe tag.
E-mails Notification Bot with Python | by Joe T. Santhanavanich | Towards Data Science | In data science, the project’s data sources are probably changing over time, and the analysis result changes quickly over time too. Sometimes a little change in the result might not interest you. But to keep tracking this change by yourself may be too tiresome...
This article shows a guide to scripting the e-mail automation bot with Python!
First, let’s prepare an e-mail sender account. I recommend using G-mail as it is easy to register and allows you to adjust the security option manually. You may use your personal G-mail or other email providers of your choice. But I recommend creating a fresh G-mail for our e-mail bot. 🤖
After the G-mail account is ready, lower the account security to let Python access your G-mail account and send e-mails by going to account security settings and adjusting as follows:
Deactivate the sign-in with the phone
Deactivate the sign-in with a 2-step verification
Turn on the less secure app access option
Now, we arrive at a fun part to write a script to log in to your e-mail account and send an e-mail. I suggest using a simple module smtplib to do this job. First, let’s create a send_email.py file and start with importing modules as follows:
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
Then, let's prepare all variables for your e-mail sender and recipient's information, and e-mail contents as structured below. You may input several email recipients into the email_recipients list.
#Email Account
email_sender_account = "<Your Sender Account>"
email_sender_username = "<Your Sender Username>"
email_sender_password = "<Your Sender Password>"
email_smtp_server = "<SMTP, eg smtp.gmail.com for gmail>"
email_smtp_port = <SMTP Port, eg 587 for gmail>

#Email Content
email_recipients = ["<recipient1>", "<recipient2>", ".."]
email_subject = "<Email Subject>"
email_body = "<html of your body here>"
After that, we just need to log in to the e-mail server. Then, we will use for loop to generate and send emails to all recipients.
#login to email server
server = smtplib.SMTP(email_smtp_server, email_smtp_port)
server.starttls()
server.login(email_sender_username, email_sender_password)

#For loop, sending emails to all email recipients
for recipient in email_recipients:
    print(f"Sending email to {recipient}")
    message = MIMEMultipart('alternative')
    message['From'] = email_sender_account
    message['To'] = recipient
    message['Subject'] = email_subject
    message.attach(MIMEText(email_body, 'html'))
    text = message.as_string()
    server.sendmail(email_sender_account, recipient, text)

#All emails sent, log out.
server.quit()
Until here, you have a Python script to send e-mails to all recipients. Now, let’s apply it to the example real-world use case.
In this example, I will show how to apply the Python script we created above to send e-mails to the recipients reporting new COVID-19 cases.
First, we need to get updated COVID-19 data. We can scrape this dataset easily from Worldometers by using the BeautifulSoup4 and requests modules with a few lines of Python as follows:
#import modules
import requests, datetime
from bs4 import BeautifulSoup

#Set the endpoint: Worldometers
url = "https://www.worldometers.info/coronavirus/"
req = requests.get(url)
bsObj = BeautifulSoup(req.text, "html.parser")
data = bsObj.find_all("div", class_ = "maincounter-number")
num_confirmed = data[0].text.strip().replace(',', '')
num_deaths = data[1].text.strip().replace(',', '')
num_recovered = data[2].text.strip().replace(',', '')
With this script, it finds the location of a class called maincounter-number which represents the number of COVID-19 data: confirmed cases, deaths, and recovered cases. These data are stored in num_confirmed, num_deaths, num_recovered respectively. Then, we can also get the current time using datetime module and pass these variables to create an HTML structure for the E-mail body as follows:
TimeNow = datetime.datetime.now()
email_body = '<html><head></head><body>'
email_body += '<style type="text/css"></style>'
email_body += f'<h2>Reporting COVID-19 Cases at {TimeNow}</h2>'
email_body += f'<h1><b>Confirmed cases</b>: {num_confirmed}</h1>'
email_body += f'<h1><b>Recovered cases</b>: {num_recovered}</h1>'
email_body += f'<h1><b>Deaths </b>: {num_deaths}</h1>'
email_body += '<br>Reported By'
email_body += '<br>COVID-19 BOT</body></html>'
Now, let’s combine everything together and check that it works by inputting all the e-mail info and then running python <filename>.py. And yes, it works! After running the script, I got an e-mail with real-time updated COVID-19 cases directly in my inbox:
You may take a look at the fully combined Python script below:
Now, it is time to automate your bot to run periodically every xx mins or yy hours or zz days. Also, you should add a condition to send messages because you won’t need 100 reported emails in 10 minutes. So here are what you can do:
Setting a threshold to send an e-mail notification only when new XXXX cases have been reported. A common if..else.. algorithm works well.
Schedule your bot using while True: ... / time.sleep()
Schedule your bot using CRON on your Mac/Linux or using Windows Task Scheduler on Windows.
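As an illustrative sketch of the threshold-plus-schedule idea (the 100-case threshold and the helper names in the comments are assumptions, not from the original script):

```python
NOTIFY_THRESHOLD = 100   # illustrative: only e-mail after this many new cases

def should_notify(current_cases, last_cases, threshold=NOTIFY_THRESHOLD):
    # A common if..else style check: notify only once enough new cases accumulated
    return current_cases - last_cases >= threshold

# Scheduling sketch (the while True / time.sleep variant); fetch_cases and
# send_report are hypothetical helpers standing in for the scraping and
# e-mail code from earlier in the article.
# last_reported = 0
# while True:
#     current = fetch_cases()
#     if should_notify(current, last_reported):
#         send_report(current)
#         last_reported = current
#     time.sleep(600)   # check every 10 minutes

print(should_notify(250, 100))  # True  (150 new cases)
print(should_notify(150, 100))  # False (only 50 new cases)
```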
If you want an alternative way to manage your script to run periodically, I recommend using PM2 to do it.
This article gives a walkthrough of building a Python script for sending emails automatically and periodically, with the example of a COVID-19 e-mail notification bot. The e-mail sender used in this tutorial is G-mail, but it can be replaced with any e-mail provider. The main Python modules of this article include smtplib, requests, datetime, and bs4, with my main concept being to use the easiest way to get the job done. ✌
I hope you enjoyed this article and found it useful for your daily work or projects. Please, feel free to contact me if you have any questions.
About me & Check out all my blog contents: Link
Be Safe and Healthy! Thank you for Reading. 👋😄
Maximum no. of apples that can be kept in a single basket - GeeksforGeeks | 13 Apr, 2021
Given ‘N’ baskets and a total of ‘G’ green and ‘R’ red apples, the task is to distribute all the apples among the baskets and find the maximum number of apples that can be kept in a single basket. Note: none of the baskets is empty.
Examples:
Input: N = 2, R = 1, G = 1
Output: Maximum apple kept is = 1
Input: N = 2, R = 1, G = 2
Output: Maximum apple kept is = 2
Approach: The idea is to first put one apple in each basket; all the remaining apples can then be put together in any one basket to maximize its count. Since that basket already holds one apple, the maximum number of apples will be (No_of_apples – No_of_baskets) + 1. Since it is mentioned that none of the baskets is empty, the number of apples will always be equal to or greater than the number of baskets.
Below is the implementation of the above approach:
C++
Java
Python3
C#
PHP
Javascript
// C++ implementation of above approach
#include <bits/stdc++.h>
using namespace std;

// Function that will calculate the maximum count
int Number(int Basket, int Red, int Green)
{
    return (Green + Red) - Basket + 1;
}

// Driver code
int main()
{
    int Basket = 3, Red = 5, Green = 3;
    cout << "Maximum apple kept is = "
         << Number(Basket, Red, Green);
    return 0;
}
// Java implementation of above approach
import java.io.*;

class GFG {

    // Function that will calculate the maximum count
    static int Number(int Basket, int Red, int Green)
    {
        return (Green + Red) - Basket + 1;
    }

    // Driver code
    public static void main(String[] args)
    {
        int Basket = 3, Red = 5, Green = 3;
        System.out.println("Maximum apple kept is = "
                           + Number(Basket, Red, Green));
    }
    // This code is contributed by akt_mit
}
# Python 3 implementation of above approach

# Function that will calculate
# the maximum count
def Number(Basket, Red, Green):
    return (Green + Red) - Basket + 1

# Driver code
if __name__ == '__main__':
    Basket = 3
    Red = 5
    Green = 3
    print("Maximum apple kept is =",
          Number(Basket, Red, Green))

# This code is contributed by
# Sanjit_Prasad
// C# implementation of above approach
using System;

public class GFG {

    // Function that will calculate the maximum count
    static int Number(int Basket, int Red, int Green)
    {
        return (Green + Red) - Basket + 1;
    }

    // Driver code
    static public void Main()
    {
        int Basket = 3, Red = 5, Green = 3;
        Console.WriteLine("Maximum apple kept is = "
                          + Number(Basket, Red, Green));
    }
    // This code is contributed by @ajit
}
<?php
// PHP implementation of above approach

// Function that will calculate
// the maximum count
function Number($Basket, $Red, $Green)
{
    return ($Green + $Red) - $Basket + 1;
}

// Driver code
$Basket = 3;
$Red = 5;
$Green = 3;

echo "Maximum apple kept is = ",
     Number($Basket, $Red, $Green);

// This code is contributed by ANKITRAI1
?>
<script>
// Javascript implementation of above approach

// Function that will calculate the maximum count
function Number(Basket, Red, Green)
{
    return (Green + Red) - Basket + 1;
}

// Driver code
var Basket = 3, Red = 5, Green = 3;

document.write("Maximum apple kept is = "
               + Number(Basket, Red, Green));

// This code is contributed by rutvik_56
</script>
Maximum apple kept is = 6
PyMC3 and Bayesian Inference Towards Non-Linear Models: Part 1 | by Adam Watts | Towards Data Science | Probabilistic programming languages such as Stan have provided scientists and engineers accessible means into the world of Bayesian statistics. As someone who is comfortable programming in Python, I have been trying to find the perfect Bayesian library that complements Python’s wonderful and succinct syntax.
One of the biggest challenges in learning new material and a new library is finding sufficient resources and examples. I couldn’t find an accessible article on using Python and Bayesian inference for parameter and model uncertainty quantification (UQ) towards non-linear models. Therefore, I hope this article helps others trying to apply these methods towards their own research or work.
This will be a 2-part article to make it accessible to a wide audience and also make navigating and reading the articles easier.
Part 1 will quickly discuss two common libraries for Bayesian inference: PyStan and PyMC3. We will also set up a non-linear function to be used for Bayesian inference. This will include parameter optimization through the SciPy library. We will use the optimal parameters as a starting point for our Bayesian inference and check our Bayesian method. If you’re already comfortable with parameter optimization, feel free to skip directly to the PyMC3 section in part 2.
Part 2 is where we begin using PyMC3 and dive into the syntax and results. For validation, we will also see how the Bayesian methods compare to the frequentist approach for parameter uncertainty quantification.
I am not going to dive much into Bayes’ theorem and the entire Bayesian framework; there are plenty of articles on Medium to satisfy your curiosity. The primary outcome of this article is for the reader to understand how to utilize PyMC3 for quantifying the uncertainty in the parameter(s) and model used in linear, and in particular, non-linear models. The Jupyter Notebook can be found here.
Stan is available for Python in the form of PyStan; however, the syntax is rather off-putting. Let’s take a look at an example of creating a model in PyStan, which is assigned to a variable of type string. This model will then be compiled to C++ with the help of PyStan.
Above is a simple example of a quadratic model with two parameters, α and β, with the response y. The model is very verbose and difficult to debug when one starts to use more complicated models. There must be something more Pythonic...
I believe PyMC3 is a perfect library for people entering the world of probabilistic programming with Python. PyMC3 uses native Python syntax, making it easier to debug and more intuitive. Let’s dive into an example and see the prowess of the library.
I will be using an everyday example of a simple non-linear equation for this example with the hopes that the concepts will be easier for comprehension and retention.
Let’s begin with a cooling of a hot object using Newton’s law of cooling. We are assuming no thermal gradients exist in the object (lumped capacitance).
For those who are rusty at ordinary differential equations (ODE), fear not. The solution is below.
T(t) represents the temperature of the drink at a given time t. τ is our parameter of interest which governs the cooling rate for this problem. T_env is our environmental temperature and T_0 is the temperature of the drink at time t=0. We are assuming that T_env is invariant with time.
In Python:
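A minimal implementation of the solution above, with the signature matching the later call newton_cooling(tau_opt, T_0, T_env, time_data) (this sketch is reconstructed from the equation, not copied from the original notebook):

```python
import numpy as np

def newton_cooling(tau, T_0, T_env, time):
    # Solution of Newton's law of cooling:
    # T(t) = T_env + (T_0 - T_env) * exp(-t / tau)
    return T_env + (T_0 - T_env) * np.exp(-np.asarray(time) / tau)

# At t = 0 the object is at its initial temperature
print(newton_cooling(15, 100, 30, 0))  # 100.0
```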
Let us also assume that we know ahead of time, a priori, that the environmental temperature is 30C and that the object started at 100C. Finally, we will assume that the unknown parameter τ=15.
We will first plot the theoretical solution for the cooling of a hot drink, shown as the solid blue curve. Here we can get an intuition for the equations and parameters.
We will also generate some data with artificial noise to simulate real-life data, shown here as red scatter points. We will assume that the data is IID. The response, temperature of the drink, is Gaussian, i.e., T(t) ~ N(μ, σ²). The mean, μ, will follow the solution above. We will address shortly the problem that we don’t know τ from the data itself. The standard deviation in the response is σ = 1.5. That is how we generated the fake noise; we will also be able to estimate this from the data too.
From this simulated data, we want to estimate the cooling parameter τ and estimate the uncertainty in its value. Ideally, our estimate parameter and the uncertainty will encapsulate the true value. Similarly, we also would like our uncertainty in the standard error of the model to encapsulate σ.
To clarify what I mean by the term encapsulate, assume we estimate a parameter γ=100 ± 10. We want the true parameter to be within the range of 90–110 which is the same as 100 ± 10.
Before we can estimate the uncertainty in τ, we must first optimize the value of τ that fits the data the best. In more technical terms, we wish to find τ such that it minimizes the residual sum of squares.
Where the function within the left-hand side of the bracket is our data and the function within the right-hand side of the bracket is the solution to our ODE.
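The article performs this optimization with SciPy; as a dependency-free sketch of the same idea, a plain grid search over the residual sum of squares works too (the simulated data below is illustrative, not the article’s exact dataset):

```python
import numpy as np

def newton_cooling(tau, T_0, T_env, t):
    return T_env + (T_0 - T_env) * np.exp(-np.asarray(t) / tau)

# Illustrative simulated data: sigma = 1.5 noise around the tau = 15 curve
rng = np.random.default_rng(0)
time_data = np.linspace(0, 60, 30)
T_data = newton_cooling(15, 100, 30, time_data) + rng.normal(0, 1.5, time_data.size)

def rss(tau):
    # Residual sum of squares between the data and the model
    return np.sum((T_data - newton_cooling(tau, 100, 30, time_data)) ** 2)

# A plain grid search stands in for scipy.optimize here
taus = np.linspace(1, 50, 4901)
tau_opt = taus[np.argmin([rss(t) for t in taus])]
print(round(tau_opt, 2))  # should land near the true value of 15
```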
We find τ_opt = 14.93, a pretty close estimate to the real parameter of τ = 15. Returning back to our initial problem, remember that the response, temperature of the drink, is Gaussian, i.e., T(t) ~ N(μ, σ²). We finally have an estimate for τ, and we can check for normality.
Remember that if our response came from a Gaussian distribution, then our residuals (data minus the predictions) must also be Gaussian.
residuals = T_data - newton_cooling(tau_opt, T_0, T_env, time_data)
Think, linear transformation, or even more fundamental, the shift one makes for a Z-score.
The residuals, i.e. the difference between the data and our model, look great with a mean of roughly 0. They appear stochastic and homoscedastic, with roughly most of the data between the two red dashed lines.
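As a quantitative complement to the visual check, one can standardize the residuals (the Z-score shift mentioned above) and verify that roughly 95% fall within ±2σ — a rough sketch with simulated residuals standing in for the real ones:

```python
import numpy as np

rng = np.random.default_rng(1)
residuals = rng.normal(0, 1.5, 200)   # stand-in for T_data minus the model predictions

# Standardize, then apply the empirical (68-95-99.7) rule
z = (residuals - residuals.mean()) / residuals.std()
within_2sigma = np.mean(np.abs(z) < 2)
print(within_2sigma)  # roughly 0.95 for Gaussian residuals
```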
Let’s see how we did for estimating σ:
sigma_error_square = (1/(len(time_data)-1))*ss_min
sigma_error = np.sqrt(sigma_error_square)
print(sigma_error)
We find that the estimate for σ is 1.42, pretty close to 1.5
If you’re still with me, fantastic! Now that we have estimates for τ and σ, we can finally begin our dive into Bayesian statistics and PyMC3. Check out part 2 of the series here. | [
Set the width of an element to 50% in Bootstrap 4 | To set the width of an element to 50%, use the .w-50 class in Bootstrap.
You can try to run the following code to set element’s width −
Live Demo
<!DOCTYPE html>
<html>
<head>
<title>Bootstrap Example</title>
<link rel = "stylesheet" href = "https://maxcdn.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css">
<script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src = "https://maxcdn.bootstrapcdn.com/bootstrap/4.1.1/js/bootstrap.min.js"></script>
</head>
<body>
<div class = "container">
<h3>Set element width</h3>
<div class = "w-100 bg-danger">Normal width</div>
<div class = "w-50 bg-success">Width is 50%</div>
</div>
</body>
</html>
Building a Data Warehouse in Python Using PostgreSQL | by M Khorasani | Towards Data Science | With the abundance and proliferation of data in this day and age, there is an inherent need to store and reuse that wealth of information in a meaningful way. It is analogous to having a kitchen inundated with a multitude of utensils and tools to use, without having an organized way to manage them. Well, chances are that you’re going to end up opening your canned lunch with the rear end of a dipper, unless you warehouse up real fast.
Data warehousing is the ability to cache, tokenize, analyze and reuse your curated data on demand in an unparalleled manner. In a similar fashion to how your mother navigates around her immaculately well organized kitchen. Mind you, there is no one size fits all solution, and there are as many ways to warehouse as there are warehouses themselves.
Arguably, there are three key ingredients to implementing a successful data warehouse:
Server: first and foremost you must provision a distributed database system that is both robust and resilient.
Indexing: your database system should ideally have some form of indexing that allows you to access records at warp speed. Having a full-text index would be a bonus.
Dashboard: you should have a staging area where you can import, export, visualize and mutate your data in an immutable way.
In this tutorial we will be addressing the first and last points mentioned above by creating a data warehouse where we can store datasets, arrays and records into. In addition we will create a dashboard where we can graphically interface with our warehouse to load, retrieve, mutate and visualize our data. Perhaps in another article, we will implement the second point i.e. a full-text indexed database.
PostgreSQL otherwise known as Postgres for short, is an open source relational database system that is more often than not the database of choice for developers due to its extended capabilities and relative ease of use. Even though it is brandished as a structured database management system, it can also store non-structured data including but not limited to arrays and binary objects. Most importantly however, Postgres’s graphical user interface makes it too easy to provision and manage databases on the fly, something other database systems should take careful note of.
In our implementation of a data warehouse, we will be using a local Postgres server to store all our data in. Before we proceed, please download and install Postgres using this link. During the installation you will be prompted to set a username, password and a local TCP port to connect to your server. The default port is 5432 which you may keep as is or modify if necessary. Once the installation is complete, you may login to the server by running the pgAdmin 4 application which will open a portal on your browser as shown below.
There will be a default database labeled postgres, however you may create your own by right clicking on the Databases menu and then selecting Create to provision a new database.
Now that you have provisioned your server and database, you should install the package sqlalchemy that will be used to connect to our database through Python. You can download and install this package by typing the following command into Anaconda prompt:
pip install sqlalchemy
In addition download, install and then import all of the other necessary libraries into your Python script as follows:
Firstly, we will need to establish a connection between our records_db database and create a table where we can store records and arrays in. In addition, we will need to create another connection to the datasets_db database where we will store our datasets in:
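As a sketch of this step — shown here with the standard-library sqlite3 module so the example runs anywhere; in the article’s setup you would point SQLAlchemy’s create_engine at something like postgresql://user:password@localhost:5432/records_db and declare the details column as Postgres’s text[]:

```python
import sqlite3

# Stand-in for: engine = create_engine("postgresql://user:password@localhost:5432/records_db")
conn = sqlite3.connect(":memory:")

# name is the PRIMARY KEY we will search on later;
# in Postgres the details column would be declared as text[]
conn.execute("CREATE TABLE IF NOT EXISTS records (name TEXT PRIMARY KEY, details TEXT)")
conn.commit()

tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)  # [('records',)]
```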
As per Postgres’s naming convention, table names must start with underscores or letters (not numbers), must not contain dashes and must be less than 64 characters long. For our records table, we will create a name field with a datatype of text declared as a PRIMARY KEY and a details field as text[] which is Postgres’s notation for a single-dimension array. To acquire an exhaustive list of all the datatypes supported by Postgres, please refer to this link. We will use the name field as a primary key that will be used to search for records later on. In addition, please note that a more secure way of storing your database credentials, is to save them in a configuration file and then to invoke them as parameters in your code.
Subsequently, we will create the following five functions to write, update, read and list our data to/from our database:
Please be aware of SQL-injection vulnerabilities when concatenating strings to your queries. You may instead use parametrization to prevent SQL-injection as elaborated in this article.
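A hedged sqlite3 sketch of the write/update/read/list pattern with parametrized queries (the ? placeholders; Postgres drivers typically use %s), which avoids the SQL-injection pitfall just mentioned — the function names here are illustrative, not the article’s exact helpers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (name TEXT PRIMARY KEY, details TEXT)")

def write_record(name, details):
    # Parametrized INSERT: values are bound, never concatenated into the SQL
    conn.execute("INSERT INTO records (name, details) VALUES (?, ?)", (name, details))
    conn.commit()

def update_record(name, details):
    conn.execute("UPDATE records SET details = ? WHERE name = ?", (details, name))
    conn.commit()

def read_record(name):
    row = conn.execute("SELECT details FROM records WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None

def list_records():
    return [r[0] for r in conn.execute("SELECT name FROM records")]

write_record("demo", "first version")
update_record("demo", "second version")
print(read_record("demo"))   # second version
print(list_records())        # ['demo']
```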
Streamlit is a pure Python web framework that allows you to develop and deploy user interfaces and applications in real-time at your fingertips. For this tutorial, we will be using Streamlit to render a dashboard which we can use to interface with our Postgres database.
In the snippet shown below, we are using several text input widgets to insert the values for our records, arrays and names for the datasets. Furthermore we are using Streamlit’s functions to interactively visualize our dataset as a chart and as a dataframe.
You can run the dashboard on a local browser, by typing the following commands in Anaconda prompt. First, change your root directory to where your source code is saved:
cd C:/Users/...
Then type the following to run your app:
streamlit run file_name.py
And there you have it, a dashboard that can be used to tokenize, write, read, update, upload and visualize our data in real-time. The beauty of our data warehouse is that it can be scaled up to host as much data as you may need, all within the same structure that we have just created! If you want to learn more about the fundamentals of data warehouses, I would recommend signing up for the University of Colorado’s course on data warehousing. It’s a great way to get up to speed.
If you want to learn more about data visualization and Python, then feel free to check out the following (affiliate linked) courses:
Data anonymization layer using Snowflake and DBT | by Tommaso Peresson | Towards Data Science

HousingAnywhere is an online marketplace platform for mid to long-term rents. Like any other data-driven business we have to deal with both personal information and GDPR regulations. In this brief article, I'll walk you through our solution to anonymize PII (Personal Identifiable Information). The technologies that I'll be using are Snowflake[1], a popular data warehouse cloud solution, and DBT[2] (Data Build Tool), a data transformation tool.
Starting with some context: every day our reporting and analytics are served by data that we ingest from various sources, spanning every aspect of our business. This raises a problem: how do we prevent users and services from unnecessarily accessing raw data that exposes PIIs of HousingAnywhere's clients?
from

┌──────────────────┬──────────────────────┬─────────┐
│ NAME             │ EMAIL                │ COUNTRY │
├──────────────────┼──────────────────────┼─────────┤
│ Tommaso Peresson │ placeholder@mail.com │ IT      │
└──────────────────┴──────────────────────┴─────────┘

to

┌──────────────────┬──────────────────────┬─────────┐
│ NAME             │ EMAIL                │ COUNTRY │
├──────────────────┼──────────────────────┼─────────┤
│ 87444be...eb6b21 │ 3c33076...e1ceaf     │ IT      │
└──────────────────┴──────────────────────┴─────────┘
In order for this solution to be effective, you will need two areas in your data warehouse: a staging one to store all the raw data coming from external sources, and another to accommodate the results of the anonymization transformation. To do that I will create two databases and call them PRIVATE_SOURCES and SOURCES respectively. If you need some help to get to this stage, follow this[3] guide.
Another key element is to manage the access control and the privileges so that the PRIVATE_SOURCES database is only accessible by admins and data engineers, and the SOURCES database by potentially everybody. I will not get into depth on this topic but if you want to learn more about privileges and access control read this article[4].
We will also need a table containing the information on which column names to affect and their respective anonymization methods (anonymization_mapping) a table containing the salt[5] used for secure hashing.
Without anonymization, anybody who has access to the data warehouse could steal and use personal information. A data leak is as simple to perform as typing select * from users . Taking inspiration from this post[6] we decided to build an anonymization layer using DBT.
Let’s start to uncover our solution by looking at an example of the final usage in a model that we want to anonymize.
As you can probably see, there is a macro call right after select. It is intended to replace the * operator while adding the anonymization transformation. Our goal was to find a maintainable and scalable solution that isn't text-intensive, since we have hundreds of tables and manually selecting each column is of course not a workable approach. Let's now take a deep dive into the macros to understand the implementation details.
Let’s now take a look at the two DBT macros that make up this system.
1. Starting with anonymize_columns(): the aim of this macro is to substitute the * operator and add the anonymization transformation to specific columns.
We can say that this macro performs two different operations:
(rows 2:13) It fetches all the column names from private_sources.information_schema.columns and joins them with the anonymization_mapping on the column name. As a result, we will have a table containing the columns of the table in question paired with the right anonymization function. For example, the users table has 3 columns: name, email and country. It might be that in our specific case we want to only anonymize the name and email columns, leaving country untouched.
To do this our anonymization_mapping will have to contain these two rows indicating which columns have to be transformed and with which function:
ANONYMIZATION_MAPPING (config)

┌───────────────────┬─────────────┬───────────┐
│ TABLE_NAME_FILTER │ COLUMN_NAME │ FUNCTION  │
├───────────────────┼─────────────┼───────────┤
│ production.users  │ name        │ full_hash │
│ production.users  │ email       │ full_hash │
└───────────────────┴─────────────┴───────────┘
After joining it with the information_schema.columns in the query contained in the statement, the mapping will look like this
mapping (sql statement output)

┌───────────────────┬─────────────┬───────────┐
│ TABLE_NAME_FILTER │ COLUMN_NAME │ FUNCTION  │
├───────────────────┼─────────────┼───────────┤
│ production.users  │ name        │ full_hash │
│ production.users  │ email       │ full_hash │
│ NULL              │ country     │ NULL      │
└───────────────────┴─────────────┴───────────┘
N.B. we can also leave the TABLE_NAME_FILTER blank in the anonymization_mapping, this will anonymize every column with a matching name without filtering by table. This can be useful in the case where we have multiple tables with the same column names that have to be anonymized.
(rows 14:27) This part of the macro prints out the right column names and functions to be put in the select statement by mapping column names with the correct anonymization function using the output of the last step (mapping). For example, the output of the macro when following the previous example would be:
SHA256(name) as name,
SHA256(email) as email,
country
and the final compiled model will look like this:
select
    SHA256(name) as name,
    SHA256(email) as email,
    country
from private_sources.production.users
Effectively anonymizing the sensitive content of our table.
2. The second macro was built to group and easily use the different anonymization transformations. It simply applies the selected transformation to a column. To showcase its use I've included two different methods: one generic that hashes the whole content of a cell, and one that also applies a REGEX to the content of the cell so that only the user part of an email address is anonymized.
FROM: tompere@housinganywhere.com
TO:   SHA256(tompere+salt)@housinganywhere.com
It is also important to stress that whenever using a hashing function it is required to add a salt otherwise our hashes would be vulnerable to dictionary attacks.
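A sketch of salted hashing in Python's hashlib, for illustration (the macro above performs the analogous SHA2 call in Snowflake SQL; the salt value here is a placeholder):

```python
import hashlib

SALT = "replace-with-a-long-random-secret"  # stored securely in practice

def full_hash(value, salt=SALT):
    # Appending a salt before hashing defeats precomputed
    # dictionary / rainbow-table attacks on common values.
    return hashlib.sha256((value + salt).encode("utf-8")).hexdigest()

print(full_hash("Tommaso Peresson"))  # deterministic 64-hex-char digest
```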
The purpose of GDPR regulations is to improve consumer confidence in organizations that hold and process personal data, as well as standardizing and simplifying the free flow of information across the EU. In HousingAnywhere we always believed that the privacy of our users is key in establishing trust and confidence towards our platform and that is why we take so much care in designing our data infrastructure.
A final remark should be also made on the fact that Snowflake offers an out-of-the-box solution for anonymization called Dynamic Data Masking available only for enterprise accounts.
Thanks to Julian Smidek for being such a great manager :)
[1] DBT Cloud. Accessed 20 May 2021. https://cloud.getdbt.com/.
[2] DBT Discourse. ‘PII Anonymization and DBT — Modeling’, 26 April 2018. https://discourse.getdbt.com/t/pii-anonymization-and-dbt/29.
[3] Step 2. Create Snowflake Objects — Snowflake Documentation. Accessed 20 May 2021. https://docs.snowflake.com/en/user-guide/getting-started-tutorial-create-objects.html.
[4] ‘A Comprehensive Tutorial of Snowflake Privileges and Access Control — Trevor’s Code’. Accessed 20 May 2021. https://trevorscode.com/comprehensive-tutorial-of-snowflake-privileges-and-access-control/.
[5] Salt (Cryptography). In Wikipedia, 17 May 2021. https://en.wikipedia.org/w/index.php?title=Salt_(cryptography)&oldid=1023634352.
[6] Snowflake. ‘The Data Cloud | Snowflake | Enable the Most Critical Workloads’. Accessed 20 May 2021. https://www.snowflake.com/.
How to find available WiFi networks using Python? - GeeksforGeeks | 10 Mar, 2022
WiFi (Wireless Fidelity) is a wireless technology that allows devices such as computers (laptops and desktops), mobile devices (smartphones and wearables), and other equipment (printers and video cameras) to interface with the Internet. We can find out the names of the available WiFi networks with the help of Python. We need to have the basics of networking to help us know what we need and what we do not need.
subprocess: The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. We do not need to use pip to install it as the subprocess module comes preinstalled.
With the subprocess module, we need to use the check_output() method. We will pass a list of the arguments that make up the command: netsh, wlan, show and network. The captured output is then decoded to a string so it can be displayed.
Syntax: subprocess.check_output(args, *, stdin=None, stderr=None, shell=False, universal_newlines=False)
Parameters:
args: The arguments used to launch the process. This may be a list or a string.
stdin: Value of standard input stream to be passed as pipe(os.pipe()).
stdout: Value of output obtained from standard output stream.
stderr: also known as standard error, is the default file descriptor where a process can write error messages. Basically the value of the error(if any)
shell: It is a boolean parameter.If True the commands get executed through a new shell environment.
universal_newlines: It is a boolean parameter .If true files containing stdout and stderr are opened in universal newline mode.
Returns: Information about the networks
Now here is the code for it
Python3
# importing the subprocess module
import subprocess

# using check_output() to run netsh and capture the list of networks
devices = subprocess.check_output(['netsh', 'wlan', 'show', 'network'])

# decode it to strings
devices = devices.decode('ascii')
devices = devices.replace("\r", "")

# displaying the information
print(devices)
Output: the details of all WiFi networks currently visible to the machine are printed to the console.
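The decoded text can be parsed further if only the network names are needed. A sketch that extracts the SSIDs, assuming the English-locale netsh output format (the sample string below is illustrative):

```python
def parse_ssids(netsh_output):
    """Extract network names from `netsh wlan show network` output."""
    ssids = []
    for line in netsh_output.splitlines():
        line = line.strip()
        # Matching lines look like: "SSID 1 : HomeNetwork"
        if line.startswith("SSID") and ":" in line:
            ssids.append(line.split(":", 1)[1].strip())
    return ssids

sample = """Interface name : Wi-Fi
There are 2 networks currently visible.

SSID 1 : HomeNetwork
    Network type : Infrastructure

SSID 2 : CoffeeShop
    Network type : Infrastructure
"""
print(parse_ssids(sample))  # ['HomeNetwork', 'CoffeeShop']
```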
Total digits | Practice | GeeksforGeeks

Given a number n, count the total number of digits required to write all numbers from 1 to n.
Example 1:
Input: n = 13
Output: 17
Explanation: There are total 17
digits required to write all
numbers from 1 to 13.
Example 2:
Input: n = 4
Output: 4
Explanation: There are total 4
digits required to write all
numbers from 1 to 4.
Your Task:
You don't need to read input or print anything. Complete the function totalDigits() which takes n as input parameter and returns the total number of digits required to write all numbers from 1 to n.
Expected Time Complexity: O(logn)
Expected Auxiliary Space: O(1)
Constraints:
1<= n <=100000
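The expected O(log n) bound hints at counting whole blocks of equal-length numbers instead of iterating from 1 to n. A sketch of that idea (my own, not the official editorial solution):

```python
def total_digits(n):
    total = 0
    d, p = 1, 1  # d-digit numbers start at p = 10**(d - 1)
    while p <= n:
        hi = min(n, 10 * p - 1)    # last d-digit number not exceeding n
        total += (hi - p + 1) * d  # each of them contributes d digits
        d, p = d + 1, p * 10
    return total

print(total_digits(13))  # 17
print(total_digits(4))   # 4
```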
shashwatsatna 4 months ago
long long int count = 0;
for (int i = 1; i <= n; i++) {
    count += log10(i) + 1;
}
return count;
badgujarsachin83 7 months ago
class Solution {
public:
long long int totalDigits(long long int n){
// code here
long long int count=0;
for(int i=1;i<=n;i++){
string ans=to_string(i);
count+=ans.length();
}
return count;
}
};
Pazhanivel K 10 months ago

Easy java solution

class Solution {
    static long totalDigits(long n) {
        // code here
        String s;
        long count = 0;
        for (int i = 1; i <= n; i++) {
            s = Integer.toString(i);
            count = count + s.length();
        }
        return count;
    }
}
Online XML Editor | Editable XML Code
<?xml version="1.0"?>
<Company>
   <Employee>
      <FirstName>Tanmay</FirstName>
      <LastName>Patil</LastName>
      <ContactNo>1234567890</ContactNo>
      <Email>tanmaypatil@xyz.com</Email>
      <Address>
         <City>Bangalore</City>
         <State>Karnataka</State>
         <Zip>560212</Zip>
      </Address>
   </Employee>
</Company>
How to find the correlation matrix by considering only numerical columns in an R data frame?

While we calculate the correlation matrix for a data frame, all the columns must be numerical; if that is not the case then we get the error Error in cor("data_frame_name") : 'x' must be numeric. To solve this problem, we can either find the correlations among the numerical variables one by one or use an apply-family function.
Consider the below data frame −
set.seed(99)
x1<-rnorm(20)
x2<-rpois(20,5)
x3<-rpois(20,2)
x4<-LETTERS[1:20]
x5<-runif(20,2,10)
x6<-sample(letters[1:3],20,replace=TRUE)
df<-data.frame(x1,x2,x3,x4,x5,x6)
df
x1 x2 x3 x4 x5 x6
1 0.2139625022 7 4 A 6.423159 a
2 0.4796581346 5 1 B 7.176488 a
3 0.0878287050 7 2 C 2.372402 c
4 0.4438585075 8 3 D 6.599771 a
5 -0.3628379205 5 2 E 5.122577 c
6 0.1226740295 8 3 F 3.133224 c
7 -0.8638451881 4 2 G 2.482256 a
8 0.4896242667 4 4 H 4.532982 c
9 -0.3641169125 5 0 I 2.670717 c
10 -1.2942420067 2 3 J 8.597253 a
11 -0.7457690454 6 0 K 2.699053 a
12 0.9215503620 7 2 L 8.743498 b
13 0.7500543504 6 2 M 3.427915 c
14 -2.5085540159 10 2 N 5.928563 a
15 -3.0409340953 4 2 O 3.544168 a
16 0.0002658005 7 0 P 3.710395 c
17 -0.3940189942 2 2 Q 9.609634 c
18 -1.7450276608 5 0 R 5.886087 b
19 0.4986314508 8 2 S 5.507034 c
20 0.2709537888 4 3 T 2.137873 b
Finding the correlation matrix for columns in df −
cor(df)
Error in cor(df) : 'x' must be numeric
Here, the error means that not all the columns are numeric.
str(df)
'data.frame': 20 obs. of 6 variables:
$ x1: num 0.214 0.4797 0.0878 0.4439 -0.3628 ...
$ x2: int 7 5 7 8 5 8 4 4 5 2 ...
$ x3: int 4 1 2 3 2 3 2 4 0 3 ...
$ x4: Factor w/ 20 levels "A","B","C","D",..: 1 2 3 4 5 6 7 8 9 10 ...
$ x5: num 6.42 7.18 2.37 6.6 5.12 ...
$ x6: Factor w/ 3 levels "a","b","c": 1 1 3 1 3 3 1 3 3 1 ...
Now to find the correlation matrix for all the numeric columns, we can do the following −
cor(df[sapply(df,is.numeric)])
x1 x2 x3 x5
x1 1.00000000 0.14685889 0.23107456 0.04232205
x2 0.14685889 1.00000000 -0.02664914 -0.14822679
x3 0.23107456 -0.02664914 1.00000000 0.18971761
x5 0.04232205 -0.14822679 0.18971761 1.00000000
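The same idea carries over to other languages: filter to the numeric columns first, then correlate. A plain-Python sketch for comparison (pandas users would instead reach for df.select_dtypes(include='number').corr()):

```python
def pearson(x, y):
    # Plain Pearson correlation coefficient of two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def numeric_corr(df):
    # Keep only columns whose values are all numeric -- this mirrors
    # df[sapply(df, is.numeric)] in the R code above.
    num = {k: v for k, v in df.items()
           if all(isinstance(x, (int, float)) and not isinstance(x, bool)
                  for x in v)}
    return {a: {b: pearson(num[a], num[b]) for b in num} for a in num}

df = {"x1": [1, 2, 3, 4],
      "x2": [2.0, 4.1, 5.9, 8.0],
      "x4": ["A", "B", "C", "D"]}   # non-numeric, gets dropped
corr = numeric_corr(df)
```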
How to get an attribute value from a href link in selenium?

We can get an attribute value from a href link in Selenium. To begin with, we have to first identify the element having an anchor tag with the help of any of the locators like css, id, class, and so on.
Next, we shall use the getAttribute method and pass href as a parameter to the method. Let us investigate an element with an anchor tag having the href attribute. Here, the value of href should contain /about/about_team.htm.
Code Implementation.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;
public class HrefValue{
public static void main(String[] args) {
System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
driver.get("https://www.tutorialspoint.com/about/about_team.htm");
// identify element
WebElement l = driver.findElement(By.linkText("Team"));
// href value from getAttribute()
String v = l.getAttribute("href");
System.out.println("Href value of link: "+ v);
driver.close();
}
}
C# Program to Convert the Octal String to an Integer Number - GeeksforGeeks | 22 Jun, 2020
Given an octal number as input, we need to write a program to convert the given octal number into its equivalent integer. To convert an octal string to an integer, we have to use the Convert.ToInt32() function to convert the values.
Examples:
Input : 202
Output : 130
Input : 660
Output : 432
Convert each item to an integer using base value 8 inside a foreach loop.
Program 1:
// C# program to convert an array
// of octal strings to integers
using System;
using System.Text;

class Prog {

    static void Main(string[] args)
    {
        string[] str = { "121", "202", "003" };
        int num1 = 0;

        try {
            // using foreach loop to access each item
            // and converting to integer numbers
            foreach (string item in str) {
                num1 = Convert.ToInt32(item, 8);
                Console.WriteLine(num1);
            }
        }
        catch (Exception ex) {
            Console.WriteLine(ex.ToString());
        }
    }
}
Output:
81
130
3
Program 2:
// C# program to convert an array
// of octal strings to integers
using System;
using System.Text;

namespace geeks {

    class Prog {

        static void Main(string[] args)
        {
            string[] str = { "111", "543", "333", "607", "700" };
            int num2 = 0;

            try {
                // using foreach loop to access each item
                // and converting to integer numbers
                foreach (string item in str) {
                    num2 = Convert.ToInt32(item, 8);
                    Console.WriteLine(num2);
                }
            }
            catch (Exception ex) {
                Console.WriteLine(ex.ToString());
            }
        }
    }
}
Output:
73
355
219
391
448
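For comparison, the same conversion in Python is a one-liner: the built-in int() accepts an explicit base, and raises ValueError on invalid octal digits, much like Convert.ToInt32 throws an exception:

```python
# Convert a list of octal strings to integers using base 8.
octal_strings = ["121", "202", "003", "660"]
values = [int(s, 8) for s in octal_strings]
print(values)  # [81, 130, 3, 432]
```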
Stock Price Prediction System using 1D CNN with TensorFlow.js-Machine Learning Easy and Fun | by Gavril Ognjanovski | Towards Data Science

While I was reading about stock prediction on the web, I saw people talking about using 1D CNN to predict the stock price. This caught my attention since CNN is specifically designed to process pixel data and used in image recognition and processing, and it looked like an interesting challenge.
This solution is a frontend-only application using the Tensorflow.js library, and the best part is that it doesn't require any server side. The data is available via the "IEX Developer Platform" API service. Image 1 above is from this stock prediction application.
The code is available on my Github repository.
github.com
All the work that needs to be done can be broken down into 6 steps:
Get the Data
Generate Features
Generate ML Model
Train the ML Model
Test the ML Model
Predict with ML Model
So let’s start...
This data is gathered using the API from IEX. The API description is available at the following link. In this application I'm using the chart endpoint, which has a predefined historical period of 1y (1 year). The placeholder %company% is where we insert the company symbol entered in the application.
let url = 'https://api.iextrading.com/1.0/stock/%company%/chart/1y'
The result from this API is json array with historical data for the requested company. Below is some example from the response data.
[
  ...
  {
    "date": "2018-02-20",
    "open": 169.4694,
    "high": 171.6463,
    "low": 168.8489,
    "close": 169.2724,
    "volume": 33930540,
    "unadjustedVolume": 33930540,
    "change": -0.5713,
    "changePercent": -0.336,
    "vwap": 170.3546,
    "label": "Feb 20, 18",
    "changeOverTime": 0
  },
  ...
]
After the data is retrieved, we need to process it and prepare the feature set and the label set. While researching, I mostly found ideas where date fields were used as features. However, I didn't like this for two reasons. First, the date is a constantly increasing feature. Second, the dates are independent (not directly connected with the stock price).
What I think is more connected with the current stock price is how the stock price changed in the past. So for example the stock price today is dependent on the stock price changes from the last 7 days. For that reason we define 7 features for our test set and each one of them is labeled with the stock price for the next day.
All this preprocessing of the data is made in processData function defined in helpers.js file. In our case timePortion variable has value 7.
...
// Create the train sets
for (let i = timePortion; i < size; i++) {
    for (let j = (i - timePortion); j < i; j++) {
        trainX.push(scaledFeatures[j]);
    }
    trainY.push(scaledFeatures[i]);
}
...
What we get for trainX is a flat array of values, but later we will reshape it into a matrix with the shape [number_of_samples - 7, number_of_features].
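The same windowing can be sketched in standalone Python, keeping each window as a nested list instead of a flat array plus reshape (the names are mine, mirroring the JS loop):

```python
def make_windows(prices, time_portion=7):
    # Each sample: the previous `time_portion` prices as features,
    # labelled with the price that immediately follows them.
    train_x, train_y = [], []
    for i in range(time_portion, len(prices)):
        train_x.append(prices[i - time_portion:i])
        train_y.append(prices[i])
    return train_x, train_y

x, y = make_windows(list(range(10)), time_portion=7)
print(x[0], y[0])  # [0, 1, 2, 3, 4, 5, 6] 7
```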
Another important thing is that we first normalize our features using the minMaxScaler function defined in the helpers.js file. This will scale all the values between 0 and 1. This is important so that the model fits the data better and trains faster when there is a lot of data. If you want to know more about min-max normalization, you can find references at the end of this blog.
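Min-max scaling and its inverse are simple enough to sketch in a few lines (mirroring what minMaxScaler / minMaxInverseScaler in helpers.js do):

```python
def min_max_scale(values):
    # Map values into [0, 1]; return min/max so the mapping is invertible.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values], lo, hi

def min_max_inverse(scaled, lo, hi):
    # Undo the scaling to recover the original values.
    return [s * (hi - lo) + lo for s in scaled]

scaled, lo, hi = min_max_scale([10.0, 20.0, 15.0])
print(scaled)                           # [0.0, 1.0, 0.5]
print(min_max_inverse(scaled, lo, hi))  # [10.0, 20.0, 15.0]
```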
Next step is creating the CNN model. This is done by buildCnn function in prediction.js file. This step is really simplified using the Tensorflow library. What we need to do is define sequential (linear stack of layers) tensorflow model and then add the predefined layers in order to build our CNN model.
But what is a CNN? A CNN, or Convolutional Neural Network, is a class of deep neural networks most commonly applied to analyzing visual imagery. That's why using it for predicting stock prices is an unusual and interesting challenge.
The CNN has 4 important types of layers that make it different. These are the Convolution layer, ReLU layer, Pooling layer and Fully Connected layer. Each of them has a specific task to do. However, I won't dive deep into explaining CNNs here.
Let’s continue building the CNN with Tensorflow. We defined total of 7 layers:
inputLayer — has input size [7, 1] because we have 7 features
conv1d— First convolutional layer
averagePooling1d — First average pooling layer
conv1d — Second convolutional layer
averagePooling1d — Second pooling layer
flatten — Reduce the dimension, reshape input to [number of samples, number of features]
dense — Fully connected layer using linear activation function with 1 unit which returns 1 output value
Below is the code where we define all these layers in sequential tensorflow model.
After we build our model we proceed to the next step, which is training our model.
Now that we have created our model, we need to get ready and transform our data. That means transforming our train and label sets into tensor data, since tensorflow works with it’s own type of data known as tensor(s).
This is pretty simple step. We create the tensors and reshape our features data into [number_of_samples, timePortion, 1]. timePortion is 7 in this case.
...
let tensorData = {
    tensorTrainX: tf.tensor1d(built.data.trainX)
                    .reshape([built.data.size, built.data.timePortion, 1]),
    tensorTrainY: tf.tensor1d(built.data.trainY)
};
...
Now that we have our tensors, we use them along with the model in the cnn function. There, we first set up the optimization algorithm and the loss function. We use the "adam" algorithm as the optimizer and "meanSquaredError" as the loss function.
model.compile({ optimizer: ‘adam’, loss: ‘meanSquaredError’ });
Unlike SGD (stochastic gradient descent), the Adam optimizer uses a different learning rate for each weight.
The last thing we need to do here is to call the fit function on the tensorflow model and send the trainX (features) and trainY (labels) sets. We also set the options for epochs to 100.
Training your network on each item of the set once is one epoch.
...
// Train the model
model.fit(data.tensorTrainX, data.tensorTrainY, {
    epochs: epochs
}).then(function (result) {
...
When the training is over it returns the result and the model is prepared to be used for generating predictions.
In this step our model is already prepared for making future predictions. What we will do first is use this model to predict on the same set that we trained it on. The idea behind this is that we can compare (visualize) how well our model fits the training set.
We create the prediction simply by calling predict function from the model.
var predictedX = model.predict(tensorData.tensorTrainX);
We get the predicted data by calling:
predictedX.data().then(function (pred) {...
And since we have previously normalized (scaled) our features we need to run inverse min-max operation in order to get the real feature values. We do that by calling minMaxInverseScaler function from helpers.js where we send the predicted data along with min and max values.
var predictedXInverse = minMaxInverseScaler(pred, min, max);
Now we use the plotData function defined in plot.js so we can visualize both (actual and predicted) data sets. P.S. I used Chart.js library for visualization.
What’s left to do is generating the test features for making the next day stock price prediction. This features are generated using generateNextDayPrediction function from helpers.js file.
let nextDayPrediction = generateNextDayPrediction(result.originalData, result.timePortion);
What this function does is pretty simple. It takes the last 7 stock price values from the given data and creates the test feature set (for the next-day prediction).
Then we repeat the same steps where we transform this data into tensor data and reshape it into [number_of_samples, number_of_features, 1] which in this case it will be [1, 7, 1] since we have only one test example.
Next, we call the predict function from our model.
let predictedValue = model.predict(tensorNextDayPrediction);
We get the predicted value and run inverse min-max normalization
predictedValue.data().then(function (predValue) {
    // Revert the scaled features, so we get the real values
    let inversePredictedValue = minMaxInverseScaler(predValue, min, max);
    ...
    predictedXInverse.data[predictedXInverse.data.length] = inversePredictedValue.data[0];
At the end we just add that value at the end of our predicted data (from the train set in the previous “Test the ML Model” section) so it can be visualized on the graph.
Hopefully this was clear and easy to understand. If you think that some part needs better explanation please feel free to add a comment or suggestion. For any questions feel free to contact me.
Hope you enjoyed it!
Python - Three way partitioning of an Array Example - onlinetutorialspoint
In this tutorial, we will see how to do three way partitioning of an array around a given range.
Given an array, and a range, say, [A,B], the task is to partition the array into three parts such that,
All the elements of the array that are less than A appear in the first partition
All the elements of the array that lie in the range of A to B appear in the second partition, and,
All the elements of the array that are greater than B come in the third partition
Note that, A<=B and, the individual elements in each of the partitions needn’t necessarily be sorted.
Does this sound similar to quick sort?
Yes, it is. This is very similar to the partition step of quick sort. In quick sort, we pick a pivot element in an array and place the pivot at the right index such that all the elements less than the pivot come to its left and all the elements greater than the pivot come to its right. This can be compared to three-way partitioning, where the pivot is not just a single element but a range [A,B].
Now, let us see how this problem can be done..
This can be solved by sorting the array. But that will take O(n log n) time (where n is the size of the array). Can we think of a more efficient solution that works in a single pass without actually sorting the array?
Take three pointers low, mid and high.
Set low and mid to the 0th index, and high to the (n-1)th index initially.
Say, the partition containing all the elements less than A is the first partition and the block containing all the elements greater than B is the third partition.
Use “low” to maintain the boundary of the first partition and “high” to maintain the boundary of the third partition.
While, “mid” iterates over the array and swaps the right elements to the first and third partition.
The resulting array is divided into 3 partitions.
Initialize low=0, mid=0 and high=n-1.
Use mid to iterate over the array and visit each element. At an element arr[mid],
if arr[mid] is less than A, then, swap it with arr[low] and increment both low and mid by one.
if arr[mid] is greater than B, swap it with arr[high] and decrement high by one.
Otherwise, increment mid by one.
The resulting array is the partitioned array.
Python
def threeWayPartition(array, a, b):
    low = 0
    mid = 0
    high = len(array) - 1
    while(mid <= high):
        if(array[mid] < a):
            # swap array[mid] with array[low]
            temp = array[mid]
            array[mid] = array[low]
            array[low] = temp
            low += 1
            mid += 1
        elif(array[mid] > b):
            # swap array[mid] with array[high]
            temp = array[mid]
            array[mid] = array[high]
            array[high] = temp
            high -= 1
        else:
            mid += 1
    return array

if __name__ == "__main__":
    print("Enter the array elements separated by spaces: ")
    str_arr = input().split(' ')
    arr = [int(num) for num in str_arr]
    a, b = [int(x) for x in input("Enter the range [A,B] separated by spaces (NOTE: A <= B) ").split()]
    arr = threeWayPartition(arr, a, b)
    print("The array after three way partitioning is ", arr)
Java
import java.util.*;
import java.io.*;

public class test{
    public static void main(String[] args){
        int arr[] = {2,5,27,56,17,4,9,23,76,1,45};
        int n = arr.length, a=15, b=30;
        int low=0, mid=0, high=n-1, temp;
        while(mid<=high){
            if(arr[mid] < a){
                //swap arr[mid] and arr[low]
                temp = arr[mid];
                arr[mid] = arr[low];
                arr[low] = temp;
                low++;
                mid++;
            } else if(arr[mid] > b){
                //swap arr[mid] and arr[high]
                temp = arr[mid];
                arr[mid] = arr[high];
                arr[high] = temp;
                high--;
            } else
                mid++;
        }
        System.out.println("The partitioned array is..");
        for(int i=0;i<arr.length; i++)
            System.out.print(arr[i]+" ");
        System.out.println();
    }
}
The time complexity of this algorithm is in the order of n, i.e., O(n) as it is a single pass algorithm. The algorithm is in-place and doesn’t take any extra space, making the space complexity constant i.e., O(1).
One of the important applications of this algorithm is, to sort an array with three types of elements where n>=3. Say, we have an array of 0s, 1s and 2s. Then, this approach can be employed to sort it in a single pass with constant space complexity. In this case, A=1 and B=1.
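For instance, sorting an array of 0s, 1s and 2s reduces to the same routine with the range fixed at [1, 1]. A quick sketch (this helper is not part of the original article; it just specializes the Python function above):

```python
def sort_012(arr):
    # three-way partition with the range fixed at A = B = 1
    low, mid, high = 0, 0, len(arr) - 1
    while mid <= high:
        if arr[mid] < 1:
            # 0s go to the first partition
            arr[low], arr[mid] = arr[mid], arr[low]
            low += 1
            mid += 1
        elif arr[mid] > 1:
            # 2s go to the third partition
            arr[mid], arr[high] = arr[high], arr[mid]
            high -= 1
        else:
            # 1s stay in the middle
            mid += 1
    return arr

print(sort_012([2, 0, 1, 2, 0, 1, 0]))  # [0, 0, 0, 1, 1, 2, 2]
```

Because it is the same single pass, this sort also runs in O(n) time with O(1) extra space.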
Happy Learning 🙂
{
"code": null,
"e": 158,
"s": 123,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 172,
"s": 158,
"text": "Java Examples"
},
{
"code": null,
"e": 183,
"s": 172,
"text": "C Examples"
},
{
"code": null,
"e": 195,
"s": 183,
... |
GATE | GATE-CS-2017 (Set 2) | Question 57 - GeeksforGeeks | 29 Sep, 2021
The next state table of a 2 bit saturating up-counter is given below.
The counter is built as a synchronous sequential circuit using T flip-flops. The values for T1 and T0 are:
(A) T1 = Q0Q1, T0 = Q’0Q’1
(B) T1 = Q’1Q0, T0 = Q’1 + Q’0
(C) T1 = Q1 + Q0, T0 = Q’1 + Q’0
(D) T1 = Q’1Q0, T0 = Q1 + Q0
Answer: (B)
Explanation:
Using the above excitation table,
T1 = Q’1Q0 = 0100
T0 = Q’1 + Q’0 = 1110
Therefore Option B
Another Solution
T1 and T2 are filled in by using the property that the output of a T flip-flop changes when T=1 and does not change when T=0. Thus,
T1(Q1,Q2) = Q1’ Q2
T2(Q1,Q2) = Q1’Q2’ + Q1’Q2 + Q1Q2’
= Q1’ + Q1Q2’
= Q1’ + Q2’
This solution is contributed by Abhishek Kumar.
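The answer can also be sanity-checked in software: for a T flip-flop, Q_next = Q XOR T, so plugging option (B) into a short simulation should reproduce the saturating sequence 00 → 01 → 10 → 11 → 11. A quick sketch (not part of the original solution):

```python
def next_state(q1, q0):
    # Option (B): T1 = Q1'·Q0, T0 = Q1' + Q0'
    t1 = (1 - q1) & q0
    t0 = (1 - q1) | (1 - q0)
    # T flip-flop behavior: output toggles when T = 1
    return q1 ^ t1, q0 ^ t0

state = (0, 0)
for _ in range(4):
    state = next_state(*state)
    print(state)  # (0, 1), (1, 0), (1, 1), (1, 1) — the counter saturates at 11
```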
{
"code": null,
"e": 24386,
"s": 24358,
"text": "\n29 Sep, 2021"
},
{
"code": null,
"e": 24456,
"s": 24386,
"text": "The next state table of a 2 bit saturating up-counter is given below."
},
{
"code": null,
"e": 24692,
"s": 24456,
"text": "The counter is built... |
How to Build Your Own PyTorch Neural Network Layer from Scratch | by Michael Li | Towards Data Science | This is actually an assignment from Jeremy Howard’s fast.ai course, lesson 5. I’ve showcased how easy it is to build a Convolutional Neural Networks from scratch using PyTorch. Today, let’s try to delve down even deeper and see if we could write our own nn.Linear module. Why waste your time writing your own PyTorch module while it’s already been written by the devs over at Facebook?
Well, for one, you’ll gain a deeper understanding of how all the pieces are put together. By comparing your code with the PyTorch code, you will gain knowledge of why and how these libraries are developed.
Also, once you’re done, you’ll have more confidence in implementing and using all these libraries, knowing how things work. There will be no myth to you.
And last but not least, you’ll be able to modify/tweak these modules should the situation require. And this is the difference between a noob and a pro.
OK, enough of the motivation, let’s get to it.
First of all, we need some ‘backdrop’ codes to test whether and how well our module performs. Let’s build a very simple one-layer neural network to solve the good-old MNIST dataset. The code (running in Jupyter Notebook) snippet below:
# We'll use fast.ai to showcase how to build your own 'nn.Linear' module
%matplotlib inline
from fastai.basics import *
import sys

# create and download/prepare our MNIST dataset
path = Config().data_path()/'mnist'
path.mkdir(parents=True)
!wget http://deeplearning.net/data/mnist/mnist.pkl.gz -P {path}

# Get the images downloaded into data set
with gzip.open(path/'mnist.pkl.gz', 'rb') as f:
    ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')

# Have a look at the images and shape
plt.imshow(x_train[0].reshape((28,28)), cmap="gray")
x_train.shape

# convert numpy into PyTorch tensor
x_train, y_train, x_valid, y_valid = map(torch.tensor, (x_train, y_train, x_valid, y_valid))
n, c = x_train.shape
x_train.shape, y_train.min(), y_train.max()

# prepare dataset and create fast.ai DataBunch for training
bs = 64
train_ds = TensorDataset(x_train, y_train)
valid_ds = TensorDataset(x_valid, y_valid)
data = DataBunch.create(train_ds, valid_ds, bs=bs)

# create a simple MNIST logistic model with only one Linear layer
class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(784, 10, bias=True)

    def forward(self, xb):
        return self.lin(xb)

model = Mnist_Logistic()
lr = 2e-2
loss_func = nn.CrossEntropyLoss()

# define update function with weight decay
def update(x, y, lr):
    wd = 1e-5
    y_hat = model(x)
    # weight decay
    w2 = 0.
    for p in model.parameters():
        w2 += (p**2).sum()
    # add to regular loss
    loss = loss_func(y_hat, y) + w2*wd
    loss.requires_grad = True
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p.sub_(lr * p.grad)
            p.grad.zero_()
    return loss.item()

# iterate through one epoch and plot losses
losses = [update(x, y, lr) for x, y in data.train_dl]
plt.plot(losses);
These codes are quite self-explanatory. We used the fast.ai library for this project. Download the MNIST pickle file and unzip it, transfer it into a PyTorch tensor, then stuff it into a fast.ai DataBunch object for further training. Then we created a simple neural network with only one Linear layer. We also write our own update function instead of using the torch.optim optimizers since we could be writing our own optimizers from scratch as the next step of our PyTorch learning journey. Finally, we iterate through the dataset and plot the losses to see whether and how well it works.
All PyTorch modules/layers are extended from torch.nn.Module.
class myLinear(nn.Module):
Within the class, we’ll need an __init__ dunder function to initialize our linear layer and a forward function to do the forward calculation. Let’s look at the __init__ function first.
We’ll use the PyTorch official document as a guideline to build our module. From the document, an nn.Linear module has the following attributes:
So we’ll get these three attributes in:
def __init__(self, in_features, out_features, bias=True):
    super().__init__()
    self.in_features = in_features
    self.out_features = out_features
    self.bias = bias
The class also needs to hold weight and bias parameters so it can be trained. We also initialize those.
self.weight = torch.nn.Parameter(torch.randn(out_features, in_features))
self.bias = torch.nn.Parameter(torch.randn(out_features))
Here we used torch.nn.Parameter to set our weight and bias, otherwise, it won’t train.
Also, note that we used torch.randn instead of what’s described in the document to initialize the parameters. This is not the best way of doing weights initialization, but our purpose is to get it to work first, we’ll tweak it in our next iteration.
OK, now that the __init__ part is done, let’s move on to forward function. This is actually the easy part:
def forward(self, input):
    _, y = input.shape
    if y != self.in_features:
        sys.exit(f'Wrong Input Features. Please use tensor with {self.in_features} Input Features')
    output = input @ self.weight.t() + self.bias
    return output
We first get the shape of the input, figure out how many columns are in the input, then check whether the input size match. Then we do the matrix multiplication (Note we did a transpose here to align the weights) and return the results. We can test whether it works by giving it some data:
my = myLinear(20, 10)
a = torch.randn(5, 20)
my(a)
We have a 5x20 input, it goes through our layer and gets a 5x10 output. You should get results like this:
OK, now go back to our neural network codes and find the Mnist_Logistic class, change self.lin = nn.Linear(784,10, bias=True) to self.lin = myLinear(784, 10, bias=True). Run the code, you should see something like this plot:
As you can see it doesn’t converge quite well (around 2.5 loss with one epoch). That’s probably because of our poor initialization. Also, we didn’t take care of the bias part. Let’s fix that in the next iteration. The final code for iteration 1 looks like this:
class myLinear(nn.Module):
    def __init__(self, in_features, out_features, bias=True):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.bias = bias
        self.weight = torch.nn.Parameter(torch.randn(out_features, in_features))
        self.bias = torch.nn.Parameter(torch.randn(out_features))

    def forward(self, input):
        x, y = input.shape
        if y != self.in_features:
            sys.exit(f'Wrong Input Features. Please use tensor with {self.in_features} Input Features')
        output = input @ self.weight.t() + self.bias
        return output
We’ve handled __init__ and forward, but remember we also have a bias attribute that, if False, will not learn additive bias. We have not implemented that yet. Also, we used torch.randn to initialize the weight and bias, which is not optimum. Let’s fix this. The updated __init__ function looks like this:
def __init__(self, in_features, out_features, bias=True):
    super().__init__()
    self.in_features = in_features
    self.out_features = out_features
    self.bias = bias
    self.weight = torch.nn.Parameter(torch.Tensor(out_features, in_features))
    if bias:
        self.bias = torch.nn.Parameter(torch.Tensor(out_features))
    else:
        self.register_parameter('bias', None)
    self.reset_parameters()
First of all, when we create the weight and bias parameters, we didn’t initialize them as the last iteration. We just allocate a regular Tensor object to it. The actual initialization is done in another function reset_parameters(will explain later).
For bias, we added a condition that if True, do what we did the last iteration, but if False, will use register_parameter(‘bias’, None) to give it None value. Now for reset_parameter function, it looks like this:
def reset_parameters(self):
    torch.nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
    if self.bias is not None:
        fan_in, _ = torch.nn.init._calculate_fan_in_and_fan_out(self.weight)
        bound = 1 / math.sqrt(fan_in)
        torch.nn.init.uniform_(self.bias, -bound, bound)
The above code is taken directly from PyTorch source code. What PyTorch did with weight initialization is called kaiming_uniform_. It’s from a paper Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification — He, K. et al. (2015).
What it actually does is initialize the weight from a zero-mean distribution whose spread is bounded in terms of the layer's fan-in, which avoids the vanishing/exploding gradients issue (though we only have one layer here, when writing the Linear class we should still keep MLN in mind).
Notice that for self.weight, we actually give the a a value of math.sqrt(5) instead of the math.sqrt(fan_in) , this is explained in this GitHub issue of PyTorch repo for whom might be interested.
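For reference, the uniform bound that kaiming_uniform_ uses can be computed by hand: the gain for the default leaky_relu nonlinearity is sqrt(2 / (1 + a²)), and the weights are drawn from U(−bound, bound) with bound = gain · sqrt(3 / fan_in). With a = sqrt(5), the bound conveniently collapses to 1/sqrt(fan_in) — the same bound used for the bias above. A quick sketch in pure Python (mirroring, not calling, the PyTorch internals):

```python
import math

def kaiming_uniform_bound(fan_in, a):
    # gain for leaky_relu, the default nonlinearity for kaiming_uniform_
    gain = math.sqrt(2.0 / (1.0 + a ** 2))
    # weights are drawn from U(-bound, bound)
    return gain * math.sqrt(3.0 / fan_in)

# with a = sqrt(5), the bound simplifies to 1 / sqrt(fan_in)
print(kaiming_uniform_bound(784, math.sqrt(5)))  # 1/28 ≈ 0.0357
```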
Also, we can add some extra_repr string to the model:
def extra_repr(self):
    return 'in_features={}, out_features={}, bias={}'.format(
        self.in_features, self.out_features, self.bias is not None
    )
The final model looks like this:
class myLinear(nn.Module):
    def __init__(self, in_features, out_features, bias=True):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.bias = bias
        self.weight = torch.nn.Parameter(torch.Tensor(out_features, in_features))
        if bias:
            self.bias = torch.nn.Parameter(torch.Tensor(out_features))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        torch.nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = torch.nn.init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in)
            torch.nn.init.uniform_(self.bias, -bound, bound)

    def forward(self, input):
        x, y = input.shape
        if y != self.in_features:
            print(f'Wrong Input Features. Please use tensor with {self.in_features} Input Features')
            return 0
        output = input.matmul(self.weight.t())
        if self.bias is not None:
            output += self.bias
        ret = output
        return ret

    def extra_repr(self):
        return 'in_features={}, out_features={}, bias={}'.format(
            self.in_features, self.out_features, self.bias is not None
        )
Rerun the code, you should be able to see this plot:
We can see it converges much faster to a 0.5 loss in one epoch.
I hope this helps you clear the cloud on these PyTorchnn.modules a bit. It might seem boring and redundant, but sometimes the fastest( and shortest) way is the ‘boring’ way. Once you get to the very bottom of this, the feeling of knowing that there’s nothing ‘more’ is priceless. You’ll come to the realization that:
Underneath PyTorch, there’s no trick, no myth, no catch, just rock-solid Python code.
Also by writing your own code, then compare it with official source code, you’ll be able to see where the difference is and learn from the best in the industry. How cool is that?
Found this article useful? Follow me (Michael Li) on Medium or you can find me on Twitter @lymenlee or my blog site wayofnumbers.com. You could also check out my most popular articles below! | [
{
"code": null,
"e": 557,
"s": 171,
"text": "This is actually an assignment from Jeremy Howard’s fast.ai course, lesson 5. I’ve showcased how easy it is to build a Convolutional Neural Networks from scratch using PyTorch. Today, let’s try to delve down even deeper and see if we could write our own n... |
Design a BMI Calculator using JavaScript - GeeksforGeeks | 11 Feb, 2021
The Body Mass Index (BMI) Calculator can be used to calculate BMI values based on their height and weight. BMI is a fairly reliable indicator of body fatness for most people.
Formula:
BMI = weight / (height * height), with weight in kilograms and height in meters
Approach: BMI is a number calculated from an individual’s weight and height. To find out BMI we will take input from the user (both height and weight) which will be stored in height and weight variable for further calculation. The calculation process is simple, we will simply divide weight in kilograms by the square of the height. Now as per the BMI calculated, it will execute the respective if-else statement. We are also checking if the user is pressing submit button without entering the inputs, in that case, we are printing provide height or provide weight.Using HTML we are giving desired structure, option for the input, and submit button. With the help of CSS, we are beautifying our structure by giving colors and desired font, etc.
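Stripped of the DOM code, the core calculation can be sketched on its own (a hypothetical Python helper mirroring the article's JavaScript logic, with height taken in centimeters as the form collects it):

```python
def bmi(weight_kg, height_cm):
    # the form collects height in cm, so convert to meters first
    height_m = height_cm / 100
    value = round(weight_kg / (height_m ** 2), 2)
    # same thresholds as the JavaScript version
    if value < 18.6:
        return value, "Under Weight"
    elif value < 24.9:
        return value, "Normal"
    else:
        return value, "Over Weight"

print(bmi(70, 175))  # (22.86, 'Normal')
```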
In the JavaScript section, we are processing the taken input and after calculating, the respective output is printed.
Example:
HTML:
index.html
<!DOCTYPE html>
<html>

<head>
    <!-- Include JS files -->
    <script src="app.js"></script>
</head>

<body>
    <div class="container">
        <h1>BMI Calculator</h1>

        <!-- Option for providing height and weight to the user-->
        <p>Height (in cm)</p>
        <input type="text" id="height">

        <p>Weight (in kg)</p>
        <input type="text" id="weight">

        <button id="btn">Calculate</button>

        <div id="result"></div>
    </div>
</body>

</html>
JavaScript:
app.js
window.onload = () => {
    let button = document.querySelector("#btn");

    // Function for calculating BMI
    button.addEventListener("click", calculateBMI);
};

function calculateBMI() {

    /* Getting input from user into height variable.
       Input is string so typecasting is necessary. */
    let height = parseInt(document
        .querySelector("#height").value);

    /* Getting input from user into weight variable.
       Input is string so typecasting is necessary. */
    let weight = parseInt(document
        .querySelector("#weight").value);

    let result = document.querySelector("#result");

    // Checking the user providing a proper
    // value or not
    if (height === "" || isNaN(height))
        result.innerHTML = "Provide a valid Height!";

    else if (weight === "" || isNaN(weight))
        result.innerHTML = "Provide a valid Weight!";

    // If both input is valid, calculate the bmi
    else {

        // Fixing upto 2 decimal places
        let bmi = (weight
            / ((height * height) / 10000)).toFixed(2);

        // Dividing as per the bmi conditions
        if (bmi < 18.6)
            result.innerHTML =
                `Under Weight : <span>${bmi}</span>`;

        else if (bmi >= 18.6 && bmi < 24.9)
            result.innerHTML =
                `Normal : <span>${bmi}</span>`;

        else
            result.innerHTML =
                `Over Weight : <span>${bmi}</span>`;
    }
}
Output:
{
"code": null,
"e": 24439,
"s": 24411,
"text": "\n11 Feb, 2021"
},
{
"code": null,
"e": 24614,
"s": 24439,
"text": "The Body Mass Index (BMI) Calculator can be used to calculate BMI values based on their height and weight. BMI is a fairly reliable indicator of body fatness for m... |
Python program to calculate square of a given number - GeeksforGeeks | 12 Oct, 2021
Given a number, the task is to write a Python program to calculate square of the given number.
Examples:
Input: 4
Output: 16
Input: 3
Output: 9
Input: 10
Output: 100
We will provide the number, and we will get the square of that number as output. We have three ways to do so:
Multiplying the number to get the square (N * N)
Using Exponent Operator
Using math.pow() Method
Method 1: Multiplication
In this approach, we will multiply the number by itself to get the square of the number.
Example:
Python3
# Declaring the number.
n = 4
# Finding square by multiplying them
# with each other
square = n * n
# Printing square
print(square)
Output:
16
Method 2: Using Exponent Operator
In this approach, we use the exponent operator to find the square of the number.
Exponent Operator: **
Return: a ** b will return a raised to power b as output.
Example:
Python3
# Declaring the number.
n = 4
# Finding square by using exponent operator
# This will yield n power 2 i.e. square of n
square = n ** 2
# Printing square
print(square)
Output:
16
Method 3: Using pow() Method
In this approach, we will use the math.pow() method to find the square of the number. This function computes x**y and always returns a float value as output.
Syntax: float pow(x,y)
Parameters :
x : Number whose power has to be calculated.
y : Value raised to compute power.
Return Value : Returns the value x**y in float.
Example:
Python3
# Importing the math module for math.pow
import math

# Declaring the number.
n = 4

# Finding square by using math.pow
# This will yield n power 2 i.e. square of n
square = math.pow(n, 2)

# Printing square
print(square)
Output:
16.0
snehashishghosh21
{
"code": null,
"e": 24312,
"s": 24281,
"text": " \n12 Oct, 2021\n"
},
{
"code": null,
"e": 24407,
"s": 24312,
"text": "Given a number, the task is to write a Python program to calculate square of the given number."
},
{
"code": null,
"e": 24417,
"s": 24407,
"... |
Find maximum element of ArrayList with Java Collections | In order to compute the maximum element of an ArrayList with Java Collections, we use the Collections.max() method. The java.util.Collections.max() method returns the maximum element of the given collection. All elements must be mutually comparable and implement the Comparable interface; comparing any two elements must not throw a ClassCastException.
Declaration −The Collections.max() method is declared as follows −
public static <T extends Object & Comparable<? super T>> T max(Collection<? extends T> coll)
where coll is the collection object whose maximum element is to be found.
Let us see a program to find the maximum element of ArrayList with Java collections −
Live Demo
import java.util.*;
public class Example {
public static void main (String[] args) {
List<Integer> list = new ArrayList<Integer>();
try {
list.add(14);
list.add(2);
list.add(73);
System.out.println("Maximum element : " + Collections.max(list));
}
catch (ClassCastException | NoSuchElementException e) {
System.out.println("Exception caught : " + e);
}
}
}
Maximum element : 73 | [
{
"code": null,
"e": 1380,
"s": 1062,
"text": "In order to compute maximum element of ArrayList with Java Collections, we use the Collections.max() method. The java.util.Collections.max() returns the maximum element of the given collection. All elements must be mutually comparable and implement the ... |
Forecasting with Python and Tableau | by Greg Rafferty | Towards Data Science | Update: I’ve written a book about Facebook Prophet which has been published by Packt Publishing! The book is available for purchase on Amazon.
The book covers every detail of using Prophet starting with installation through model evaluation and tuning. Over a dozen datasets have been made available and used to demonstrate Prophet functionality from the simple to the advanced with fully working code. If you enjoy this Medium post, please consider ordering it here: https://amzn.to/373oIcf! At more than 250 pages, it covers far more material than can be taught on Medium!
Thank you so much for supporting my book!
I’m Greg Rafferty, a data scientist in the Bay Area. You can check out the code for this project on my github. Feel free to contact me with any questions!
In this post, I’ll show how I used Python code within Tableau to build an interactive dashboard implementing a time-series forecast. If you just want to play around with the dashboard first and explore the SARIMAX algorithm, download the full python-implemented dashboard here or go to this slightly dumbed-down version on Tableau Public (Tableau Public wisely but disappointingly disallows external scripts to be uploaded, so I had to fake the Python scripts with hard-coded data sets).
Last year, Tableau released version 10.2 which included integration with Python. It’s not super straightforward how to use it though, so I thought I’d figure it out when a client asked for a time-series forecast dashboard. ARIMA models are not built into Tableau (Tableau’s Forecast module uses exponential smoothing), and in this particular case I really needed to use the higher predictive capability of the ARIMA algorithm, so TabPy seemed to be my only option.
I won’t go into too much detail on how to install TabPy (hint: pip install tabpy-server) and no detail at all on how to install Python (I figure, if you want to run Python code inside Tableau, you probably already know how to use Python. If not, start here.)
Once you have installed the TabPy distribution you will need to navigate to the source code contained in /site-packages and go subsequently into the tabpy-server directory (in my case, on MacOS with Anaconda 3 installed in the default location, /anaconda3/lib/python3.7/site-packages/tabpy_server). From there run sh startup.sh or python tabpy.py to start up a server. You need to instruct Tableau to constantly sniff port 9004, which is how Tableau and Python communicate. To do this, from within Tableau,
Go to Help → Settings and Performance → Manage External Service Connection...
Enter the Server (localhost if running TabPy on the same computer) and the Port (default is 9004).
If you’ve had any trouble so far, try this tutorial.
ARIMA stands for Auto-Regressive Integrated Moving Average. In this tutorial, I’m using its more advanced sibling, SARIMAX (Seasonal Auto-Regressive Integrated Moving Average with eXogenous regressors). OK, so what is that?
Let’s start with Regression. You know what that is, right? Basically, given a set of points, it calculates the line which will best explain the pattern.
Next up is an ARMA model (Auto-Regressive Moving Average). The auto-regressive part means that it’s a regression model which predicts upcoming values based upon previous values. It’s similar to saying that it will be warm tomorrow because it’s been warm the previous three days. (Side note: This is why time-series models are so much more complicated than standard regression. The data points are not independent from each other!). The moving average part is not actually a moving average at all (don’t ask me why, I have no idea). It simply means that the regression error can be modeled as a linear combination of errors.
If an ARMA model doesn’t quite fit your data, you might next try ARIMA. The additional I stands for Integrated. This term accounts for differences between previous values. Intuitively, this means that tomorrow is likely to be the same temperature as today because the past week hasn’t varied too much.
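The effect of the I term can be seen with a toy example: a hypothetical series with a steady upward trend becomes constant after one round of differencing (a minimal sketch, not from the air passengers data):

```python
# A hypothetical series with a steady upward trend
series = [10 + 3 * t for t in range(6)]        # [10, 13, 16, 19, 22, 25]

# First-order differencing (the d=1 in ARIMA): subtract each value
# from the one that follows it
diffs = [b - a for a, b in zip(series, series[1:])]

print(diffs)  # the trend disappears: [3, 3, 3, 3, 3]
```

A non-zero d lets the model work with these differences, which are often much closer to stationary than the raw series.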
Finally, we move on to SARIMAX. The S stands for Seasonal — it helps to model recurring patterns. These seasonal patterns don’t necessarily have to occur annually per se; for instance, if we were modeling metro traffic in a busy urban area, the patterns would repeat on a weekly scale. And the X is for eXogenous (Sorry. I didn’t come up with this). This term allows for external variables to be added to the model, such as weather forecasts (my model in this tutorial doesn’t add any exogenous variables though).
A SARIMAX model takes the form of SARIMAX(p, d, q) x (P, D, Q)m, where p is the AR term, d is the I term, and q is the MA term. The capital P, D, and Q are the same terms but related to the seasonal component. The lowercase m is the number of seasonal periods before the pattern repeats (so, if you’re working with monthly data, like in this tutorial, m will be 12). When implemented, these parameters will all be integers, and smaller numbers are usually better (ie, less complex). For my model, the parameters I’ve chosen which best fit the model are SARIMAX(2, 1, 2) x (0, 2, 2)12.
I performed a grid search to arrive at these terms. The error I sought to minimize is the Akaike information criterion (AIC). The AIC is a measure of how well the model fits the data, while penalizing complexity. In the Tableau dashboards, I report Mean Squared Error because that is much more intuitive.
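The AIC itself is a simple formula, AIC = 2k − 2·ln(L), where k is the number of estimated parameters and L is the maximized likelihood; lower is better. A minimal sketch with hypothetical numbers (not taken from the fitted model):

```python
def aic(log_likelihood, n_params):
    # AIC = 2k - 2*ln(L); the 2k term penalizes model complexity
    return 2 * n_params - 2 * log_likelihood

# Of two hypothetical fits with the same likelihood, the simpler one wins
print(aic(-350.0, 6))   # 712.0
print(aic(-350.0, 10))  # 720.0
```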
With that out of the way, let’s take a look at implementing this in Tableau!
If you want to follow along, you can download the packaged Tableau workbook on my github. I’m using the Air Passengers dataset (https://www.kaggle.com/rakannimer/air-passengers) which contains monthly data on the number of airline passengers from 1949–1961. Let’s take a look at what’s going on:
import warnings
import itertools
import pandas as pd
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
/anaconda3/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
from pandas.core import datetools
df = pd.read_csv('AirPassengers.csv')
df['Month'] = pd.to_datetime(df['Month'])
df.head()
y = pd.Series(data=df['#Passengers'].values, index=df['Month'])
y.head()
Month
1949-01-01 112
1949-02-01 118
1949-03-01 132
1949-04-01 129
1949-05-01 121
dtype: int64
y.plot(figsize=(15, 6))
plt.show()
# Define the p, d and q parameters to take any value between 0 and 3
p = d = q = range(0, 3)
# Generate all different combinations of p, d and q triplets
pdq = list(itertools.product(p, d, q))
# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
warnings.filterwarnings("ignore") # specify to ignore warning messages
best_result = [0, 0, 10000000]
for param in pdq:
    for param_seasonal in seasonal_pdq:
        try:
            mod = sm.tsa.statespace.SARIMAX(y,
                                            order=param,
                                            seasonal_order=param_seasonal,
                                            enforce_stationarity=False,
                                            enforce_invertibility=False)
            results = mod.fit()
            # print('ARIMA{} x {} - AIC: {}'.format(param, param_seasonal, results.aic))
            if results.aic < best_result[2]:
                best_result = [param, param_seasonal, results.aic]
        except:
            continue
print('\nBest Result:', best_result)
Best Result: [(2, 1, 2), (0, 2, 2, 12), 715.1568901618965]
mod = sm.tsa.statespace.SARIMAX(y,
                                order=(best_result[0][0], best_result[0][1], best_result[0][2]),
                                seasonal_order=(best_result[1][0], best_result[1][1], best_result[1][2], best_result[1][3]),
                                enforce_stationarity=False,
                                enforce_invertibility=False)
results = mod.fit()
print(results.summary().tables[1])
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1856 0.195 -6.093 0.000 -1.567 -0.804
ar.L2 -0.4353 0.099 -4.411 0.000 -0.629 -0.242
ma.L1 0.7787 0.204 3.812 0.000 0.378 1.179
ma.S.L12 -1.4145 0.227 -6.239 0.000 -1.859 -0.970
ma.S.L24 0.4839 0.143 3.381 0.001 0.203 0.764
sigma2 118.5128 28.037 4.227 0.000 63.561 173.465
==============================================================================
results.plot_diagnostics(figsize=(15, 12))
plt.show()
pred = results.get_prediction(start=pd.to_datetime('1958-01-01'), dynamic=False)
pred_ci = pred.conf_int()
ax = y['1949':].plot(label='Observed', figsize=(15, 12))
pred.predicted_mean.plot(ax=ax, label='One-Step Ahead Forecast', alpha=.7)
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='k', alpha=.25)
ax.fill_betweenx(ax.get_ylim(), pd.to_datetime('1958-01-01'), y.index[-1],
alpha=.1, zorder=-1)
ax.set_xlabel('Date')
ax.set_ylabel('Airline Passengers')
plt.legend()
plt.show()
# Extract the predicted and true values of our time series
y_forecasted = pred.predicted_mean
y_truth = y['1958-01-01':]
# Compute the mean square error
mse = ((y_forecasted - y_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
The Mean Squared Error of our forecasts is 194.81
pred_dynamic = results.get_prediction(start=pd.to_datetime('1958-01-01'), dynamic=True, full_results=True)
pred_dynamic_ci = pred_dynamic.conf_int()
ax = y['1949':].plot(label='Observed', figsize=(15, 12))
pred_dynamic.predicted_mean.plot(label='Dynamic Forecast', ax=ax)
ax.fill_between(pred_dynamic_ci.index,
pred_dynamic_ci.iloc[:, 0],
pred_dynamic_ci.iloc[:, 1], color='k', alpha=.25)
ax.fill_betweenx(ax.get_ylim(), pd.to_datetime('1958-01-01'), y.index[-1],
alpha=.1, zorder=-1)
ax.set_xlabel('Date')
ax.set_ylabel('Airline Passengers')
plt.legend()
plt.show()
# Extract the predicted and true values of our time series
y_forecasted = pred_dynamic.predicted_mean
y_truth = y['1958-01-01':]
# Compute the mean square error
mse = ((y_forecasted - y_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
The Mean Squared Error of our forecasts is 551.55
# Get forecast 24 steps ahead in future
pred_uc = results.get_forecast(steps=24)
# Get confidence intervals of forecasts
pred_ci = pred_uc.conf_int()
ax = y.plot(label='Observed', figsize=(15, 12))
pred_uc.predicted_mean.plot(ax=ax, label='Forecast')
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='k', alpha=.25)
ax.set_xlabel('Date')
ax.set_ylabel('Airline Passengers')
plt.legend()
plt.show()
I wanted my dashboard to be fully interactive, so I could change all the p’s, d’s, and q’s while observing their effect on the model. So first (and by ‘first’, I mean ‘second, after you’ve connected to the Air Passengers dataset linked to above’), let’s create parameters for these variables.
You will need to create 8 parameters: AR (Time Lag), I (Seasonality), MA (Moving Average), Months Forecast, Period, Seasonal AR (Time Lag), Seasonal I (Seasonality), and Seasonality MA (Moving Average). Make sure all data types are Integer, or else Python will throw some errors later (and TabPy very unhelpfully declines to provide you with a line number for errors). For Months Forecast and Period, I used a Range for Allowable values, from 1 to 48 (for Months Forecast) and 1 to 24 (for Period). We’ll next need to create a calculated field called Forecast date so that Tableau will extend the x-axis to include forecasted values. In the calculated field, input:
DATE(DATETRUNC('month', DATEADD('month', [Months Forecast], [Month])))
We’ll also create a Number of Passengers calculated field to ensure that our SARIMAX data lines up with actual data:
LOOKUP(SUM([#Passengers]), [Months Forecast])
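What LOOKUP does here is fetch the value of the expression a fixed number of rows away from the current row, returning null once the offset runs off the table. A rough plain-Python analogue of that offset behavior (the names are my own, for illustration only):

```python
def lookup(values, index, offset):
    """Mimic Tableau's LOOKUP(expr, offset): fetch the value
    `offset` rows away, or None when it falls off the table."""
    j = index + offset
    return values[j] if 0 <= j < len(values) else None

passengers = [112, 118, 132, 129, 121]
print(lookup(passengers, 0, 2))  # value two rows ahead -> 132
print(lookup(passengers, 4, 2))  # past the end -> None
```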
Finally, one more calculated field called Past vs Future which we’ll use later to format the forecast as a different color:
IF LAST() < [Months Forecast]
THEN 'Model Forecast'
ELSE 'Model Prediction'
END
OK, finally! On to the Python. Let’s create our first script. Create a calculated field and name it Forecast. In the field, paste the following code:
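(The embedded gist containing the field's script did not survive in this copy of the article, so I can only sketch its general shape. A TabPy calculated field wraps Python in SCRIPT_REAL, with the aggregated fields and parameters passed in and referenced inside the script as _arg1, _arg2, and so on. Everything below — the argument order and the body — is my own illustrative guess, not the article's actual script:)

```
SCRIPT_REAL("
import pandas as pd
import statsmodels.api as sm

# _arg1 is the passenger series; the remaining _argN values are the
# SARIMAX parameters created above (AR, I, MA, seasonal terms, Period,
# Months Forecast) -- this ordering is an assumption
...
",
SUM([#Passengers]), [AR (Time Lag)], [I (Seasonality)], [MA (Moving Average)], ...)
```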
We’ll also create a calculated field called Mean Squared Error, so that we can have a fancy-pants dynamic title on our chart:
Now, drag Forecast date to the columns shelf, and Number of Passengers and Forecast to the rows shelf. Make it a dual-axis chart and synchronize the axes. Put Past vs Future on the color mark of the Forecast card and you’re done! That was easy, wasn’t it? (Nope, it actually wasn’t. It took me an embarrassingly long amount of time to figure out how to make those scripts work and Tableau’s supremely unhelpful habit of not showing where errors are occurring makes trouble-shooting an exercise in frustration). To spice things up a bit, add those parameters to the side bar so you can change them interactively. Oh! And I almost forgot about that snazzy title. Right click on the title → Edit Title... → and type in this code:
Below is what the final dashboard should look like. You’ll have some formatting tasks to take care of if you want it to look identical to mine, or alternately you could just download my dashboard here and be done with it. Tableau Public won’t allow any external scripts to be uploaded so sadly that means I can’t share this exact version as shown below. Instead, I just ran through a couple hundred permutations of the SARIMAX parameters and saved each forecast to a csv, and that version, though not as pretty or as programmatically cool, can be toyed with directly on Tableau Public here.
As you can see, the model is pretty good! That's the power of SARIMAX: it accurately models the general increasing trend over time, the up-and-down of the seasons throughout the year, and even the increasing variance of data (the increasing distance between the highs of the peaks and the lows of the troughs) as time goes by. You do need to be careful picking your parameters, some pretty interesting stuff can happen if you're not. My worst performing model for instance, is this (look at that error magnitude!):
JavaScript String - sup() Method | This method causes a string to be displayed as a superscript, as if it were in a <sup> tag.
Its syntax is as follows −
string.sup( )
Returns the string with <sup> tag.
Try the following example.
<html>
   <head>
      <title>JavaScript String sup() Method</title>
   </head>
   <body>
      <script type = "text/javascript">
         var str = new String("Hello world");
         alert(str.sup());
      </script>
   </body>
</html>
<sup>Hello world</sup>
TypeScript - String substring() | This method returns a subset of a String object.
string.substring(indexA, [indexB])
indexA − An integer between 0 and one less than the length of the string.
indexB − (optional) An integer between 0 and the length of the string.
The substring method returns the new sub-string based on given parameters.
var str = "Apples are round, and apples are juicy.";
console.log("(1,2): " + str.substring(1,2));
console.log("(0,10): " + str.substring(0, 10));
console.log("(5): " + str.substring(5));
On compiling, it will generate the same code in JavaScript.
Its output is as follows −
(1,2): p
(0,10): Apples are
(5): s are round, and apples are juicy.
Intro to Stanford’s CoreNLP for Pythoners | by Laura Bravo Priegue | Towards Data Science | Hello there! I’m back and I want this to be the first of a series of post on Stanford’s CoreNLP library. In this article I will focus on the installation of the library and an introduction to its basic features for Java newbies like myself. I will firstly go through the installation steps and a couple of tests from the command line. I will later walk you through a two very simple Java scripts that you will be able to easily incorporate into your Python NLP pipeline. You can find the complete code on github!
CoreNLP is a toolkit with which you can generate a quite complete NLP pipeline with only a few lines of code. The library includes pre-built methods for all the main NLP procedures, such as Part of Speech (POS) tagging, Named Entity Recognition (NER), Dependency Parsing or Sentiment Analysis. It also supports other languages apart from English, more specifically Arabic, Chinese, German, French, and Spanish.
I am a big fan of the library, mainly because of HOW COOL its Sentiment Analysis model is ❤ (I will talk more about it in the next post). However, I can see why most people would rather use other libraries like NLTK or SpaCy, as CoreNLP can be a bit of an overkill. The reality is that coreNLP can be much more computationally expensive than other libraries, and for shallow NLP processes the results are not even significantly better. Plus it’s written in Java, and getting started with it is a bit of a pain for Python users (however it is doable, as you will see below, and it also has a Python API if you can’t be bothered).
CoreNLP Pipeline and Basic Annotators
The basic building block of coreNLP is the coreNLP pipeline. The pipeline takes an input text, processes it and outputs the results of this processing in the form of a coreDocument object. A coreNLP pipeline can be customised and adapted to the needs of your NLP project. The properties objects allow to do this customization by adding, removing or editing annotators.
That was a lot of jargon, so let’s break it down with an example. All the information and figures were extracted from the official coreNLP page.
In the figure above we have a basic coreNLP Pipeline, the one that is ran by default when you first run the coreNLP Pipeline class without changing anything. At the very left we have the input text entering the pipeline, this will usually be a plain .txt file. The pipeline itself is composed by 6 annotators. Each of these annotators will process the input text sequentially, the intermediate outputs of the processing sometimes being used as inputs by some other annotator. If we wanted to change this pipeline by adding or removing annotators, we would use the properties object. The final output is a set of annotations in the form of a coreDocument object.
We will be working with this basic pipeline throughout the article. The nature of the objects will be more clear later on when we look at an example. For the moment let’s note down what each of the annotator does:
Annotator 1: Tokenization → turns raw text into tokens.
Annotator 2: Sentence Splitting → divides raw text into sentences.
Annotator 3: Part of Speech (POS) Tagging → assigns part of speech labels to tokens, such as whether they are verbs or nouns. Each token in the text will be given a tag.
Annotator 4: Lemmatization → converts every word into its lemma, its dictionary form. For example the word “was” is mapped to “be”.
Annotator 5: Named Entity Recognition (NER) → Recognises when an entity (a person, country, organization etc...) is named in a text. It also recognises numerical entities such as dates.
Annotator 6: Dependency Parsing → Will parse the text and highlight dependencies between words.
Lastly, all the outputs from the 6 annotators are organised into a CoreDocument. These are basically data objects that contain annotation information in a structured way. CoreDocuments make our lives easier since, as you will see later on, they store all the information so that we can access it with a simple API.
Installation
You will need to have Java installed. You can download the latest version here. For downloading CoreNLP I followed the official guide:
Downloading the CoreNLP zip file using curl or wget
curl -O -L http://nlp.stanford.edu/software/stanford-corenlp-latest.zip
2. Unzip the file
unzip stanford-corenlp-latest.zip
3. Move into the newly created directory
cd stanford-corenlp-4.1.0
Let’s now go through a couple of examples to make sure everything works.
Example using the command line and an input.txt file
For this example, firstly we will open the terminal and create a test file that we will use as input. The code was adapted from coreNLP’s official site. You can use the following command:
echo "the quick brown fox jumped over the lazy dog" > test.txt
echo prints the sentence "the quick brown fox jumped over the lazy dog", and the > redirects it into the test.txt file.
Let’s now run a default coreNLP pipeline on the test sentence.
java -cp "*" -mx3g edu.stanford.nlp.pipeline.StanfordCoreNLP -outputFormat xml -file test.txt
This is a java command that loads and runs the coreNLP pipeline from the class edu.stanford.nlp.pipeline.StanfordCoreNLP. Since we have not changed anything from that class, the settings will be set to default. The pipeline will use as input the test.txt file and will output an XML file.
Once you run the command the pipeline will start annotating the text. You will notice it takes a while... (around 20 seconds for a 9-word-sentence 🙄). The output will be a file named test.txt.xml. This process will also automatically generate as a side product an XSLT stylesheet (CoreNLP-to-HTML.xsl), which will convert the XML into HTML if you open it in a browser.
Seems that everything is working fine!! We see the standard pipeline is actually quite complex. It included all the annotators we saw in the section above: tokenization, sentence splitting, lemattization, POS, NER tagging and dependency parsing.
Note: I displayed it using Firefox, however it took me ages to figure out how to do this because apparently in 2019 Firefox stopped allowing this. One can get around this by going to the about:config page and changing the privacy.file_unique_origin setting to False. If it doesn't work for you, you can choose json as the outputFormat or open the XML file with a text editor.
Example using the interactive shell mode
For our second example you will also use exclusively the terminal. CoreNLP has an cool interactive shell mode that you can enter by running the following command.
java -cp "*" -mx3g edu.stanford.nlp.pipeline.StanfordCoreNLP
Once you enter this interactive mode, you just have to type a sentence or group of sentences and they will be processed by the basic annotators on the fly! Below you can see an example of how the sentence “Hello my name is Laura” is analysed.
We can see the same annotations we saw in the XML file printed in the Terminal in a different format! You can also try it out with longer texts.
Example using very simple Java code
Now let’s go through a couple of Java code examples! We will basically create and tune the pipeline using Java, and then we will output the results onto a .txt file that then can be incorporated into our Python or R NLP pipeline. The code was adapted from coreNLP’s official site.
Example 1
Find the complete code in my github. I will firstly run you through the coreNLP_pipeline1_LBP.java file. We start the file importing all the needed dependencies. Then we make up an example of text that we will use for our analysis. You can change this to any other example:
public static String text = "Marie was born in Paris.";
Now we set up the pipeline, we create a document and annotate it using the following lines:
// set up pipeline properties
Properties props = new Properties();
// set the list of annotators to run
props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,depparse");
// build pipeline
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
// create a document object and annotate it
CoreDocument document = pipeline.processToCoreDocument(text);
pipeline.annotate(document);
The rest of the lines of the file will print out on the terminal several tests to make sure the pipeline worked fine. For instance, we firstly get the list of sentences of the input document.
// get sentences of the document
List<CoreSentence> sentences = document.sentences();
System.out.println("Sentences of the document");
System.out.println(sentences);
System.out.println();
Notice that we get the list of sentences using the method .sentences() on the document object. Similarly, we get the list of tokens of a sentence using the method .tokens() on the object sentence and the individual word and lemma using the methods .word() and .lemma() on the object tok.
List<CoreLabel> tokens = sentence.tokens();
System.out.println("Tokens of the sentence:");
for (CoreLabel tok : tokens) {
    System.out.println("Token: " + tok.word());
    System.out.println("Lemma: " + tok.lemma());
}
For running the file you only need to save it on your stanford-corenlp-4.1.0 directory and use the command
java -cp "*" coreNLP_pipeline1_LBP.java
The results should look like:
Example 2
The second example coreNLP_pipeline2_LBP.java is slightly different, since it reads a file coreNLP_input.txt as input document and outputs the results onto a coreNLP_output.txt file.
We used as the input text the short story of The Fox and the Grapes. It is a document with 2 paragraphs and 6 sentences. The processing will be similar to the one in the example above, except this time we will also keep track of the paragraph and sentence number.
The biggest changes will be regarding reading the input and writing the final output. This bit of code below will create the output file (if it doesn’t exist yet) and print the column names using PrintWriter...
File file = new File("coreNLP_output.txt");
// create the file if it doesn't exist
if (!file.exists()) {
    file.createNewFile();
}
PrintWriter out = new PrintWriter(file);
// print column names on the output document
out.println("par_id;sent_id;words;lemmas;posTags;nerTags;depParse");
...and this other bit will read the input document using Scanner. The input document will be saved as a String text that we will be able to use as the one in Example 1.
Scanner myReader = new Scanner(myObj);
while (myReader.hasNextLine()) {
    String text = myReader.nextLine();
Once the file coreNLP_pipeline2_LBP.java is ran and the output generated, one can open it as a dataframe using the following python code:
df = pd.read_csv('coreNLP_output.txt', delimiter=';',header=0)
The resulting dataframe will look like this, and can be used for further analysis!
Conclusions
As you have seen, coreNLP can be very easy to use and easily incorporated into a Python NLP pipeline! You could also print the output directly onto a .csv file and use other delimiters, but I was having some annoying parsing problems... Hope you enjoyed the post anyways, and remember the complete code is available on github.
In the following post we will start talking about the Recursive Sentiment Analysis model and how to use it with coreNLP and Java. Keep posted to learn more about coreNLP ✌🏻
R - For Loop | A For loop is a repetition control structure that allows you to efficiently write a loop that needs to execute a specific number of times.
The basic syntax for creating a for loop statement in R is −
for (value in vector) {
statements
}
R’s for loops are particularly flexible in that they are not limited to integers, or even numbers in the input. We can pass character vectors, logical vectors, lists or expressions.
v <- LETTERS[1:4]
for ( i in v) {
print(i)
}
When the above code is compiled and executed, it produces the following result −
[1] "A"
[1] "B"
[1] "C"
[1] "D"
Find Multiples of 2 or 3 or 5 less than or equal to N - GeeksforGeeks | 23 Mar, 2021
Given an integer N. The task is to count all numbers less than or equal to N which are divisible by any of 2, 3 or 5. Note: If a number less than or equal to N is divisible by more than one of 2, 3 and 5, it should still be counted only once. Examples:
Input : N = 5
Output : 4
Input : N = 10
Output : 8
Simple Approach: A simple approach is to traverse from 1 to N and count the multiples of 2, 3 or 5 that are less than or equal to N. To do this, iterate up to N and check whether each number is divisible by 2, 3 or 5; if it is, increment the counter, and after reaching N print the result. Time Complexity: O(N). Efficient Approach: An efficient approach is to use set theory, since we have to count the numbers divisible by 2 or 3 or 5 without double-counting.
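Before moving to the efficient version, the simple O(N) check described above can be sketched in a few lines of Python (my own translation of the idea, not code from the article):

```python
def count_multiples_simple(n: int) -> int:
    """Count the numbers in 1..n divisible by 2, 3 or 5 by direct check -- O(n)."""
    return sum(1 for i in range(1, n + 1)
               if i % 2 == 0 or i % 3 == 0 or i % 5 == 0)

print(count_multiples_simple(5))   # -> 4
print(count_multiples_simple(10))  # -> 8
```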
By the inclusion-exclusion principle, if a, b and c are the sets of multiples of 2, 3 and 5 respectively, then n(a ∪ b ∪ c) = n(a) + n(b) + n(c) − n(ab) − n(bc) − n(ac) + n(abc). So the task is to find n(a), n(b), n(c), n(ab), n(bc), n(ac) and n(abc). All these terms can be calculated using bit masking. In this problem we have taken three numbers (2, 3 and 5), so the bit mask needs 2^3 = 8 values to generate every combination of them. According to the union formula, all terms containing an odd number of (2, 3, 5) are added to the result and terms containing an even number of them are subtracted. Below is the implementation of the above approach:
C++
Java
Python 3
C#
PHP
Javascript
// CPP program to count number of multiples
// of 2 or 3 or 5 less than or equal to N
#include <bits/stdc++.h>
using namespace std;

// Function to count number of multiples
// of 2 or 3 or 5 less than or equal to N
int countMultiples(int n)
{
    // As we have to check divisibility
    // by three numbers, So we can implement
    // bit masking
    int multiple[] = { 2, 3, 5 };
    int count = 0, mask = pow(2, 3);
    for (int i = 1; i < mask; i++) {

        // we check whether jth bit
        // is set or not, if jth bit
        // is set, simply multiply
        // to prod
        int prod = 1;
        for (int j = 0; j < 3; j++) {

            // check for set bit
            if (i & 1 << j)
                prod = prod * multiple[j];
        }

        // check multiple of product
        if (__builtin_popcount(i) % 2 == 1)
            count = count + n / prod;
        else
            count = count - n / prod;
    }
    return count;
}

// Driver code
int main()
{
    int n = 10;
    cout << countMultiples(n) << endl;
    return 0;
}
// Java program to count number of multiples
// of 2 or 3 or 5 less than or equal to N
class GFG {

    static int count_setbits(int N)
    {
        int cnt = 0;
        while (N > 0) {
            cnt += (N & 1);
            N = N >> 1;
        }
        return cnt;
    }

    // Function to count number of multiples
    // of 2 or 3 or 5 less than or equal to N
    static int countMultiples(int n)
    {
        // As we have to check divisibility
        // by three numbers, So we can implement
        // bit masking
        int multiple[] = { 2, 3, 5 };
        int count = 0, mask = (int)Math.pow(2, 3);
        for (int i = 1; i < mask; i++) {

            // we check whether jth bit
            // is set or not, if jth bit
            // is set, simply multiply
            // to prod
            int prod = 1;
            for (int j = 0; j < 3; j++) {

                // check for set bit
                if ((i & 1 << j) > 0)
                    prod = prod * multiple[j];
            }

            // check multiple of product
            if (count_setbits(i) % 2 == 1)
                count = count + n / prod;
            else
                count = count - n / prod;
        }
        return count;
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 10;
        System.out.println(countMultiples(n));
    }
}
// this code is contributed by mits
# Python3 program to count number of multiples
# of 2 or 3 or 5 less than or equal to N

# Function to count number of multiples
# of 2 or 3 or 5 less than or equal to N
def countMultiples(n):

    # As we have to check divisibility
    # by three numbers, So we can implement
    # bit masking
    multiple = [2, 3, 5]
    count = 0
    mask = int(pow(2, 3))
    for i in range(1, mask):

        # we check whether jth bit
        # is set or not, if jth bit
        # is set, simply multiply
        # to prod
        prod = 1
        for j in range(3):

            # check for set bit
            if (i & (1 << j)):
                prod = prod * multiple[j]

        # check multiple of product
        if (bin(i).count('1') % 2 == 1):
            count = count + n // prod
        else:
            count = count - n // prod

    return count

# Driver code
if __name__ == '__main__':
    n = 10
    print(countMultiples(n))

# This code is contributed by ash264
// C# program to count number of multiples
// of 2 or 3 or 5 less than or equal to N
using System;

public class GFG {

    static int count_setbits(int N)
    {
        int cnt = 0;
        while (N > 0) {
            cnt += (N & 1);
            N = N >> 1;
        }
        return cnt;
    }

    // Function to count number of multiples
    // of 2 or 3 or 5 less than or equal to N
    static int countMultiples(int n)
    {
        // As we have to check divisibility
        // by three numbers, So we can implement
        // bit masking
        int[] multiple = { 2, 3, 5 };
        int count = 0, mask = (int)Math.Pow(2, 3);
        for (int i = 1; i < mask; i++) {

            // we check whether jth bit
            // is set or not, if jth bit
            // is set, simply multiply
            // to prod
            int prod = 1;
            for (int j = 0; j < 3; j++) {

                // check for set bit
                if ((i & 1 << j) > 0)
                    prod = prod * multiple[j];
            }

            // check multiple of product
            if (count_setbits(i) % 2 == 1)
                count = count + n / prod;
            else
                count = count - n / prod;
        }
        return count;
    }

    // Driver code
    static public void Main()
    {
        int n = 10;
        Console.WriteLine(countMultiples(n));
    }
}
// This code is contributed by ajit.
<?php
// PHP program to count number
// of multiples of 2 or 3 or 5
// less than or equal to N

// Bit count function
function popcount($value)
{
    $count = 0;
    while ($value) {
        $count += ($value & 1);
        $value = $value >> 1;
    }
    return $count;
}

// Function to count number of
// multiples of 2 or 3 or 5 less
// than or equal to N
function countMultiples($n)
{
    // As we have to check divisibility
    // by three numbers, So we can
    // implement bit masking
    $multiple = array(2, 3, 5);
    $count = 0;
    $mask = pow(2, 3);
    for ($i = 1; $i < $mask; $i++) {

        // we check whether jth bit
        // is set or not, if jth bit
        // is set, simply multiply
        // to prod
        $prod = 1;
        for ($j = 0; $j < 3; $j++) {

            // check for set bit
            if ($i & 1 << $j)
                $prod = $prod * $multiple[$j];
        }

        // check multiple of product
        if (popcount($i) % 2 == 1)
            $count = $count + (int)($n / $prod);
        else
            $count = $count - (int)($n / $prod);
    }
    return $count;
}

// Driver code
$n = 10;
echo countMultiples($n);

// This code is contributed by ash264
?>
<script>

// javascript program to count number of multiples
// of 2 or 3 or 5 less than or equal to N
function count_setbits(N)
{
    var cnt = 0;
    while (N > 0) {
        cnt += (N & 1);
        N = N >> 1;
    }
    return cnt;
}

// Function to count number of multiples
// of 2 or 3 or 5 less than or equal to N
function countMultiples(n)
{
    // As we have to check divisibility
    // by three numbers, So we can implement
    // bit masking
    var multiple = [ 2, 3, 5 ];
    var count = 0, mask = parseInt(Math.pow(2, 3));
    for (var i = 1; i < mask; i++) {

        // we check whether jth bit
        // is set or not, if jth bit
        // is set, simply multiply
        // to prod
        var prod = 1;
        for (var j = 0; j < 3; j++) {

            // check for set bit
            if ((i & 1 << j) > 0)
                prod = prod * multiple[j];
        }

        // check multiple of product
        if (count_setbits(i) % 2 == 1)
            count = count + parseInt(n / prod);
        else
            count = count - parseInt(n / prod);
    }
    return count;
}

// Driver code
var n = 10;
document.write(countMultiples(n));

// This code is contributed by 29AjayKumar

</script>
8
The Knight's tour problem | Backtracking-1 | [
{
"code": null,
"e": 24771,
"s": 24743,
"text": "\n23 Mar, 2021"
},
{
"code": null,
"e": 25046,
"s": 24771,
"text": "Given an integer . The task is to count all such numbers that are less than or equal to N which are divisible by any of 2 or 3 or 5.Note: If a number less than N i... |
Difference Between LinearLayout and RelativeLayout in Android - GeeksforGeeks | 07 Dec, 2021
LinearLayout is a type of view group which is responsible for holding views in it either Horizontally or vertically. It is a type of Layout where one can arrange groups either Horizontally or Vertically.
Example Diagram:
Syntax:
XML
<LinearLayout
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:orientation="either vertical or horizontal">

    <!--ImageView, TextView, ButtonView etc.-->

</LinearLayout>
RelativeLayout is a layout in which we can arrange views/widgets according to the position of other view/widgets. It is independent of horizontal and vertical view and we can arrange it according to one’s satisfaction.
Example Diagram:
Syntax:
XML
<RelativeLayout
    android:layout_width="wrap_content"
    android:layout_height="wrap_content">

    <!--ImageView, TextView, ButtonView etc with specified position-->

</RelativeLayout>
LinearLayout

The layout_weight attribute in a LinearLayout is used to give each widget or view a share of the remaining space, using the following attribute:

android:layout_weight = '1'

Giving every child the same weight (for example, 1 each) distributes the available space equally among them; a weight of 0 (the default) means a view takes up only the size its content needs.

Syntax:

<LinearLayout>
    <!--Views, widgets-->
</LinearLayout>

RelativeLayout

Syntax:

<RelativeLayout>
    <!--Views, Widgets-->
</RelativeLayout>
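To illustrate the weight behavior (a minimal sketch of my own, not from the original article): giving two children of a horizontal LinearLayout a width of 0dp and equal weights makes each take half of the row.

```xml
<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="horizontal">

    <!-- width 0dp + equal weights: each view gets half of the row -->
    <TextView
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:text="Left" />

    <TextView
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:text="Right" />

</LinearLayout>
```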
XML
<?xml version="1.0" encoding="utf-8"?><LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@color/white" android:orientation="horizontal"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerHorizontal="true" android:layout_marginLeft="35dp" android:layout_marginTop="20sp" android:layout_marginRight="10sp" android:layout_weight="0" android:background="#004d00" android:text=" Geeks" android:textColor="#ffffff" android:textSize="40sp" android:textStyle="bold" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="10dp" android:layout_marginTop="20sp" android:layout_marginRight="10sp" android:layout_weight="0" android:background="#f2f2f2" android:text="For" android:textColor="#004d00" android:textSize="40sp" android:textStyle="bold" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="10dp" android:layout_marginTop="20sp" android:layout_marginRight="10sp" android:layout_weight="0" android:background="#004d00" android:text="Geeks" android:textColor="@color/white" android:textSize="40sp" android:textStyle="bold" /> </LinearLayout>
Output:
XML
<?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@color/white"> <ImageView android:id="@+id/image_gfg" android:layout_width="100dp" android:layout_height="110dp" android:layout_marginLeft="10dp" android:layout_marginTop="10dp" android:layout_marginRight="10dp" android:scaleType="centerCrop" android:src="@drawable/gfg" /> <TextView android:id="@+id/gfg_text" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="10dp" android:layout_toRightOf="@id/image_gfg" android:paddingTop="5dp" android:text="Geeks For Geeks" android:textColor="#004d00" android:textSize="32sp" android:textStyle="bold" /> <TextView android:id="@+id/gfg_location" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_below="@id/gfg_text" android:layout_marginLeft="10dp" android:layout_toRightOf="@id/image_gfg" android:text="Noida,UttarPradesh" android:textColor="#00b300" android:textSize="25sp" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_below="@id/gfg_location" android:layout_marginLeft="10dp" android:layout_toRightOf="@id/image_gfg" android:text="Portal for CS Student" android:textColor="#009900" android:textSize="24sp" /> </RelativeLayout>
Output:
Python - Sets | Mathematically, a set is a collection of items not in any particular order. A Python set is similar to this mathematical definition, with the additional conditions below.
The elements in the set cannot be duplicates.
The elements in the set are immutable (cannot be modified), but the set as a whole is mutable.
There is no index attached to any element in a Python set, so sets do not support any indexing or slicing operation.
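The first two conditions can be verified directly in the interpreter; the short sketch below (added for illustration) shows duplicates being collapsed and a mutable element being rejected.

```python
# Duplicates are collapsed automatically when a set is built
s = set([1, 2, 2, 3, 3, 3])
print(s)   # duplicates removed: {1, 2, 3}

# Elements must be immutable (hashable); a list is not,
# so trying to put one in a set raises TypeError
try:
    bad = {1, [2, 3]}
except TypeError as err:
    print("Rejected:", err)
```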
Sets in Python are typically used for mathematical operations like union, intersection, difference and complement. We can create a set, access its elements and carry out these mathematical operations as shown below.
A set is created by using the set() function or placing all the elements within a pair of curly braces.
Days=set(["Mon","Tue","Wed","Thu","Fri","Sat","Sun"])
Months={"Jan","Feb","Mar"}
Dates={21,22,17}
print(Days)
print(Months)
print(Dates)
When the above code is executed, it produces the following result. Please note how the order of the elements has changed in the result.
set(['Wed', 'Sun', 'Fri', 'Tue', 'Mon', 'Thu', 'Sat'])
set(['Jan', 'Mar', 'Feb'])
set([17, 21, 22])
We cannot access individual values in a set. We can only access all the elements together as shown above. But we can also get a list of individual elements by looping through the set.
Days=set(["Mon","Tue","Wed","Thu","Fri","Sat","Sun"])
for d in Days:
print(d)
When the above code is executed, it produces the following result −
Wed
Sun
Fri
Tue
Mon
Thu
Sat
We can add elements to a set by using add() method. Again as discussed there is no specific index attached to the newly added element.
Days=set(["Mon","Tue","Wed","Thu","Fri","Sat"])
Days.add("Sun")
print(Days)
When the above code is executed, it produces the following result −
set(['Wed', 'Sun', 'Fri', 'Tue', 'Mon', 'Thu', 'Sat'])
We can remove elements from a set by using discard() method. Again as discussed there is no specific index attached to the newly added element.
Days=set(["Mon","Tue","Wed","Thu","Fri","Sat"])
Days.discard("Sun")
print(Days)
When the above code is executed, it produces the following result.
set(['Wed', 'Fri', 'Tue', 'Mon', 'Thu', 'Sat'])
The union operation on two sets produces a new set containing all the distinct elements from both the sets. In the below example the element “Wed” is present in both the sets.
DaysA = set(["Mon","Tue","Wed"])
DaysB = set(["Wed","Thu","Fri","Sat","Sun"])
AllDays = DaysA|DaysB
print(AllDays)
When the above code is executed, it produces the following result. Please note the result has only one “wed”.
set(['Wed', 'Fri', 'Tue', 'Mon', 'Thu', 'Sat'])
The intersection operation on two sets produces a new set containing only the common elements from both the sets. In the below example the element “Wed” is present in both the sets.
DaysA = set(["Mon","Tue","Wed"])
DaysB = set(["Wed","Thu","Fri","Sat","Sun"])
AllDays = DaysA & DaysB
print(AllDays)
When the above code is executed, it produces the following result. Please note the result has only one “wed”.
set(['Wed'])
The difference operation on two sets produces a new set containing only the elements from the first set and none from the second set. In the below example the element “Wed” is present in both the sets so it will not be found in the result set.
DaysA = set(["Mon","Tue","Wed"])
DaysB = set(["Wed","Thu","Fri","Sat","Sun"])
AllDays = DaysA - DaysB
print(AllDays)
When the above code is executed, it produces the following result. Please note the result has only one “wed”.
set(['Mon', 'Tue'])
We can check if a given set is a subset or superset of another set. The result is True or False depending on the elements present in the sets.
DaysA = set(["Mon","Tue","Wed"])
DaysB = set(["Mon","Tue","Wed","Thu","Fri","Sat","Sun"])
SubsetRes = DaysA <= DaysB
SupersetRes = DaysB >= DaysA
print(SubsetRes)
print(SupersetRes)
When the above code is executed, it produces the following result −
True
True
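As a side note (these are standard Python methods, though not covered in the text above), every operator used in these examples has an equivalent named method on set objects, which some find more readable:

```python
DaysA = set(["Mon", "Tue", "Wed"])
DaysB = set(["Wed", "Thu", "Fri", "Sat", "Sun"])

# Named-method equivalents of the |, &, - and <= operators
print(DaysA.union(DaysB))         # same result as DaysA | DaysB
print(DaysA.intersection(DaysB))  # same result as DaysA & DaysB
print(DaysA.difference(DaysB))    # same result as DaysA - DaysB
print(DaysA.issubset(DaysB))      # same result as DaysA <= DaysB
```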
Create a list from rows in Pandas dataframe - GeeksforGeeks | 26 Jan, 2019
Python lists are easy to work with and have many built-in functions covering a whole range of operations. The columns of a Pandas dataframe are Series objects, but unlike columns, the rows of a Pandas dataframe have no comparable association. In this post, we discuss several ways to extract a whole row of the dataframe at a time.
Solution #1: In order to iterate over the rows of the Pandas dataframe we can use DataFrame.iterrows() function and then we can append the data of each row to the end of the list.
# importing pandas as pd
import pandas as pd

# Create the dataframe
df = pd.DataFrame({'Date': ['10/2/2011', '11/2/2011', '12/2/2011', '13/2/11'],
                   'Event': ['Music', 'Poetry', 'Theatre', 'Comedy'],
                   'Cost': [10000, 5000, 15000, 2000]})

# Print the dataframe
print(df)
Output :
Now we will use the DataFrame.iterrows() function to iterate over each of the row of the given Dataframe and construct a list out of the data of each row.
# Create an empty list
Row_list = []

# Iterate over each row
for index, rows in df.iterrows():
    # Create list for the current row
    my_list = [rows.Date, rows.Event, rows.Cost]

    # append the list to the final list
    Row_list.append(my_list)

# Print the list
print(Row_list)
Output :
As we can see in the output, we have successfully extracted each row of the given dataframe into a list. Just like any other Python’s list we can perform any list operation on the extracted list.
# Find the length of the newly
# created list
print(len(Row_list))

# Print the first 3 elements
print(Row_list[:3])
Output :
Solution #2: In order to iterate over the rows of the Pandas dataframe we can use DataFrame.itertuples() function and then we can append the data of each row to the end of the list.
# importing pandas as pd
import pandas as pd

# Create the dataframe
df = pd.DataFrame({'Date': ['10/2/2011', '11/2/2011', '12/2/2011', '13/2/11'],
                   'Event': ['Music', 'Poetry', 'Theatre', 'Comedy'],
                   'Cost': [10000, 5000, 15000, 2000]})

# Print the dataframe
print(df)
Output :
Now we will use the DataFrame.itertuples() function to iterate over each of the row of the given Dataframe and construct a list out of the data of each row.
# Create an empty list
Row_list = []

# Iterate over each row
for rows in df.itertuples():
    # Create list for the current row
    my_list = [rows.Date, rows.Event, rows.Cost]

    # append the list to the final list
    Row_list.append(my_list)

# Print the list
print(Row_list)
Output :
As we can see in the output, we have successfully extracted each row of the given dataframe into a list. Just like any other Python’s list we can perform any list operation on the extracted list.
# Find the length of the newly
# created list
print(len(Row_list))

# Print the first 3 elements
print(Row_list[:3])
Output :
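As an aside (an alternative not covered in the article), when all you need is a list of row values, pandas can do the conversion without an explicit loop: `DataFrame.values` exposes the underlying array, and `.tolist()` turns it into a list of plain Python lists. A minimal sketch:

```python
import pandas as pd

# Same shape of data as the article's example, shortened
df = pd.DataFrame({'Date': ['10/2/2011', '11/2/2011'],
                   'Event': ['Music', 'Poetry'],
                   'Cost': [10000, 5000]})

# One call instead of an iterrows()/itertuples() loop
row_list = df.values.tolist()
print(row_list)
```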
Get names of all keys in the MongoDB collection? | The syntax to get names of all keys in the collection is as follows:
var yourVariableName1 = db.yourCollectionName.findOne();
for(var yourVariableName2 in yourVariableName1) { print(yourVariableName2); }
To understand the above syntax, let us create a collection with documents. The collection we are creating is named "studentGetKeysDemo".
The following is the query to create documents:
>db.studentGetKeysDemo.insert({"StudentId":1,"StudentName":"Larry","StudentAge":23,"StudentAddress":"US",
... "StudentHobby":["Cricket","Football","ReadingNovel"],
"StudentMathMarks":89,"StudentDOB":ISODate('1998-04-06')});
The following is the output:
WriteResult({ "nInserted" : 1 })
Display all documents from a collection with the help of find() method. The query is as follows:
> db.studentGetKeysDemo.find().pretty();
The following is the output:
{
"_id" : ObjectId("5c6c12dd68174aae23f5ef5f"),
"StudentId" : 1,
"StudentName" : "Larry",
"StudentAge" : 23,
"StudentAddress" : "US",
"StudentHobby" : [
"Cricket",
"Football",
"Reading Novel"
],
"StudentMathMarks" : 89,
"StudentDOB" : ISODate("1998-04-06T00:00:00Z")
}
Here is the query to get names of all keys from the collection “studentGetKeysDemo”:
> var allKeys=db.studentGetKeysDemo.findOne();
> for(var myKey in allKeys){print(myKey);}
The following is the output displaying all the keys:
_id
StudentId
StudentName
StudentAge
StudentAddress
StudentHobby
StudentMathMarks
StudentDOB
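The same key-listing idea works anywhere the document is exposed as a key/value object. The sketch below simulates it in Python with a plain dict standing in for the document returned by findOne() (no database connection is assumed; the field names simply mirror the example above):

```python
# A plain dict standing in for the document returned by findOne()
doc = {
    "_id": "5c6c12dd68174aae23f5ef5f",
    "StudentId": 1,
    "StudentName": "Larry",
    "StudentAge": 23,
}

# Iterating a dict yields its keys, just like
# `for (var key in doc)` in the mongo shell
keys = [key for key in doc]
print(keys)
```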
Perl - Hashes | A hash is a set of key/value pairs. Hash variables are preceded by a percent (%) sign. To refer to a single element of a hash, you will use the hash variable name preceded by a "$" sign and followed by the "key" associated with the value in curly brackets.
Here is a simple example of using the hash variables −
#!/usr/bin/perl
%data = ('John Paul', 45, 'Lisa', 30, 'Kumar', 40);
print "\$data{'John Paul'} = $data{'John Paul'}\n";
print "\$data{'Lisa'} = $data{'Lisa'}\n";
print "\$data{'Kumar'} = $data{'Kumar'}\n";
This will produce the following result −
$data{'John Paul'} = 45
$data{'Lisa'} = 30
$data{'Kumar'} = 40
Hashes are created in one of the two following ways. In the first method, you assign a value to a named key on a one-by-one basis −
$data{'John Paul'} = 45;
$data{'Lisa'} = 30;
$data{'Kumar'} = 40;
In the second case, you use a list, which is converted by taking individual pairs from the list: the first element of the pair is used as the key, and the second, as the value. For example −
%data = ('John Paul', 45, 'Lisa', 30, 'Kumar', 40);
For clarity, you can use => as an alias for , to indicate the key/value pairs as follows −
%data = ('John Paul' => 45, 'Lisa' => 30, 'Kumar' => 40);
Here is one more variant of the above form, have a look at it, here all the keys have been preceded by hyphen (-) and no quotation is required around them −
%data = (-JohnPaul => 45, -Lisa => 30, -Kumar => 40);
But it is important to note that single-word keys, i.e., keys without spaces, must be used in this form of hash formation, and if you build up your hash this way the keys are accessed with the hyphen prefix, as shown below.
$val = $data{-JohnPaul};
$val = $data{-Lisa};
When accessing individual elements from a hash, you must prefix the variable with a dollar sign ($) and then append the element key within curly brackets after the name of the variable. For example −
#!/usr/bin/perl
%data = ('John Paul' => 45, 'Lisa' => 30, 'Kumar' => 40);
print "$data{'John Paul'}\n";
print "$data{'Lisa'}\n";
print "$data{'Kumar'}\n";
This will produce the following result −
45
30
40
You can extract slices of a hash just as you can extract slices from an array. You will need to use @ prefix for the variable to store the returned value because they will be a list of values −
#!/usr/bin/perl
%data = (-JohnPaul => 45, -Lisa => 30, -Kumar => 40);
@array = @data{-JohnPaul, -Lisa};
print "Array : @array\n";
This will produce the following result −
Array : 45 30
You can get a list of all of the keys from a hash by using keys function, which has the following syntax −
keys %HASH
This function returns an array of all the keys of the named hash. Following is the example −
#!/usr/bin/perl
%data = ('John Paul' => 45, 'Lisa' => 30, 'Kumar' => 40);
@names = keys %data;
print "$names[0]\n";
print "$names[1]\n";
print "$names[2]\n";
This will produce the following result −
Lisa
John Paul
Kumar
Similarly, you can use values function to get a list of all the values. This function has the following syntax −
values %HASH
This function returns a normal array consisting of all the values of the named hash. Following is the example −
#!/usr/bin/perl
%data = ('John Paul' => 45, 'Lisa' => 30, 'Kumar' => 40);
@ages = values %data;
print "$ages[0]\n";
print "$ages[1]\n";
print "$ages[2]\n";
This will produce the following result −
30
45
40
If you try to access a key/value pair from a hash that doesn't exist, you'll normally get the undefined value, and if you have warnings switched on, then you'll get a warning generated at run time. You can get around this by using the exists function, which returns true if the named key exists, irrespective of what its value might be −
#!/usr/bin/perl
%data = ('John Paul' => 45, 'Lisa' => 30, 'Kumar' => 40);
if( exists($data{'Lisa'} ) ) {
print "Lisa is $data{'Lisa'} years old\n";
} else {
print "I don't know age of Lisa\n";
}
Here we have introduced the IF...ELSE statement, which we will study in a separate chapter. For now you just assume that if( condition ) part will be executed only when the given condition is true otherwise else part will be executed. So when we execute the above program, it produces the following result because here the given condition exists($data{'Lisa'} returns true −
Lisa is 30 years old
You can get the size - that is, the number of elements from a hash by using the scalar context on either keys or values. Simply saying first you have to get an array of either the keys or values and then you can get the size of array as follows −
#!/usr/bin/perl
%data = ('John Paul' => 45, 'Lisa' => 30, 'Kumar' => 40);
@keys = keys %data;
$size = @keys;
print "1 - Hash size: is $size\n";
@values = values %data;
$size = @values;
print "2 - Hash size: is $size\n";
This will produce the following result −
1 - Hash size: is 3
2 - Hash size: is 3
Adding a new key/value pair can be done with one line of code using simple assignment operator. But to remove an element from the hash you need to use delete function as shown below in the example −
#!/usr/bin/perl
%data = ('John Paul' => 45, 'Lisa' => 30, 'Kumar' => 40);
@keys = keys %data;
$size = @keys;
print "1 - Hash size: is $size\n";
# adding an element to the hash;
$data{'Ali'} = 55;
@keys = keys %data;
$size = @keys;
print "2 - Hash size: is $size\n";
# delete the same element from the hash;
delete $data{'Ali'};
@keys = keys %data;
$size = @keys;
print "3 - Hash size: is $size\n";
This will produce the following result −
1 - Hash size: is 3
2 - Hash size: is 4
3 - Hash size: is 3
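For comparison only (this is not part of the Perl tutorial), the same add/delete/size sequence maps directly onto a Python dict:

```python
data = {'John Paul': 45, 'Lisa': 30, 'Kumar': 40}
print("1 - Hash size: is", len(data))

# adding an element, like $data{'Ali'} = 55 in Perl
data['Ali'] = 55
print("2 - Hash size: is", len(data))

# deleting the same element, like delete $data{'Ali'}
del data['Ali']
print("3 - Hash size: is", len(data))
```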
Ngx-Bootstrap - Alerts | Alerts provide contextual feedback messages for typical user actions, such as info and error notifications, using flexible and dismissible alert boxes.
alert,bs-alert
dismissible − boolean, If set, displays an inline "Close" button, default: false
dismissOnTimeout − string | number, Number in milliseconds, after which alert will be closed
isOpen − boolean, Is alert visible, default: true
type − string, alert type. Provides one of four bootstrap supported contextual classes: success, info, warning and danger, default: warning
onClose − This event fires immediately after close instance method is called, $event is an instance of Alert component.
onClosed − This event fires when alert closed, $event is an instance of Alert component.
dismissible − boolean, whether alerts are dismissible by default, default: false
dismissOnTimeout − number, default time before alert will dismiss, default: undefined
type − string, default alert type, default: warning
Since we are going to use alerts, we have to update the app.module.ts used in the ngx-bootstrap Accordion chapter to use AlertModule and AlertConfig.
Update app.module.ts to use the AlertModule and AlertConfig.
app.module.ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { AppComponent } from './app.component';
import { TestComponent } from './test/test.component';
import { AccordionModule } from 'ngx-bootstrap/accordion';
import { AlertModule, AlertConfig } from 'ngx-bootstrap/alert';
@NgModule({
declarations: [
AppComponent,
TestComponent
],
imports: [
BrowserAnimationsModule,
BrowserModule,
AccordionModule,
AlertModule
],
providers: [AlertConfig],
bootstrap: [AppComponent]
})
export class AppModule { }
Update test.component.html to use the alerts.
test.component.html
<alert type="success"
[dismissible]="dismissible"
[isOpen]="open"
(onClosed)="log($event)"
[dismissOnTimeout]="timeout">
<h4 class="alert-heading">Well done!</h4>
<p>Success Message</p>
</alert>
<alert type="info">
<strong>Heads up!</strong> Info
</alert>
<alert type="warning">
<strong>Warning!</strong> Warning
</alert>
<alert type="danger">
<strong>Oh snap!</strong> Error
</alert>
Update test.component.ts for corresponding variables and methods.
test.component.ts
import { Component, OnInit } from '@angular/core';
@Component({
selector: 'app-test',
templateUrl: './test.component.html',
styleUrls: ['./test.component.css']
})
export class TestComponent implements OnInit {
open: boolean = true;
dismissible: boolean = true;
timeout: number = 10000;
constructor() { }
ngOnInit(): void {
}
log(alert){
console.log('alert message closed');
}
}
Run the following command to start the angular server.
ng serve
Once server is up and running. Open http://localhost:4200 and verify the following output.
CamelCase Pattern Matching - GeeksforGeeks | 21 Apr, 2022
Given a list of words where each word follows CamelCase notation, the task is to print all words in the dictionary that match with a given pattern consisting of uppercase characters only.
Examples
Input: arr[] = [ “WelcomeGeek”, “WelcomeToGeeksForGeeks”, “GeeksForGeeks” ], pattern = “WTG” Output: WelcomeToGeeksForGeeks Explanation: There is only one abbreviation for the given pattern i.e., WelcomeToGeeksForGeeks.
Input: arr[] = [ “Hi”, “Hello”, “HelloWorld”, “HiTech”, “HiGeek”, “HiTechWorld”, “HiTechCity”, “HiTechLab” ], pattern = “HA” Output: No match found Explanation: There is no such abbreviation for the given pattern.
Approach: 1. Traverse through every word and hash that word against every uppercase-letter prefix found in it. For example:
For string = "GeeksForGeeks"
then the hashing after every uppercase letter found is:
map {
{G, GeeksForGeeks},
{GF, GeeksForGeeks},
{GFG, GeeksForGeeks}
}
2. After creating the hashing for all the strings in the list, search for the given pattern in the map and print all the strings mapped to it.
Below is the implementation of the above approach:
C++
Java
Python3
C#
// C++ to find CamelCase Pattern
// matching
#include "bits/stdc++.h"
using namespace std;

// Function that prints the camel
// case pattern matching
void CamelCase(vector<string>& words, string pattern)
{
    // Map to store the hashing of each word
    // with every uppercase letter found
    map<string, vector<string> > map;

    // Traverse the words array that
    // contains all the strings
    for (int i = 0; i < words.size(); i++) {

        // Initialise str as empty string
        string str = "";

        // length of string words[i]
        int l = words[i].length();

        for (int j = 0; j < l; j++) {

            // For every uppercase letter found, map
            // that uppercase prefix to the original word
            if (words[i][j] >= 'A' && words[i][j] <= 'Z') {
                str += words[i][j];
                map[str].push_back(words[i]);
            }
        }
    }

    bool wordFound = false;

    // Traverse the map for pattern matching
    for (auto& it : map) {

        // If pattern matches then print the
        // corresponding mapped words
        if (it.first == pattern) {
            wordFound = true;
            for (auto& itt : it.second) {
                cout << itt << endl;
            }
        }
    }

    // If word not found print "No match found"
    if (!wordFound) {
        cout << "No match found";
    }
}

// Driver's Code
int main()
{
    vector<string> words = { "Hi", "Hello", "HelloWorld",
                             "HiTech", "HiGeek", "HiTechWorld",
                             "HiTechCity", "HiTechLab" };

    // Pattern to be found
    string pattern = "HT";

    // Function call to find the words that
    // match the given pattern
    CamelCase(words, pattern);
    return 0;
}
// Java to find CamelCase Pattern
// matching
import java.util.*;

class GFG{

// Function that prints the camel
// case pattern matching
static void CamelCase(ArrayList<String> words, String pattern)
{
    // Map to store the hashing of each word
    // with every uppercase letter found
    Map<String, List<String>> map = new HashMap<String, List<String>>();

    // Traverse the words list that
    // contains all the strings
    for (int i = 0; i < words.size(); i++)
    {
        // Initialise str as empty String
        String str = "";

        // length of String words[i]
        int l = words.get(i).length();

        for (int j = 0; j < l; j++)
        {
            // For every uppercase letter found, map
            // that uppercase prefix to the original word
            if (words.get(i).charAt(j) >= 'A' &&
                words.get(i).charAt(j) <= 'Z')
            {
                str += words.get(i).charAt(j);
                map.put(str, list(map.get(str), words.get(i)));
            }
        }
    }

    boolean wordFound = false;

    // Traverse the map for pattern matching
    for (Map.Entry<String, List<String>> it : map.entrySet())
    {
        // If pattern matches then print the
        // corresponding mapped words
        if (it.getKey().equals(pattern))
        {
            wordFound = true;
            for (String s : it.getValue())
                System.out.print(s + "\n");
        }
    }

    // If word not found print "No match found"
    if (!wordFound)
    {
        System.out.print("No match found");
    }
}

private static List<String> list(List<String> list, String str)
{
    List<String> temp = new ArrayList<String>();
    if (list != null)
        temp.addAll(list);
    temp.add(str);
    return temp;
}

// Driver's Code
public static void main(String[] args)
{
    String arr[] = { "Hi", "Hello", "HelloWorld", "HiTech",
                     "HiGeek", "HiTechWorld", "HiTechCity",
                     "HiTechLab" };
    ArrayList<String> words = new ArrayList<String>(Arrays.asList(arr));

    // Pattern to be found
    String pattern = "HT";

    // Function call to find the words that
    // match the given pattern
    CamelCase(words, pattern);
}
}

// This code is contributed by PrinciRaj1992
# Python3 to find CamelCase Pattern
# matching

# Function that prints the camel
# case pattern matching
def CamelCase(words, pattern):

    # Map to store the hashing of each word
    # with every uppercase letter found
    map = dict.fromkeys(words, None)

    # Traverse the words list that
    # contains all the strings
    for i in range(len(words)):

        # Initialise string as empty
        string = ""

        # length of words[i]
        l = len(words[i])

        for j in range(l):

            # For every uppercase letter found, map
            # that uppercase prefix to the original word
            if words[i][j] >= 'A' and words[i][j] <= 'Z':
                string += words[i][j]

                if string not in map:
                    map[string] = [words[i]]
                elif map[string] is None:
                    map[string] = [words[i]]
                else:
                    map[string].append(words[i])

    wordFound = False

    # Traverse the map for pattern matching
    for key, value in map.items():

        # If pattern matches then print the
        # corresponding mapped words
        if key == pattern:
            wordFound = True
            for itt in value:
                print(itt)

    # If word not found print "No match found"
    if not wordFound:
        print("No match found")


# Driver's Code
if __name__ == "__main__":

    words = ["Hi", "Hello", "HelloWorld", "HiTech",
             "HiGeek", "HiTechWorld", "HiTechCity",
             "HiTechLab"]

    # Pattern to be found
    pattern = "HT"

    # Function call to find the words that
    # match the given pattern
    CamelCase(words, pattern)

# This code is contributed by AnkitRai01
// C# to find CamelCase Pattern
// matching
using System;
using System.Collections.Generic;

class GFG{

// Function that prints the camel
// case pattern matching
static void CamelCase(List<String> words, String pattern)
{
    // Map to store the hashing of each word
    // with every uppercase letter found
    Dictionary<String, List<String>> map = new Dictionary<String, List<String>>();

    // Traverse the words list that
    // contains all the strings
    for (int i = 0; i < words.Count; i++)
    {
        // Initialise str as empty String
        String str = "";

        // length of String words[i]
        int l = words[i].Length;

        for (int j = 0; j < l; j++)
        {
            // For every uppercase letter found, map
            // that uppercase prefix to the original word
            if (words[i][j] >= 'A' && words[i][j] <= 'Z')
            {
                str += words[i][j];
                if (map.ContainsKey(str))
                    map[str] = list(map[str], words[i]);
                else
                    map.Add(str, list(null, words[i]));
            }
        }
    }

    bool wordFound = false;

    // Traverse the map for pattern matching
    foreach (KeyValuePair<String, List<String>> it in map)
    {
        // If pattern matches then print the
        // corresponding mapped words
        if (it.Key.Equals(pattern))
        {
            wordFound = true;
            foreach (String s in it.Value)
                Console.Write(s + "\n");
        }
    }

    // If word not found print "No match found"
    if (!wordFound)
    {
        Console.Write("No match found");
    }
}

private static List<String> list(List<String> list, String str)
{
    List<String> temp = new List<String>();
    if (list != null)
        temp.AddRange(list);
    temp.Add(str);
    return temp;
}

// Driver's Code
public static void Main(String[] args)
{
    String[] arr = { "Hi", "Hello", "HelloWorld", "HiTech",
                     "HiGeek", "HiTechWorld", "HiTechCity",
                     "HiTechLab" };
    List<String> words = new List<String>(arr);

    // Pattern to be found
    String pattern = "HT";

    // Function call to find the words that
    // match the given pattern
    CamelCase(words, pattern);
}
}

// This code is contributed by Rajput-Ji
HiTech
HiTechWorld
HiTechCity
HiTechLab
Time Complexity: O(N*M) where N is the length of list containing the strings and M is the length of longest string.
Efficient Approach:
Prepare a string by concatenating all array elements & semicolon as a delimiter after every array element.
Traverse through concatenated string and look for Upper Case characters or delimiter.
Hold a temporary string with all Upper Case characters until delimiter comes in traversal. Add this temporary string as a key (if key doesn’t exist) in dictionary or append the word if key already exists.
Once delimiter reached, reset the temporary variables.
Below is the implementation of the above approach:
C++
C#
Python3
#include <bits/stdc++.h>
using namespace std;

void PrintMatchingCamelCase(vector<string> arr, string pattern)
{
    // Concatenating all array elements, putting a
    // semicolon as delimiter after each element
    string cctdString = "";
    for (int i = 0; i < arr.size(); i++) {
        cctdString += arr[i];
        if (i != arr.size() - 1)
            cctdString.push_back(';');
    }

    // Map to store the hashing of each word
    // with every uppercase letter found
    unordered_map<string, vector<string> > maps;

    // temporary variables
    int charPos = 0;
    int wordPos = 0;
    string strr = "";

    // Traversing through the concatenated string
    for (; charPos < cctdString.length(); charPos++) {

        // If the current character is upper case,
        // add to the map accordingly
        if (cctdString[charPos] >= 'A' && cctdString[charPos] <= 'Z') {
            strr += cctdString[charPos];

            if (maps.find(strr) != maps.end()) {
                vector<string> temp;
                temp.insert(temp.end(), maps[strr].begin(), maps[strr].end());
                temp.push_back(arr[wordPos]);
                maps[strr] = temp;
            }
            else {
                vector<string> vec = { arr[wordPos] };
                maps[strr] = vec;
            }
        }

        // If the delimiter has been reached, reset the
        // temporary string and increment the word position
        else if (cctdString[charPos] == ';') {
            wordPos++;
            strr = "";
        }
    }

    // If the pattern matches, print the
    // corresponding mapped words
    if (maps.find(pattern) != maps.end()) {
        for (int i = 0; i < maps[pattern].size(); i++) {
            cout << maps[pattern][i] << endl;
        }
    }
    else {
        cout << "No Match Found" << endl;
    }
}

// Driver code
int main()
{
    // Array of words
    vector<string> arr = { "Hi", "Hello", "HelloWorld",
                           "HiTech", "HiGeek", "HiTechWorld",
                           "HiTechCity", "HiTechLab" };

    // Pattern to be found
    string pattern = "HT";

    // Function call to find the words that
    // match the given pattern
    PrintMatchingCamelCase(arr, pattern);
}

// This code is contributed by parthmanchanda81
using System;using System.Collections.Generic;using System.Linq; public class GFG { public static void PrintMatchingCamelCase(String[] arr, String pattern) { // Concatenating all array elements // using Aggregate function of LINQ // putting semicolon as delimiter after each element String cctdString = arr.Aggregate((i, j) = > i + ';' + j); // Map to store the hashing // of each words with every // uppercase letter found Dictionary<String, List<String> > map = new Dictionary<string, List<string> >(); // temporary Variables int charPos = 0; int wordPos = 0; string strr = string.Empty; // Traversing through concatenated String for (; charPos < cctdString.Length; charPos++) { // Identifying if the current Character is // CamelCase If so, then adding to map // accordingly if (cctdString[charPos] >= 'A' && cctdString[charPos] <= 'Z') { strr += cctdString[charPos]; if (map.ContainsKey(strr)) { List<String> temp = new List<string>(); temp.AddRange(map[strr]); temp.Add(arr[wordPos]); map[strr] = temp; } else { map.Add(strr, new List<string>{ arr[wordPos] }); } } // If delimiter has reached then resetting // temporary string also incrementing word // position value else if (cctdString[charPos] == ';') { wordPos++; strr = string.Empty; } } // If pattern matches then // print the corresponding // mapped words if (map.ContainsKey(pattern)) { foreach(String word in map[pattern]) { Console.WriteLine(word); } } else { Console.WriteLine("No Match Found"); } } // Driver's Code public static void Main(String[] args) { // Array of Words String[] arr = { "Hi", "Hello", "HelloWorld", "HiTech", "HiGeek", "HiTechWorld", "HiTechCity", "HiTechLab" }; // Pattern to be found String pattern = "HT"; // Function call to find the // words that match to the // given pattern PrintMatchingCamelCase(arr, pattern); } // This code is contributed by Rishabh Singh}
def PrintMatchingCamelCase(arr, pattern): # Concatenating all array elements # using Aggregate function of LINQ # putting semicolon as delimiter after each element cctdString = ';'.join(arr) # Map to store the hashing # of each words with every # uppercase letter found maps = dict() # temporary Variables wordPos = 0 strr = "" # Traversing through concatenated String for charPos in range(len(cctdString)): # Identifying if the current Character is # CamelCase If so, then adding to map # accordingly if (cctdString[charPos] >= 'A' and cctdString[charPos] <= 'Z'): strr += cctdString[charPos] # If pattern matches then # print the corresponding # mapped words if strr in maps: temp = [] temp.extend(maps[strr]) temp.append(arr[wordPos]) maps[strr] = temp else: vec = [arr[wordPos], ] maps[strr] = vec # If delimiter has reached then resetting # temporary string also incrementing word # position value elif (cctdString[charPos] == ';'): wordPos += 1 strr = "" # If pattern matches then # print the corresponding # mapped words if (pattern in maps): for i in range(len(maps[pattern])): print(maps[pattern][i]) else: print("No Match Found") # Driver codeif __name__ == '__main__': # Array of words arr = ["Hi", "Hello", "HelloWorld", "HiTech", "HiGeek", "HiTechWorld", "HiTechCity", "HiTechLab"] # Pattern to be found pattern = "HT" # Function call to find the # words that match to the # given pattern PrintMatchingCamelCase(arr, pattern) # This code is contributed by Amartya Ghosh
HiTech
HiTechWorld
HiTechCity
HiTechLab
Time Complexity: O(N)Auxiliary Space: O(N)
How to create a new local user in Windows using PowerShell? | To create a new local user in the Windows operating system using PowerShell, we can use the New-LocalUser cmdlet. The below command will create the TestUser with no password.
New-LocalUser -Name TestUser -NoPassword
Name     Enabled Description
----     ------- -----------
TestUser True
TestUser account has been enabled here. To provide the password for the user, the password should be in the secure string format. We can pass the password as shown below.
$pass = "Admin@123" | ConvertTo-SecureString -AsPlainText -Force
New-LocalUser -Name TestUser -Password $pass
The above commands will create the TestUser with the password. To add the password and account-related settings we can directly provide parameters but for ease, we will use the splatting method as shown below.
$Localuseraccount = @{
Name = 'TestUser'
Password = ("Admin#123" | ConvertTo-SecureString -AsPlainText -Force)
AccountNeverExpires = $true
PasswordNeverExpires = $true
Verbose = $true
}
New-LocalUser @Localuseraccount
The above command will create TestUser with a password and set its properties to Account Never Expires and Password Never Expires.
Java Method Parameters | Information can be passed to methods as parameter. Parameters act as variables inside the method.
Parameters are specified after the method name, inside the parentheses.
You can add as many parameters as you want, just separate them with a comma.
The following example has a
method that takes a String called fname as parameter.
When the method is called, we pass along a first name,
which is used inside the method to print the full name:
public class Main {
static void myMethod(String fname) {
System.out.println(fname + " Refsnes");
}
public static void main(String[] args) {
myMethod("Liam");
myMethod("Jenny");
myMethod("Anja");
}
}
// Liam Refsnes
// Jenny Refsnes
// Anja Refsnes
When a parameter is passed to the method, it is called an argument. So, from the example above: fname is a parameter, while Liam, Jenny and Anja are arguments.
You can have as many parameters as you like:
public class Main {
static void myMethod(String fname, int age) {
System.out.println(fname + " is " + age);
}
public static void main(String[] args) {
myMethod("Liam", 5);
myMethod("Jenny", 8);
myMethod("Anja", 31);
}
}
// Liam is 5
// Jenny is 8
// Anja is 31
Note that when you are working with multiple parameters, the method call must
have the same number of arguments as there are parameters, and the arguments must be passed in the same order.
The void keyword, used in the examples above, indicates that the method should not return a value. If you
want the method to return a value, you can use a primitive data type (such as int,
char, etc.) instead of void, and use the return
keyword inside the method:
public class Main {
static int myMethod(int x) {
return 5 + x;
}
public static void main(String[] args) {
System.out.println(myMethod(3));
}
}
// Outputs 8 (5 + 3)
This example returns the sum of a method's two parameters:
public class Main {
static int myMethod(int x, int y) {
return x + y;
}
public static void main(String[] args) {
System.out.println(myMethod(5, 3));
}
}
// Outputs 8 (5 + 3)
You can also store the result in a variable (recommended, as it is easier to read and maintain):
public class Main {
static int myMethod(int x, int y) {
return x + y;
}
public static void main(String[] args) {
int z = myMethod(5, 3);
System.out.println(z);
}
}
// Outputs 8 (5 + 3)
It is common to use if...else statements inside methods:
public class Main {
// Create a checkAge() method with an integer variable called age
static void checkAge(int age) {
// If age is less than 18, print "access denied"
if (age < 18) {
System.out.println("Access denied - You are not old enough!");
// If age is greater than, or equal to, 18, print "access granted"
} else {
System.out.println("Access granted - You are old enough!");
}
}
public static void main(String[] args) {
checkAge(20); // Call the checkAge method and pass along an age of 20
}
}
// Outputs "Access granted - You are old enough!"
Add a fname parameter of type String to myMethod, and output "John Doe":
static void myMethod( ) {
System.out.println( + " Doe");
}
public static void main(String[] args) {
myMethod("John");
}
How to wait until an element is present in Selenium? | We can wait until an element is present in Selenium webdriver. This can be done with the help of synchronization concept. We have an explicit wait condition where we can pause or wait for an element before proceeding to the next step.
The explicit wait waits for a specific amount of time before throwing an exception. To verify if an element is present, we can use the expected condition presenceOfElementLocated.
Code Implementation.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
public class ElementPresenceWait{
public static void main(String[] args) {
System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
String url = "https://www.tutorialspoint.com/about/about_careers.htm";
driver.get(url);
driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
// identify element and click()
WebElement l=driver.findElement(By.linkText("Terms of Use"));
l.click();
// explicit wait condition
WebDriverWait w = new WebDriverWait(driver,3);
// presenceOfElementLocated condition
WebElement h = w.until(ExpectedConditions.presenceOfElementLocated (By.cssSelector("h1")));
// get text of the element and print
System.out.println("Element present having text: " + h.getText());
driver.quit();
}
}
TensorFlow - Quick Guide | TensorFlow is a software library or framework, designed by the Google team to implement machine learning and deep learning concepts in the easiest manner. It combines the computational algebra of optimization techniques for easy calculation of many mathematical expressions.
The official website of TensorFlow is mentioned below −
www.tensorflow.org
Let us now consider the following important features of TensorFlow −
It includes a feature that defines, optimizes and calculates mathematical expressions easily with the help of multi-dimensional arrays called tensors.
It includes programming support for deep neural networks and machine learning techniques.
It includes a highly scalable feature of computation with various data sets.
TensorFlow uses GPU computing, automating management. It also includes a unique feature of optimization of same memory and the data used.
TensorFlow is well-documented and includes plenty of machine learning libraries. It offers a few important functionalities and methods for the same.
TensorFlow is also called a “Google” product. It includes a variety of machine learning and deep learning algorithms. TensorFlow can train and run deep neural networks for handwritten digit classification, image recognition, word embedding and creation of various sequence models.
To install TensorFlow, it is important to have “Python” installed in your system. Python version 3.4+ is considered the best to start with TensorFlow installation.
Consider the following steps to install TensorFlow in Windows operating system.
Step 1 − Verify the Python version installed.
Step 2 − A user can pick up any mechanism to install TensorFlow in the system. We recommend “pip” and “Anaconda”. Pip is a command used for executing and installing modules in Python.
Before we install TensorFlow, we need to install Anaconda framework in our system.
After successful installation, check in command prompt through “conda” command. The execution of command is displayed below −
Step 3 − Execute the following command to initialize the installation of TensorFlow −
conda create --name tensorflow python=3.5
It downloads the necessary packages needed for TensorFlow setup.
Step 4 − After successful environmental setup, it is important to activate TensorFlow module.
activate tensorflow
Step 5 − Use pip to install “Tensorflow” in the system. The command used for installation is mentioned as below −
pip install tensorflow
And,
pip install tensorflow-gpu
After successful installation, it is important to know the sample program execution of TensorFlow.
Following example helps us understand the basic program creation “Hello World” in TensorFlow.
The code for first program implementation is mentioned below −
>> activate tensorflow
>> python (activating python shell)
>> import tensorflow as tf
>> hello = tf.constant('Hello, Tensorflow!')
>> sess = tf.Session()
>> print(sess.run(hello))
Artificial Intelligence includes the simulation process of human intelligence by machines and special computer systems. The examples of artificial intelligence include learning, reasoning and self-correction. Applications of AI include speech recognition, expert systems, and image recognition and machine vision.
Machine learning is the branch of artificial intelligence, which deals with systems and algorithms that can learn any new data and data patterns.
Let us focus on the Venn diagram mentioned below for understanding machine learning and deep learning concepts.
Artificial intelligence includes machine learning, and deep learning is a part of machine learning. A program that follows machine learning concepts is able to improve its performance on observed data. The main motive of data transformation is to improve its knowledge in order to achieve better results in the future and provide output closer to the desired output for that particular system. Machine learning includes “pattern recognition”, which is the ability to recognize patterns in data.
The patterns should be trained to show the output in desirable manner.
Machine learning can be trained in two different ways −
Supervised training
Unsupervised training
Supervised learning or supervised training includes a procedure where the training set is given as input to the system wherein, each example is labeled with a desired output value. The training in this type is performed using minimization of a particular loss function, which represents the output error with respect to the desired output system.
After completion of training, the accuracy of each model is measured with respect to disjoint examples from training set, also called the validation set.
The best example to illustrate “Supervised learning” is with a bunch of photos given with information included in them. Here, the user can train a model to recognize new photos.
In unsupervised learning or unsupervised training, the training examples are not labeled with the class to which they belong. The system looks for data which share common characteristics, and changes them based on internal knowledge features. This type of learning algorithm is basically used in clustering problems.
The best example to illustrate “Unsupervised learning” is with a bunch of photos with no information included and user trains model with classification and clustering. This type of training algorithm works with assumptions as no information is given.
It is important to understand mathematical concepts needed for TensorFlow before creating the basic application in TensorFlow. Mathematics is considered as the heart of any machine learning algorithm. It is with the help of core concepts of Mathematics, a solution for specific machine learning algorithm is defined.
An array of numbers, which is either continuous or discrete, is defined as a vector. Machine learning algorithms deal with fixed length vectors for better output generation.
Machine learning algorithms deal with multidimensional data so vectors play a crucial role.
The pictorial representation of vector model is as shown below −
A scalar can be defined as a one-dimensional vector. Scalars include only magnitude and no direction. With scalars, we are only concerned with the magnitude.
Examples of scalar include weight and height parameters of children.
Matrix can be defined as multi-dimensional arrays, which are arranged in the format of rows and columns. The size of matrix is defined by row length and column length. Following figure shows the representation of any specified matrix.
Consider the matrix with “m” rows and “n” columns as mentioned above, the matrix representation will be specified as “m*n matrix” which defined the length of matrix as well.
In this section, we will learn about the different Mathematical Computations in TensorFlow.
Addition of two or more matrices is possible if the matrices are of the same dimension. The addition implies addition of each element as per the given position.
Consider the following example to understand how addition of matrices works −
Example:

$$A = \begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}\quad B = \begin{bmatrix}5 & 6\\ 7 & 8\end{bmatrix}\quad \text{then } A + B = \begin{bmatrix}1+5 & 2+6\\ 3+7 & 4+8\end{bmatrix} = \begin{bmatrix}6 & 8\\ 10 & 12\end{bmatrix}$$
The subtraction of matrices operates in similar fashion like the addition of two matrices. The user can subtract two matrices provided the dimensions are equal.
Example:

$$A = \begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}\quad B = \begin{bmatrix}5 & 6\\ 7 & 8\end{bmatrix}\quad \text{then } A - B = \begin{bmatrix}1-5 & 2-6\\ 3-7 & 4-8\end{bmatrix} = \begin{bmatrix}-4 & -4\\ -4 & -4\end{bmatrix}$$
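The addition and subtraction above can be checked with a short NumPy sketch. NumPy is not part of the original examples; it is assumed here purely to verify the arithmetic:

```python
import numpy as np

# The two matrices from the examples above
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Element-wise addition and subtraction
print(A + B)  # [[6, 8], [10, 12]]
print(A - B)  # [[-4, -4], [-4, -4]]
```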
For two matrices A (m*n) and B (p*q) to be multipliable, n should be equal to p. The resulting matrix is C (m*q).
$$A = \begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}\quad B = \begin{bmatrix}5 & 6\\ 7 & 8\end{bmatrix}$$

$$c_{11} = \begin{bmatrix}1 & 2\end{bmatrix}\begin{bmatrix}5\\ 7\end{bmatrix} = 1\times 5 + 2\times 7 = 19\qquad c_{12} = \begin{bmatrix}1 & 2\end{bmatrix}\begin{bmatrix}6\\ 8\end{bmatrix} = 1\times 6 + 2\times 8 = 22$$

$$c_{21} = \begin{bmatrix}3 & 4\end{bmatrix}\begin{bmatrix}5\\ 7\end{bmatrix} = 3\times 5 + 4\times 7 = 43\qquad c_{22} = \begin{bmatrix}3 & 4\end{bmatrix}\begin{bmatrix}6\\ 8\end{bmatrix} = 3\times 6 + 4\times 8 = 50$$

$$C = \begin{bmatrix}c_{11} & c_{12}\\ c_{21} & c_{22}\end{bmatrix} = \begin{bmatrix}19 & 22\\ 43 & 50\end{bmatrix}$$
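The same row-by-column computation can be confirmed with NumPy (assumed here only as a check, not part of the original example):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Matrix product: each entry c_ij is the dot product of
# row i of A with column j of B, matching c11..c22 above
C = np.matmul(A, B)
print(C)  # [[19, 22], [43, 50]]
```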
The transpose of a matrix A, m*n is generally represented by AT (transpose) n*m and is obtained by transposing the column vectors as row vectors.
Example:

$$A = \begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}\quad \text{then } A^{T} = \begin{bmatrix}1 & 3\\ 2 & 4\end{bmatrix}$$
Any vector of dimension n can be represented as a matrix v ∈ R^(n*1).

$$v_{1} = \begin{bmatrix}v_{11}\\ v_{12}\\ \vdots\\ v_{1n}\end{bmatrix}\qquad v_{2} = \begin{bmatrix}v_{21}\\ v_{22}\\ \vdots\\ v_{2n}\end{bmatrix}$$
The dot product of two vectors is the sum of the products of corresponding components, i.e., components along the same dimension, and can be expressed as
$$v_{1}\cdot v_{2} = v_{1}^{T}v_{2} = v_{2}^{T}v_{1} = v_{11}v_{21} + v_{12}v_{22} + \cdots + v_{1n}v_{2n} = \sum_{k=1}^{n} v_{1k}v_{2k}$$
The example of dot product of vectors is mentioned below −
Example:

$$v_{1} = \begin{bmatrix}1\\ 2\\ 3\end{bmatrix}\quad v_{2} = \begin{bmatrix}3\\ 5\\ -1\end{bmatrix}\quad v_{1}\cdot v_{2} = v_{1}^{T}v_{2} = 1\times 3 + 2\times 5 + 3\times(-1) = 10$$
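The dot product example can be verified with a one-line NumPy check (NumPy is assumed purely for illustration):

```python
import numpy as np

v1 = np.array([1, 2, 3])
v2 = np.array([3, 5, -1])

# Sum of products of corresponding components
print(np.dot(v1, v2))  # 10
```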
Artificial Intelligence is one of the most popular trends of recent times. Machine learning and deep learning constitute artificial intelligence. The Venn diagram shown below explains the relationship of machine learning and deep learning −
Machine learning is the art and science of getting computers to act as per the algorithms designed and programmed. Many researchers think machine learning is the best way to make progress towards human-level AI. Machine learning includes the following types of patterns
Supervised learning pattern
Unsupervised learning pattern
Deep learning is a subfield of machine learning where concerned algorithms are inspired by the structure and function of the brain called artificial neural networks.
All the value today of deep learning is through supervised learning or learning from labelled data and algorithms.
Each algorithm in deep learning goes through the same process. It includes a hierarchy of nonlinear transformation of input that can be used to generate a statistical model as output.
Consider the following steps that define the Machine Learning process
Identifies relevant data sets and prepares them for analysis.
Chooses the type of algorithm to use
Builds an analytical model based on the algorithm used.
Trains the model on test data sets, revising it as needed.
Runs the model to generate test scores.
In this section, we will learn about the difference between Machine Learning and Deep Learning.
Machine learning works with large amounts of data. It is useful for small amounts of data too. Deep learning on the other hand works efficiently if the amount of data increases rapidly. The following diagram shows the working of machine learning and deep learning with the amount of data −
Deep learning algorithms are designed to heavily depend on high-end machines unlike the traditional machine learning algorithms. Deep learning algorithms perform a number of matrix multiplication operations, which require a large amount of hardware support.
Feature engineering is the process of putting domain knowledge into specified features to reduce the complexity of data and make patterns that are visible to learning algorithms it works.
Example − Traditional machine learning patterns focus on pixels and other attributes needed for feature engineering process. Deep learning algorithms focus on high-level features from data. It reduces the task of developing new feature extractor of every new problem.
The traditional machine learning algorithms follow a standard procedure to solve the problem. It breaks the problem into parts, solve each one of them and combine them to get the required result. Deep learning focusses in solving the problem from end to end instead of breaking them into divisions.
Execution time is the amount of time required to train an algorithm. Deep learning requires a lot of time to train as it includes a lot of parameters which takes a longer time than usual. Machine learning algorithm comparatively requires less execution time.
Interpretability is the major factor for comparison of machine learning and deep learning algorithms. The main reason is that deep learning is still given a second thought before its usage in industry.
In this section, we will learn about the different applications of Machine Learning and Deep Learning.
Computer vision which is used for facial recognition and attendance mark through fingerprints or vehicle identification through number plate.
Information Retrieval from search engines like text search for image search.
Automated email marketing with specified target identification.
Medical diagnosis of cancer tumors or anomaly identification of any chronic disease.
Natural language processing for applications like photo tagging. The best example to explain this scenario is used in Facebook.
Online Advertising.
With the increasing trend of using data science and machine learning in the industry, it will become important for each organization to inculcate machine learning in their businesses.
Deep learning is gaining more importance than machine learning. Deep learning is proving to be one of the best techniques in state-of-art performance.
Machine learning and deep learning will prove beneficial in research and academics field.
In this article, we had an overview of machine learning and deep learning with illustrations and differences also focusing on future trends. Many of AI applications utilize machine learning algorithms primarily to drive self-service, increase agent productivity and workflows more reliable. Machine learning and deep learning algorithms include an exciting prospect for many businesses and industry leaders.
In this chapter, we will learn about the basics of TensorFlow. We will begin by understanding the data structure of tensor.
Tensors are used as the basic data structures in TensorFlow language. Tensors represent the connecting edges in any flow diagram called the Data Flow Graph. Tensors are defined as multidimensional array or list.
Tensors are identified by the following three parameters −
Unit of dimensionality described within tensor is called rank. It identifies the number of dimensions of the tensor. A rank of a tensor can be described as the order or n-dimensions of a tensor defined.
The number of rows and columns together define the shape of Tensor.
Type describes the data type assigned to Tensor’s elements.
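The three parameters can be inspected directly on a NumPy array, used here as a stand-in for a tensor (the array values are illustrative):

```python
import numpy as np

tensor_2d = np.array([(1, 2, 3), (4, 5, 6)])

print(tensor_2d.ndim)   # rank: 2 dimensions
print(tensor_2d.shape)  # shape: (2, 3) -> 2 rows, 3 columns
print(tensor_2d.dtype)  # type assigned to the elements
```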
A user needs to consider the following activities for building a Tensor −
Build an n-dimensional array
Convert the n-dimensional array.
TensorFlow includes various dimensions. The dimensions are described in brief below −
One dimensional tensor is a normal array structure which includes one set of values of the same data type.
Declaration
>>> import numpy as np
>>> tensor_1d = np.array([1.3, 1, 4.0, 23.99])
>>> print tensor_1d
The implementation with the output is shown in the screenshot below −
The indexing of elements is the same as for Python lists. The first element starts with an index of 0; to print the values through index, all you need to do is mention the index number.
>>> print tensor_1d[0]
1.3
>>> print tensor_1d[2]
4.0
Sequence of arrays are used for creating “two dimensional tensors”.
The creation of two-dimensional tensors is described below −
Following is the complete syntax for creating two dimensional arrays −
>>> import numpy as np
>>> tensor_2d = np.array([(1,2,3,4),(4,5,6,7),(8,9,10,11),(12,13,14,15)])
>>> print(tensor_2d)
[[ 1 2 3 4]
[ 4 5 6 7]
[ 8 9 10 11]
[12 13 14 15]]
>>>
The specific elements of two dimensional tensors can be tracked with the help of row number and column number specified as index numbers.
>>> tensor_2d[3][2]
14
In this section, we will learn about Tensor Handling and Manipulations.
To begin with, let us consider the following code −
import tensorflow as tf
import numpy as np
matrix1 = np.array([(2,2,2),(2,2,2),(2,2,2)],dtype = 'int32')
matrix2 = np.array([(1,1,1),(1,1,1),(1,1,1)],dtype = 'int32')
print (matrix1)
print (matrix2)
matrix1 = tf.constant(matrix1)
matrix2 = tf.constant(matrix2)
matrix_product = tf.matmul(matrix1, matrix2)
matrix_sum = tf.add(matrix1,matrix2)
matrix_3 = np.array([(2,7,2),(1,4,2),(9,0,2)],dtype = 'float32')
print (matrix_3)
matrix_det = tf.matrix_determinant(matrix_3)
with tf.Session() as sess:
result1 = sess.run(matrix_product)
result2 = sess.run(matrix_sum)
result3 = sess.run(matrix_det)
print (result1)
print (result2)
print (result3)
Output
The above code will generate the following output −
We have created multidimensional arrays in the above source code. Now, it is important to understand that we created graph and sessions, which manage the Tensors and generate the appropriate output. With the help of graph, we have the output specifying the mathematical calculations between Tensors.
After understanding machine-learning concepts, we can now shift our focus to deep learning concepts. Deep learning is a division of machine learning and is considered as a crucial step taken by researchers in recent decades. The examples of deep learning implementation include applications like image recognition and speech recognition.
Following are the two important types of deep neural networks −
Convolutional Neural Networks
Recurrent Neural Networks
In this chapter, we will focus on the CNN, Convolutional Neural Networks.
Convolutional Neural networks are designed to process data through multiple layers of arrays. This type of neural networks is used in applications like image recognition or face recognition. The primary difference between CNN and any other ordinary neural network is that CNN takes input as a two-dimensional array and operates directly on the images rather than focusing on feature extraction which other neural networks focus on.
The dominant approach of CNN includes solutions for problems of recognition. Top companies like Google and Facebook have invested in research and development towards recognition projects to get activities done with greater speed.
A convolutional neural network uses three basic ideas −
Local respective fields
Convolution
Pooling
Let us understand these ideas in detail.
CNN utilizes spatial correlations that exist within the input data. Each concurrent layer of a neural network connects some input neurons. This specific region is called local receptive field. Local receptive field focusses on the hidden neurons. The hidden neurons process the input data inside the mentioned field not realizing the changes outside the specific boundary.
Following is a diagram representation of generating local respective fields −
If we observe the above representation, each connection learns a weight of the hidden neuron with an associated connection with movement from one layer to another. Here, individual neurons perform a shift from time to time. This process is called “convolution”.
The mapping of connections from the input layer to the hidden feature map is defined as “shared weights” and bias included is called “shared bias”.
CNN or convolutional neural networks use pooling layers, which are positioned immediately after the convolutional layers. A pooling layer takes the feature map that comes out of the convolutional network and prepares a condensed feature map. Pooling layers thus help in condensing the output of the previous layers.
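As a minimal illustration of the convolution and pooling ideas, here is a plain NumPy sketch. It is not the CNN implementation that follows; the input values and the 2 x 2 kernel are made up for demonstration:

```python
import numpy as np

image = np.array([[1, 2, 0, 1],
                  [3, 1, 1, 0],
                  [0, 2, 4, 1],
                  [1, 0, 2, 3]], dtype=float)

# A 2 x 2 kernel acting as the shared weights of a
# local receptive field
kernel = np.array([[1, 0],
                   [0, 1]], dtype=float)

# "Valid" convolution: slide the kernel over the image,
# each output neuron seeing only its local 2 x 2 region
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i+2, j:j+2] * kernel)

# 2 x 2 max pooling over the input: condense each
# non-overlapping block to its maximum value
pooled = image.reshape(2, 2, 2, 2).max(axis=(1, 3))

print(out)     # [[2, 3, 0], [5, 5, 2], [0, 4, 7]]
print(pooled)  # [[3, 1], [2, 4]]
```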
In this section, we will learn about the TensorFlow implementation of CNN. The steps,which require the execution and proper dimension of the entire network, are as shown below −
Step 1 − Include the necessary modules for TensorFlow and the data set modules, which are needed to compute the CNN model.
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
Step 2 − Declare a function called run_cnn(), which includes various parameters and optimization variables with declaration of data placeholders. These optimization variables will declare the training pattern.
def run_cnn():
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
learning_rate = 0.0001
epochs = 10
batch_size = 50
Step 3 − In this step, we will declare the training data placeholders with input parameters - for 28 x 28 pixels = 784. This is the flattened image data that is drawn from mnist.train.next_batch().
We can reshape the tensor according to our requirements. The first value (-1) tells the function to dynamically shape that dimension based on the amount of data passed to it. The two middle dimensions are set to the image size (i.e. 28 x 28).
x = tf.placeholder(tf.float32, [None, 784])
x_shaped = tf.reshape(x, [-1, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, 10])
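The effect of the -1 value can be checked with a small NumPy sketch, since NumPy follows the same reshape convention as tf.reshape −

```python
import numpy as np

# Three flattened 28 x 28 "images", standing in for a batch drawn
# from mnist.train.next_batch()
batch = np.zeros((3, 784))

# -1 lets the batch dimension be inferred from the amount of data
shaped = batch.reshape(-1, 28, 28, 1)
print(shaped.shape)   # (3, 28, 28, 1)
```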
Step 4 − Now it is important to create some convolutional layers −
layer1 = create_new_conv_layer(x_shaped, 1, 32, [5, 5], [2, 2], name = 'layer1')
layer2 = create_new_conv_layer(layer1, 32, 64, [5, 5], [2, 2], name = 'layer2')
Step 5 − Let us flatten the output ready for the fully connected output stage - after two layers of stride-2 pooling, the 28 x 28 dimensions are reduced to 14 x 14 and then to 7 x 7 in the x, y co-ordinates, but with 64 output channels. To create the fully connected "dense" layer, the new shape needs to be [-1, 7 x 7 x 64]. We can set up some weights and bias values for this layer, then activate with ReLU.
flattened = tf.reshape(layer2, [-1, 7 * 7 * 64])
wd1 = tf.Variable(tf.truncated_normal([7 * 7 * 64, 1000], stddev = 0.03), name = 'wd1')
bd1 = tf.Variable(tf.truncated_normal([1000], stddev = 0.01), name = 'bd1')
dense_layer1 = tf.matmul(flattened, wd1) + bd1
dense_layer1 = tf.nn.relu(dense_layer1)
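The [-1, 7 x 7 x 64] shape used above follows from simple arithmetic, which can be verified directly −

```python
# Each 2 x 2 pooling layer with stride 2 halves the spatial dimensions
size = 28
size = size // 2          # after layer1 pooling: 14
size = size // 2          # after layer2 pooling: 7

channels = 64             # output channels of layer2
flattened_size = size * size * channels
print(flattened_size)     # 3136, i.e. 7 x 7 x 64
```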
Step 6 − Another layer with softmax activation, together with the required optimizer, defines the accuracy assessment and sets up the initialization operator.
wd2 = tf.Variable(tf.truncated_normal([1000, 10], stddev = 0.03), name = 'wd2')
bd2 = tf.Variable(tf.truncated_normal([10], stddev = 0.01), name = 'bd2')
dense_layer2 = tf.matmul(dense_layer1, wd2) + bd2
y_ = tf.nn.softmax(dense_layer2)
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits = dense_layer2, labels = y))
optimiser = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init_op = tf.global_variables_initializer()
Step 7 − We should set up recording variables. This adds a summary to store the accuracy of the data.
tf.summary.scalar('accuracy', accuracy)
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('E:\TensorFlowProject')
with tf.Session() as sess:
sess.run(init_op)
total_batch = int(len(mnist.train.labels) / batch_size)
for epoch in range(epochs):
avg_cost = 0
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size = batch_size)
_, c = sess.run([optimiser, cross_entropy], feed_dict = {
x:batch_x, y: batch_y})
avg_cost += c / total_batch
test_acc = sess.run(accuracy, feed_dict = {x: mnist.test.images, y:
mnist.test.labels})
summary = sess.run(merged, feed_dict = {x: mnist.test.images, y:
mnist.test.labels})
writer.add_summary(summary, epoch)
print("\nTraining complete!")
writer.add_graph(sess.graph)
print(sess.run(accuracy, feed_dict = {x: mnist.test.images, y:
mnist.test.labels}))
def create_new_conv_layer(
input_data, num_input_channels, num_filters,filter_shape, pool_shape, name):
conv_filt_shape = [
filter_shape[0], filter_shape[1], num_input_channels, num_filters]
weights = tf.Variable(
tf.truncated_normal(conv_filt_shape, stddev = 0.03), name = name+'_W')
bias = tf.Variable(tf.truncated_normal([num_filters]), name = name+'_b')
#Out layer defines the output
out_layer = tf.nn.conv2d(input_data, weights, [1, 1, 1, 1], padding = 'SAME')
out_layer += bias
out_layer = tf.nn.relu(out_layer)
ksize = [1, pool_shape[0], pool_shape[1], 1]
strides = [1, 2, 2, 1]
out_layer = tf.nn.max_pool(
out_layer, ksize = ksize, strides = strides, padding = 'SAME')
return out_layer
if __name__ == "__main__":
run_cnn()
Following is the output generated by the above code −
See @{tf.nn.softmax_cross_entropy_with_logits_v2}.
2018-09-19 17:22:58.802268: I
T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140]
Your CPU supports instructions that this TensorFlow binary was not compiled to
use: AVX2
2018-09-19 17:25:41.522845: W
T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:101] Allocation
of 1003520000 exceeds 10% of system memory.
2018-09-19 17:25:44.630941: W
T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:101] Allocation
of 501760000 exceeds 10% of system memory.
Epoch: 1 cost = 0.676 test accuracy: 0.940
2018-09-19 17:26:51.987554: W
T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:101] Allocation
of 1003520000 exceeds 10% of system memory.
Recurrent neural networks are a type of deep-learning-oriented algorithm that follows a sequential approach. In conventional neural networks, we assume that each input and output is independent of all other layers. These types of neural networks are called recurrent because they perform mathematical computations in a sequential manner, carrying information from one step to the next.
Consider the following steps to train a recurrent neural network −
Step 1 − Input a specific example from dataset.
Step 2 − Network will take an example and compute some calculations using randomly initialized variables.
Step 3 − A predicted result is then computed.
Step 4 − The comparison of actual result generated with the expected value will produce an error.
Step 5 − To trace the error, it is propagated through same path where the variables are also adjusted.
Step 6 − The steps from 1 to 5 are repeated until we are confident that the variables declared to get the output are defined properly.
Step 7 − A systematic prediction is made by applying these variables to get new unseen input.
The schematic approach of representing recurrent neural networks is described below −
In this section, we will learn how to implement recurrent neural network with TensorFlow.
Step 1 − TensorFlow includes various libraries for specific implementation of the recurrent neural network module.
#Import necessary modules
from __future__ import print_function
import tensorflow as tf
from tensorflow.contrib import rnn
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)
As mentioned above, the libraries help in defining the input data, which forms the primary part of recurrent neural network implementation.
Step 2 − Our primary motive is to classify the images using a recurrent neural network, where we consider every image row as a sequence of pixels. MNIST image shape is specifically defined as 28*28 px. Now we will handle 28 sequences of 28 steps for each sample that is mentioned. We will define the input parameters to get the sequential pattern done.
# Training parameters (values assumed from the standard MNIST RNN example)
learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 10
# Network parameters
n_input = 28 # MNIST data input with img shape 28*28
n_steps = 28
n_hidden = 128
n_classes = 10
# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
weights = {
'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([n_classes]))
}
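The idea of treating every image row as one timestep can be illustrated with a small NumPy sketch (the pixel values here are placeholders) −

```python
import numpy as np

# One flattened MNIST image (784 placeholder values)
image = np.arange(784, dtype = np.float32)

# Treat each of the 28 pixel rows as one timestep of 28 features
steps = image.reshape(28, 28)   # n_steps x n_input
print(steps.shape)              # (28, 28)
```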
Step 3 − Compute the results using a defined function in RNN to get the best results. Here, each data shape is compared with current input shape and the results are computed to maintain the accuracy rate.
def RNN(x, weights, biases):
x = tf.unstack(x, n_steps, 1)
# Define a lstm cell with tensorflow
lstm_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Get lstm cell output
outputs, states = rnn.static_rnn(lstm_cell, x, dtype = tf.float32)
# Linear activation, using rnn inner loop last output
return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = RNN(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = pred, labels = y))
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
Step 4 − In this step, we will launch the graph to get the computational results. This also helps in calculating the accuracy for test results.
with tf.Session() as sess:
sess.run(init)
step = 1
# Keep training until reach max iterations
while step * batch_size < training_iters:
batch_x, batch_y = mnist.train.next_batch(batch_size)
batch_x = batch_x.reshape((batch_size, n_steps, n_input))
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
if step % display_step == 0:
# Calculate batch accuracy
acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
# Calculate batch loss
loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
"{:.6f}".format(loss) + ", Training Accuracy= " + \
"{:.5f}".format(acc))
step += 1
print("Optimization Finished!")
test_len = 128
test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
test_label = mnist.test.labels[:test_len]
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
The screenshots below show the output generated −
TensorFlow includes a visualization tool called TensorBoard. It is used for analyzing the data flow graph and also for understanding machine-learning models. An important feature of TensorBoard is its view of different types of statistics about the parameters and details of any graph in vertical alignment.
Deep neural network includes up to 36,000 nodes. TensorBoard helps in collapsing these nodes in high-level blocks and highlighting the identical structures. This allows better analysis of graph focusing on the primary sections of the computation graph. The TensorBoard visualization is said to be very interactive where a user can pan, zoom and expand the nodes to display the details.
The following schematic diagram representation shows the complete working of TensorBoard visualization −
The algorithms collapse nodes into high-level blocks and highlight the specific groups with identical structures, which separates high-degree nodes. The TensorBoard thus created is useful and is treated as equally important for tuning a machine learning model. This visualization tool is designed for the configuration log file with summary information and details that need to be displayed.
Let us focus on the demo example of TensorBoard visualization with the help of the following code −
import tensorflow as tf
# Constants creation for TensorBoard visualization
a = tf.constant(10,name = "a")
b = tf.constant(90,name = "b")
y = tf.Variable(a+b*2,name = 'y')
model = tf.global_variables_initializer() #Creation of model
with tf.Session() as session:
   merged = tf.summary.merge_all()
   writer = tf.summary.FileWriter("/tmp/tensorflowlogs", session.graph)
   session.run(model)
   print(session.run(y))
The following table shows the various symbols of TensorBoard visualization used for the node representation −
Word embedding is the concept of mapping from discrete objects such as words to vectors of real numbers. It is important for producing input for machine learning. The concept includes standard functions, which effectively transform discrete input objects to useful vectors.
The sample illustration of input of word embedding is as shown below −
blue: (0.01359, 0.00075997, 0.24608, ..., -0.2524, 1.0048, 0.06259)
blues: (0.01396, 0.11887, -0.48963, ..., 0.033483, -0.10007, 0.1158)
orange: (-0.24776, -0.12359, 0.20986, ..., 0.079717, 0.23865, -0.014213)
oranges: (-0.35609, 0.21854, 0.080944, ..., -0.35413, 0.38511, -0.070976)
Word2vec is the most common approach used for unsupervised word embedding technique. It trains the model in such a way that a given input word predicts the word’s context by using skip-grams.
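The skip-gram idea can be sketched in plain Python: for a context window of one word on each side, every center word is paired with each of its neighbours. The sentence used here is illustrative −

```python
# For each center word, emit (target, context) pairs with its immediate
# neighbours, mirroring a window-1 skip-gram.
def skip_gram_pairs_for(sentence):
    tokens = sentence.lower().split()
    pairs = []
    for i in range(1, len(tokens) - 1):
        pairs.append((tokens[i], tokens[i - 1]))   # left context
        pairs.append((tokens[i], tokens[i + 1]))   # right context
    return pairs

print(skip_gram_pairs_for("One Three Five"))
# [('three', 'one'), ('three', 'five')]
```

The model is then trained so that the target word's vector is predictive of its context words' vectors.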
TensorFlow enables many ways to implement this kind of model with increasing levels of sophistication and optimization and using multithreading concepts and higher-level abstractions.
import os
import math
import numpy as np
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector
batch_size = 64
embedding_dimension = 5
negative_samples = 8
LOG_DIR = "logs/word2vec_intro"
digit_to_word_map = {
1: "One",
2: "Two",
3: "Three",
4: "Four",
5: "Five",
6: "Six",
7: "Seven",
8: "Eight",
9: "Nine"}
sentences = []
# Create two kinds of sentences - sequences of odd and even digits.
for i in range(10000):
rand_odd_ints = np.random.choice(range(1, 10, 2), 3)
sentences.append(" ".join([digit_to_word_map[r] for r in rand_odd_ints]))
rand_even_ints = np.random.choice(range(2, 10, 2), 3)
sentences.append(" ".join([digit_to_word_map[r] for r in rand_even_ints]))
# Map words to indices
word2index_map = {}
index = 0
for sent in sentences:
for word in sent.lower().split():
if word not in word2index_map:
word2index_map[word] = index
index += 1
index2word_map = {index: word for word, index in word2index_map.items()}
vocabulary_size = len(index2word_map)
# Generate skip-gram pairs
skip_gram_pairs = []
for sent in sentences:
tokenized_sent = sent.lower().split()
for i in range(1, len(tokenized_sent)-1):
word_context_pair = [[word2index_map[tokenized_sent[i-1]],
word2index_map[tokenized_sent[i+1]]], word2index_map[tokenized_sent[i]]]
skip_gram_pairs.append([word_context_pair[1], word_context_pair[0][0]])
skip_gram_pairs.append([word_context_pair[1], word_context_pair[0][1]])
def get_skipgram_batch(batch_size):
instance_indices = list(range(len(skip_gram_pairs)))
np.random.shuffle(instance_indices)
batch = instance_indices[:batch_size]
x = [skip_gram_pairs[i][0] for i in batch]
y = [[skip_gram_pairs[i][1]] for i in batch]
return x, y
# batch example
x_batch, y_batch = get_skipgram_batch(8)
x_batch
y_batch
[index2word_map[word] for word in x_batch]
[index2word_map[word[0]] for word in y_batch]
# Input data, labels
train_inputs = tf.placeholder(tf.int32, shape = [batch_size])
train_labels = tf.placeholder(tf.int32, shape = [batch_size, 1])
# Embedding lookup table currently only implemented in CPU
with tf.name_scope("embeddings"):
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_dimension], -1.0, 1.0),
name = 'embedding')
# This is essentially a lookup table
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
# Create variables for the NCE loss
nce_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_dimension], stddev = 1.0 /
math.sqrt(embedding_dimension)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
loss = tf.reduce_mean(
tf.nn.nce_loss(weights = nce_weights, biases = nce_biases, inputs = embed,
labels = train_labels,num_sampled = negative_samples,
num_classes = vocabulary_size))
tf.summary.scalar("NCE_loss", loss)
# Learning rate decay
global_step = tf.Variable(0, trainable = False)
learningRate = tf.train.exponential_decay(learning_rate = 0.1,
global_step = global_step, decay_steps = 1000, decay_rate = 0.95, staircase = True)
train_step = tf.train.GradientDescentOptimizer(learningRate).minimize(loss)
merged = tf.summary.merge_all()
with tf.Session() as sess:
train_writer = tf.summary.FileWriter(LOG_DIR,
graph = tf.get_default_graph())
saver = tf.train.Saver()
with open(os.path.join(LOG_DIR, 'metadata.tsv'), "w") as metadata:
metadata.write('Name\tClass\n')
for k, v in index2word_map.items():
   metadata.write('%s\t%d\n' % (v, k))
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = embeddings.name
# Link this tensor to its metadata file (e.g. labels).
embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')
projector.visualize_embeddings(train_writer, config)
tf.global_variables_initializer().run()
for step in range(1000):
x_batch, y_batch = get_skipgram_batch(batch_size)
summary, _ = sess.run(
[merged, train_step], feed_dict = {train_inputs: x_batch, train_labels: y_batch})
train_writer.add_summary(summary, step)
if step % 100 == 0:
saver.save(sess, os.path.join(LOG_DIR, "w2v_model.ckpt"), step)
loss_value = sess.run(loss, feed_dict = {
train_inputs: x_batch, train_labels: y_batch})
print("Loss at %d: %.5f" % (step, loss_value))
# Normalize embeddings before using
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims = True))
normalized_embeddings = embeddings / norm
normalized_embeddings_matrix = sess.run(normalized_embeddings)
ref_word = normalized_embeddings_matrix[word2index_map["one"]]
cosine_dists = np.dot(normalized_embeddings_matrix, ref_word)
ff = np.argsort(cosine_dists)[::-1][1:10]
for f in ff:
   print(index2word_map[f])
   print(cosine_dists[f])
The above code generates the following output −
For understanding single layer perceptron, it is important to understand Artificial Neural Networks (ANN). An artificial neural network is an information processing system whose mechanism is inspired by the functionality of biological neural circuits. An artificial neural network possesses many processing units connected to each other. Following is the schematic representation of an artificial neural network −
The diagram shows that the hidden units communicate with the external layer, while the input and output units communicate only through the hidden layer of the network.
The pattern of connection with nodes, the total number of layers and level of nodes between inputs and outputs with the number of neurons per layer define the architecture of a neural network.
There are two types of architecture. These types focus on the functionality of artificial neural networks as follows −
Single Layer Perceptron
Multi-Layer Perceptron
Single layer perceptron is the first proposed neural model created. The content of the local memory of the neuron consists of a vector of weights. The computation of a single layer perceptron is performed over the calculation of sum of the input vector each with the value multiplied by corresponding element of vector of the weights. The value which is displayed in the output will be the input of an activation function.
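The computation described above can be sketched in a few lines of NumPy, using a step function as the activation and illustrative weight values −

```python
import numpy as np

# Weighted sum of the input vector with the weight vector, plus a bias,
# followed by a step activation function.
def perceptron(x, weights, bias):
    weighted_sum = np.dot(x, weights) + bias
    return 1 if weighted_sum > 0 else 0

weights = np.array([0.5, -0.6])   # illustrative values
bias = -0.1

print(perceptron(np.array([1, 0]), weights, bias))   # 1 (0.5 - 0.1 > 0)
print(perceptron(np.array([0, 1]), weights, bias))   # 0 (-0.6 - 0.1 < 0)
```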
Let us focus on the implementation of single layer perceptron for an image classification problem using TensorFlow. The best example to illustrate the single layer perceptron is through representation of “Logistic Regression”.
Now, let us consider the following basic steps of training logistic regression −
The weights are initialized with random values at the beginning of the training.
For each element of the training set, the error is calculated as the difference between the desired output and the actual output. The error calculated is used to adjust the weights.
The process is repeated until the error made on the entire training set falls below the specified threshold, or until the maximum number of iterations is reached.
The complete code for evaluation of logistic regression is mentioned below −
# Import MINST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)
import tensorflow as tf
import matplotlib.pyplot as plt
# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1
# tf Graph Input
x = tf.placeholder("float", [None, 784]) # mnist data image of shape 28*28 = 784
y = tf.placeholder("float", [None, 10]) # 0-9 digits recognition => 10 classes
# Create model
# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# Construct model
activation = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
# Minimize error using cross entropy
cross_entropy = y*tf.log(activation)
cost = tf.reduce_mean(-tf.reduce_sum(cross_entropy, reduction_indices = 1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
#Plot settings
avg_set = []
epoch_set = []
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# Fit training using batch data
sess.run(optimizer, feed_dict = {x: batch_xs, y: batch_ys})
# Compute average loss
avg_cost += sess.run(cost, feed_dict = {x: batch_xs, y: batch_ys})/total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
avg_set.append(avg_cost)
epoch_set.append(epoch+1)
print ("Training phase finished")
plt.plot(epoch_set,avg_set, 'o', label = 'Logistic Regression Training phase')
plt.ylabel('cost')
plt.xlabel('epoch')
plt.legend()
plt.show()
# Test model
correct_prediction = tf.equal(tf.argmax(activation, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Model accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
The above code generates the following output −
Logistic regression is considered a predictive analysis technique. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal or independent variables.
In this chapter, we will focus on the basic example of linear regression implementation using TensorFlow. Logistic regression or linear regression is a supervised machine learning approach for the classification of ordered discrete categories. Our goal in this chapter is to build a model by which a user can predict the relationship between predictor variables and one or more independent variables.
The relationship between these two variables is considered linear. If y is the dependent variable and x is considered the independent variable, then the linear regression relationship of the two variables will look like the following equation −
Y = Ax+b
We will design an algorithm for linear regression. This will allow us to understand the following two important concepts −
Cost Function
Gradient descent algorithms
The schematic representation of linear regression is mentioned below −
The graphical view of the equation of linear regression is mentioned below −
We will now learn about the steps that help in designing an algorithm for linear regression.
It is important to import the necessary modules for plotting the linear regression module. We start by importing the NumPy and Matplotlib Python libraries.
import numpy as np
import matplotlib.pyplot as plt
Define the number of points and the coefficients necessary for linear regression.
number_of_points = 500
x_point = []
y_point = []
a = 0.22
b = 0.78
Iterate the variables for generating 500 random points around the regression equation −
Y = 0.22x+0.78
for i in range(number_of_points):
x = np.random.normal(0.0,0.5)
y = a*x + b + np.random.normal(0.0,0.1)
x_point.append([x])
y_point.append([y])
View the generated points using Matplotlib.
plt.plot(x_point, y_point, 'o', label = 'Input Data')
plt.legend()
plt.show()
The complete code for generating and plotting the points is as follows −
import numpy as np
import matplotlib.pyplot as plt
number_of_points = 500
x_point = []
y_point = []
a = 0.22
b = 0.78
for i in range(number_of_points):
x = np.random.normal(0.0,0.5)
y = a*x + b + np.random.normal(0.0,0.1)
x_point.append([x])
y_point.append([y])
plt.plot(x_point, y_point, 'o', label = 'Input Data')
plt.legend()
plt.show()
The set of points generated above serves as the input data for the regression.
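The code above only generates and plots the points; actually fitting the line requires the cost function and gradient descent algorithm mentioned at the start of the chapter. The following is a minimal NumPy sketch of that training loop, minimizing the mean squared error cost −

```python
import numpy as np

np.random.seed(0)   # illustrative data, same shape as the chapter's example
x = np.random.normal(0.0, 0.5, 500)
y = 0.22 * x + 0.78 + np.random.normal(0.0, 0.1, 500)

A, b = 0.0, 0.0          # initial guesses for slope and intercept
learning_rate = 0.5

for _ in range(200):
    y_pred = A * x + b
    error = y_pred - y
    # Gradients of the mean squared error cost with respect to A and b
    grad_A = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    A -= learning_rate * grad_A
    b -= learning_rate * grad_b

print(round(A, 2), round(b, 2))   # values close to 0.22 and 0.78
```

A TensorFlow version would follow the same steps, with the gradients computed automatically by the chosen optimizer.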
TFLearn can be defined as a modular and transparent deep learning library built on top of the TensorFlow framework. The main motive of TFLearn is to provide a higher-level API to TensorFlow for facilitating and showing up new experiments.
Consider the following important features of TFLearn −
TFLearn is easy to use and understand.
It includes easy concepts to build highly modular network layers, optimizers and various metrics embedded within them.
It includes full transparency with TensorFlow work system.
It includes powerful helper functions to train the built in tensors which accept multiple inputs, outputs and optimizers.
It includes easy and beautiful graph visualization.
The graph visualization includes various details of weights, gradients and activations.
Install TFLearn by executing the following command −
pip install tflearn
Upon execution of the above code, the following output will be generated −
The following illustration shows the implementation of TFLearn with Random Forest classifier −
from __future__ import division, print_function, absolute_import
#TFLearn module implementation
import tflearn
from tflearn.estimators import RandomForestClassifier
# Data loading and pre-processing with respect to dataset
import tflearn.datasets.mnist as mnist
X, Y, testX, testY = mnist.load_data(one_hot = False)
m = RandomForestClassifier(n_estimators = 100, max_nodes = 1000)
m.fit(X, Y, batch_size = 10000, display_step = 10)
print("Compute the accuracy on train data:")
print(m.evaluate(X, Y, tflearn.accuracy_op))
print("Compute the accuracy on test set:")
print(m.evaluate(testX, testY, tflearn.accuracy_op))
print("Digits for test images id 0 to 5:")
print(m.predict(testX[:5]))
print("True digits:")
print(testY[:5])
In this chapter, we will focus on the difference between CNN and RNN −
Following illustration shows the schematic representation of CNN and RNN −
Keras is a compact, easy-to-learn, high-level Python library that runs on top of the TensorFlow framework. It is made with a focus on understanding deep learning techniques, such as creating layers for neural networks, maintaining the concepts of shapes and mathematical details. Models can be created in one of the following two ways −
Sequential API
Functional API
Consider the following eight steps to create deep learning model in Keras −
Loading the data
Preprocess the loaded data
Definition of model
Compiling the model
Fit the specified model
Evaluate it
Make the required predictions
Save the model
We will use the Jupyter Notebook for execution and display of output as shown below −
Step 1 − Loading the data and preprocessing the loaded data is implemented first to execute the deep learning model.
import warnings
warnings.filterwarnings('ignore')
import numpy as np
np.random.seed(123) # for reproducibility
from keras.models import Sequential
from keras.layers import Flatten, MaxPool2D, Conv2D, Dense, Reshape, Dropout
from keras.utils import np_utils
Using TensorFlow backend.
from keras.datasets import mnist
# Load pre-shuffled MNIST data into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
This step can be defined as “Import libraries and Modules” which means all the libraries and modules are imported as an initial step.
Step 2 − In this step, we will define the model architecture −
model = Sequential()
model.add(Conv2D(32, (3, 3), activation = 'relu', input_shape = (28,28,1)))
model.add(Conv2D(32, (3, 3), activation = 'relu'))
model.add(MaxPool2D(pool_size = (2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation = 'softmax'))
Step 3 − Let us now compile the specified model −
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
Step 4 − We will now fit the model using training data −
model.fit(X_train, Y_train, batch_size = 32, epochs = 10, verbose = 1)
The output of iterations created is as follows −
Epoch 1/10 60000/60000 [==============================] - 65s -
loss: 0.2124 -
acc: 0.9345
Epoch 2/10 60000/60000 [==============================] - 62s -
loss: 0.0893 -
acc: 0.9740
Epoch 3/10 60000/60000 [==============================] - 58s -
loss: 0.0665 -
acc: 0.9802
Epoch 4/10 60000/60000 [==============================] - 62s -
loss: 0.0571 -
acc: 0.9830
Epoch 5/10 60000/60000 [==============================] - 62s -
loss: 0.0474 -
acc: 0.9855
Epoch 6/10 60000/60000 [==============================] - 59s -
loss: 0.0416 -
acc: 0.9871
Epoch 7/10 60000/60000 [==============================] - 61s -
loss: 0.0380 -
acc: 0.9877
Epoch 8/10 60000/60000 [==============================] - 63s -
loss: 0.0333 -
acc: 0.9895
Epoch 9/10 60000/60000 [==============================] - 64s -
loss: 0.0325 -
acc: 0.9898
Epoch 10/10 60000/60000 [==============================] - 60s -
loss: 0.0284 -
acc: 0.9910
This chapter will focus on how to get started with distributed TensorFlow. The aim is to help developers understand the basic distributed TF concepts that are recurring, such as TF servers. We will use the Jupyter Notebook for evaluating distributed TensorFlow. The implementation of distributed computing with TensorFlow is mentioned below −
Step 1 − Import the necessary modules mandatory for distributed computing −
import tensorflow as tf
Step 2 − Create a TensorFlow cluster with one node. Let this node be responsible for a job that has the name "worker" and that will operate one task at localhost:2222.
cluster_spec = tf.train.ClusterSpec({'worker' : ['localhost:2222']})
server = tf.train.Server(cluster_spec)
server.target
The above scripts generate the following output −
'grpc://localhost:2222'
The server is currently running.
Step 3 − The server configuration with respective session can be calculated by executing the following command −
server.server_def
The above command generates the following output −
cluster {
job {
name: "worker"
tasks {
value: "localhost:2222"
}
}
}
job_name: "worker"
protocol: "grpc"
Step 4 − Launch a TensorFlow session with the execution engine being the server. Use TensorFlow to create a local server and use lsof to find out the location of the server.
sess = tf.Session(target = server.target)
server = tf.train.Server.create_local_server()
Step 5 − View devices available in this session and close the respective session.
devices = sess.list_devices()
for d in devices:
print(d.name)
sess.close()
The above command generates the following output −
/job:worker/replica:0/task:0/device:CPU:0
Here, we will focus on MetaGraph formation in TensorFlow. This will help us understand export module in TensorFlow. The MetaGraph contains the basic information, which is required to train, perform evaluation, or run inference on a previously trained graph.
Following is the code snippet for the same −
def export_meta_graph(filename = None, collection_list = None, as_text = False):
"""this code writes `MetaGraphDef` to save_path/filename.
Arguments:
   filename: Optional meta_graph filename including the path.
   collection_list: List of string keys to collect.
   as_text: If `True`, writes the meta_graph as an ASCII proto.
Returns:
   A `MetaGraphDef` proto. """
A typical usage model for the same is mentioned below −
# Build the model ...
with tf.Session() as sess:
   # Use the model ...
   # Export the model to /tmp/my-model.meta.
   meta_graph_def = tf.train.export_meta_graph(filename = '/tmp/my-model.meta')
Multi-layer perceptron defines the most complicated architecture of artificial neural networks. It is substantially formed from multiple layers of perceptrons.
The diagrammatic representation of multi-layer perceptron learning is as shown below −
MLP networks are usually used for supervised learning format. A typical learning algorithm for MLP networks is also called back propagation’s algorithm.
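Before the TensorFlow version below, the forward pass of such a network can be sketched in plain NumPy; the layer sizes here are illustrative only and are not tied to the MNIST model that follows:

```python
import numpy as np

def sigmoid(z):
    # element-wise logistic activation
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))        # one sample with 4 input features (assumed sizes)
w1 = rng.normal(size=(4, 3))       # input -> hidden weights
b1 = np.zeros((1, 3))              # hidden-layer biases
w2 = rng.normal(size=(3, 2))       # hidden -> output weights
b2 = np.zeros((1, 2))              # output-layer biases

hidden = sigmoid(x @ w1 + b1)      # hidden-layer activations
output = sigmoid(hidden @ w2 + b2) # network output

print(output.shape)                # (1, 2)
```

Back propagation would then push the error gradient through these same two matrix multiplications in reverse order.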
Now, we will focus on the implementation with MLP for an image classification problem.
# Import MINST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)
import tensorflow as tf
import matplotlib.pyplot as plt
# Parameters
learning_rate = 0.001
training_epochs = 20
batch_size = 100
display_step = 1
# Network Parameters
n_hidden_1 = 256
# 1st layer num features
n_hidden_2 = 256 # 2nd layer num features
n_input = 784 # MNIST data input (img shape: 28*28) n_classes = 10
# MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# weights layer 1
h = tf.Variable(tf.random_normal([n_input, n_hidden_1]))
# bias layer 1
bias_layer_1 = tf.Variable(tf.random_normal([n_hidden_1]))
# layer 1
layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, h), bias_layer_1))
# weights layer 2
w = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]))
# bias layer 2
bias_layer_2 = tf.Variable(tf.random_normal([n_hidden_2]))
# layer 2
layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, w), bias_layer_2))
# weights output layer
output = tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
# bias output layer
bias_output = tf.Variable(tf.random_normal([n_classes]))
# output layer
output_layer = tf.matmul(layer_2, output) + bias_output
# cost function
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits = output_layer, labels = y))
#cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(output_layer, y))
# optimizer
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
# optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
# Plot settings
avg_set = []
epoch_set = []
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
   sess.run(init)
   # Training cycle
   for epoch in range(training_epochs):
      avg_cost = 0.
      total_batch = int(mnist.train.num_examples / batch_size)
      # Loop over all batches
      for i in range(total_batch):
         batch_xs, batch_ys = mnist.train.next_batch(batch_size)
         # Fit training using batch data
         sess.run(optimizer, feed_dict = {x: batch_xs, y: batch_ys})
         # Compute average loss
         avg_cost += sess.run(cost, feed_dict = {x: batch_xs, y: batch_ys}) / total_batch
      # Display logs per epoch step
      if epoch % display_step == 0:
         print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
      avg_set.append(avg_cost)
      epoch_set.append(epoch + 1)
   print("Training phase finished")
   plt.plot(epoch_set, avg_set, 'o', label = 'MLP Training phase')
   plt.ylabel('cost')
   plt.xlabel('epoch')
   plt.legend()
   plt.show()
   # Test model
   correct_prediction = tf.equal(tf.argmax(output_layer, 1), tf.argmax(y, 1))
   # Calculate accuracy
   accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
   print("Model Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
The above line of code generates the following output −
In this chapter, we will focus on a network that has to learn from a known set of points, x and f(x). A single hidden layer will build this simple network.
The code for the explanation of hidden layers of perceptron is as shown below −
#Importing the necessary modules
import tensorflow as tf
import numpy as np
import math, random
import matplotlib.pyplot as plt
np.random.seed(1000)
function_to_learn = lambda x: np.cos(x) + 0.1*np.random.randn(*x.shape)
layer_1_neurons = 10
NUM_points = 1000
#Training the parameters
batch_size = 100
NUM_EPOCHS = 1500
all_x = np.float32(np.random.uniform(-2*math.pi, 2*math.pi, (1, NUM_points))).T
np.random.shuffle(all_x)
train_size = int(900)
#Training set: the first 900 points in the given set
x_training = all_x[:train_size]
y_training = function_to_learn(x_training)
#Validation set: the last 100 points in the given set
x_validation = all_x[train_size:]
y_validation = function_to_learn(x_validation)
plt.figure(1)
plt.scatter(x_training, y_training, c = 'blue', label = 'train')
plt.scatter(x_validation, y_validation, c = 'pink', label = 'validation')
plt.legend()
plt.show()
X = tf.placeholder(tf.float32, [None, 1], name = "X")
Y = tf.placeholder(tf.float32, [None, 1], name = "Y")
#first layer
#Number of neurons = 10
w_h = tf.Variable(
   tf.random_uniform([1, layer_1_neurons], minval = -1, maxval = 1, dtype = tf.float32))
b_h = tf.Variable(tf.zeros([1, layer_1_neurons], dtype = tf.float32))
h = tf.nn.sigmoid(tf.matmul(X, w_h) + b_h)
#output layer
#Number of neurons = 10
w_o = tf.Variable(
   tf.random_uniform([layer_1_neurons, 1], minval = -1, maxval = 1, dtype = tf.float32))
b_o = tf.Variable(tf.zeros([1, 1], dtype = tf.float32))
#build the model
model = tf.matmul(h, w_o) + b_o
#minimize the cost function (model - Y)
train_op = tf.train.AdamOptimizer().minimize(tf.nn.l2_loss(model - Y))
#Start the Learning phase
sess = tf.Session()
sess.run(tf.initialize_all_variables())
errors = []
for i in range(NUM_EPOCHS):
   for start, end in zip(range(0, len(x_training), batch_size),
                         range(batch_size, len(x_training), batch_size)):
      sess.run(train_op, feed_dict = {X: x_training[start:end], Y: y_training[start:end]})
   cost = sess.run(tf.nn.l2_loss(model - y_validation), feed_dict = {X: x_validation})
   errors.append(cost)
   if i%100 == 0:
      print("epoch %d, cost = %g" % (i, cost))
plt.plot(errors, label = 'MLP Function Approximation')
plt.xlabel('epochs')
plt.ylabel('cost')
plt.legend()
plt.show()
Following is the representation of function layer approximation −
Here the two data sets, train and validation, trace the shape of a W and are plotted in distinct colors, as shown in the legend.
Optimizers are extended classes that include additional information for training a specific model. An optimizer class is initialized with the given parameters, but it is important to remember that no Tensor is needed. Optimizers are used to improve speed and performance when training a specific model.
The basic optimizer of TensorFlow is −
tf.train.Optimizer
This class is defined in tensorflow/python/training/optimizer.py.
Following are some optimizers in Tensorflow −
Stochastic Gradient descent
Stochastic Gradient descent with gradient clipping
Momentum
Nesterov momentum
Adagrad
Adadelta
RMSProp
Adam
Adamax
SMORMS3
We will focus on the Stochastic Gradient descent. The illustration for creating optimizer for the same is mentioned below −
def sgd(cost, params, lr = np.float32(0.01)):
g_params = tf.gradients(cost, params)
updates = []
for param, g_param in zip(params, g_params):
updates.append(param.assign(param - lr*g_param))
return updates
The basic parameters are defined within the specific function. In our subsequent chapter, we will focus on Gradient Descent Optimization with implementation of optimizers.
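The update the sgd function above applies, param assigned param minus lr times g_param, can be illustrated outside TensorFlow on a toy one-parameter cost; the quadratic cost and learning rate below are arbitrary choices for the demonstration:

```python
def sgd_step(param, grad, lr=0.01):
    # one stochastic-gradient-descent update: param <- param - lr * grad
    return param - lr * grad

# minimize cost(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
w = 0.0
for _ in range(500):
    grad = 2.0 * (w - 3.0)
    w = sgd_step(w, grad, lr=0.1)

print(round(w, 4))  # 3.0, the minimum of the cost
```

TensorFlow's tf.gradients simply automates the hand-derived gradient line above for arbitrary graphs.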
In this chapter, we will learn about the XOR implementation using TensorFlow. Before starting with XOR implementation in TensorFlow, let us see the XOR table values. This will help us understand the encryption and decryption process.
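The XOR table values referred to above are just the outputs of the exclusive-or operation for each pair of binary inputs; a quick way to generate the table:

```python
# build and print the XOR truth table
rows = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
for a, b, out in rows:
    print(f"{a} XOR {b} = {out}")
```

XOR is 1 exactly when the two inputs differ, which is why a single-layer perceptron cannot separate it and a hidden layer is needed below.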
The XOR Cipher encryption method is basically used to encrypt data that is hard to crack by brute force, i.e., by generating random encryption keys that match the appropriate key.
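Such a cipher can be sketched in a few lines of plain Python; the single-character key below is an arbitrary choice for the demonstration, and XOR-ing twice with the same key restores the original text:

```python
def xor_cipher(text, key):
    # XOR every character with the key; applying the same function twice decrypts
    return ''.join(chr(ord(ch) ^ ord(key)) for ch in text)

encrypted = xor_cipher("tensorflow", "K")
decrypted = xor_cipher(encrypted, "K")
print(decrypted)  # tensorflow
```

The round trip works because (c XOR k) XOR k = c for any character c and key k.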
The concept of implementation with XOR Cipher is to define a XOR encryption key and then perform XOR operation of the characters in the specified string with this key, which a user tries to encrypt. Now we will focus on XOR implementation using TensorFlow, which is mentioned below −
#Declaring necessary modules
import tensorflow as tf
import numpy as np
"""
A simple numpy implementation of a XOR gate to understand the backpropagation
algorithm
"""
x = tf.placeholder(tf.float64,shape = [4,2],name = "x")
#declaring a place holder for input x
y = tf.placeholder(tf.float64,shape = [4,1],name = "y")
#declaring a place holder for desired output y
m = np.shape(x)[0]#number of training examples
n = np.shape(x)[1]#number of features
hidden_s = 2 #number of nodes in the hidden layer
l_r = 1#learning rate initialization
theta1 = tf.cast(tf.Variable(tf.random_normal([3,hidden_s]),name = "theta1"),tf.float64)
theta2 = tf.cast(tf.Variable(tf.random_normal([hidden_s+1,1]),name = "theta2"),tf.float64)
#conducting forward propagation
a1 = tf.concat([np.c_[np.ones(x.shape[0])],x],1)
#the weights of the first layer are multiplied by the input of the first layer
z1 = tf.matmul(a1,theta1)
#the input of the second layer is the output of the first layer, passed through the
#activation function, and a column of biases is added
a2 = tf.concat([np.c_[np.ones(x.shape[0])],tf.sigmoid(z1)],1)
#the input of the second layer is multiplied by the weights
z3 = tf.matmul(a2,theta2)
#the output is passed through the activation function to obtain the final probability
h3 = tf.sigmoid(z3)
cost_func = -tf.reduce_sum(y*tf.log(h3)+(1-y)*tf.log(1-h3),axis = 1)
#built-in tensorflow optimizer that conducts gradient descent using the specified
#learning rate to obtain theta values
optimiser = tf.train.GradientDescentOptimizer(learning_rate = l_r).minimize(cost_func)
#setting required X and Y values to perform XOR operation
X = [[0,0],[0,1],[1,0],[1,1]]
Y = [[0],[1],[1],[0]]
#initializing all variables, creating a session and running a tensorflow session
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
#running gradient descent for each iteration and printing the hypothesis
#obtained using the updated theta values
for i in range(100000):
   sess.run(optimiser, feed_dict = {x:X,y:Y}) #setting place holder values using feed_dict
   if i%100==0:
      print("Epoch:",i)
      print("Hyp:",sess.run(h3,feed_dict = {x:X,y:Y}))
The above line of code generates an output as shown in the screenshot below −
Gradient descent optimization is considered to be an important concept in data science.
Consider the steps shown below to understand the implementation of gradient descent optimization −
Include the necessary modules and declare the x variable, through which we are going to define the gradient descent optimization.
import tensorflow as tf
x = tf.Variable(2, name = 'x', dtype = tf.float32)
log_x = tf.log(x)
log_x_squared = tf.square(log_x)
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(log_x_squared)
Initialize the necessary variables and call the optimizers for defining and calling it with respective function.
init = tf.initialize_all_variables()
def optimize():
   with tf.Session() as session:
      session.run(init)
      print("starting at", "x:", session.run(x), "log(x)^2:", session.run(log_x_squared))
      for step in range(10):
         session.run(train)
         print("step", step, "x:", session.run(x), "log(x)^2:", session.run(log_x_squared))
optimize()
The above line of code generates an output as shown in the screenshot below −
We can see that the necessary epochs and iterations are calculated as shown in the output.
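The result can be checked by hand: the derivative of log(x)^2 is 2*log(x)/x, so the same ten steps can be reproduced without TensorFlow, using the same start value and learning rate as the script above:

```python
import math

x = 2.0      # same starting point as the TensorFlow variable
lr = 0.5     # same learning rate as GradientDescentOptimizer(0.5)
for step in range(10):
    grad = 2.0 * math.log(x) / x   # derivative of log(x)^2
    x -= lr * grad

print(round(x, 6))  # 1.0, where log(x)^2 attains its minimum
```

After roughly six steps the iterate is already within 1e-6 of the minimizer x = 1.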
A partial differential equation (PDE) is a differential equation, which involves partial derivatives with unknown function of several independent variables. With reference to partial differential equations, we will focus on creating new graphs.
Let us assume there is a pond with dimension 500*500 square −
N = 500
Now, we will compute partial differential equation and form the respective graph using it. Consider the steps given below for computing graph.
Step 1 − Import libraries for simulation.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
Step 2 − Include functions for transformation of a 2D array into a convolution kernel and simplified 2D convolution operation.
def make_kernel(a):
   a = np.asarray(a)
   a = a.reshape(list(a.shape) + [1,1])
   return tf.constant(a, dtype = 1)

def simple_conv(x, k):
   """A simplified 2D convolution operation"""
   x = tf.expand_dims(tf.expand_dims(x, 0), -1)
   y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding = 'SAME')
   return y[0, :, :, 0]

def laplace(x):
   """Compute the 2D laplacian of an array"""
   laplace_k = make_kernel([[0.5, 1.0, 0.5], [1.0, -6., 1.0], [0.5, 1.0, 0.5]])
   return simple_conv(x, laplace_k)
sess = tf.InteractiveSession()
Step 3 − Include the number of iterations and compute the graph to display the records accordingly.
N = 500
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype = np.float32)
ut_init = np.zeros([N, N], dtype = np.float32)
# Some rain drops hit a pond at random points
for n in range(100):
   a,b = np.random.randint(0, N, 2)
   u_init[a,b] = np.random.uniform()
plt.imshow(u_init)
plt.show()
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape = ())
damping = tf.placeholder(tf.float32, shape = ())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(U.assign(U_), Ut.assign(Ut_))
# Initialize state to initial conditions
tf.initialize_all_variables().run()
# Run 1000 steps of PDE
for i in range(1000):
   # Step simulation
   step.run({eps: 0.03, damping: 0.04})
   # Visualize every 500 steps
   if i % 500 == 0:
      plt.imshow(U.eval())
      plt.show()
The graphs are plotted as shown below −
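The discretized update rules, U = U + eps*Ut and Ut = Ut + eps*(laplace(U) - damping*Ut), can also be sketched in plain NumPy. The grid size below is reduced and a simpler 5-point Laplacian replaces the 9-point kernel above; both are assumptions made only to keep the sketch small:

```python
import numpy as np

N, eps, damping = 50, 0.03, 0.04
u = np.zeros((N, N), dtype=np.float32)   # wave height
ut = np.zeros((N, N), dtype=np.float32)  # rate of change of height
u[N // 2, N // 2] = 1.0                  # one "rain drop" in the middle

def laplacian(a):
    # 5-point finite-difference Laplacian with zero-padded borders
    out = -4.0 * a
    out[1:, :] += a[:-1, :]
    out[:-1, :] += a[1:, :]
    out[:, 1:] += a[:, :-1]
    out[:, :-1] += a[:, 1:]
    return out

for _ in range(100):
    # both right-hand sides use the old u and ut, as in the TensorFlow version
    u, ut = u + eps * ut, ut + eps * (laplacian(u) - damping * ut)

print(np.isfinite(u).all())  # True: the wave stays bounded
```

The tuple assignment evaluates both right-hand sides before updating, mirroring how tf.group applies both assign operations from the same state.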
TensorFlow includes special support for image recognition; the images used here are stored in a specific folder. With relatively similar images, it is easy to implement this logic for security purposes.
The folder structure of image recognition code implementation is as shown below −
The dataset_image includes the related images, which need to be loaded. We will focus on image recognition with our logo defined in it. The images are loaded with “load_data.py” script, which helps in keeping a note on various image recognition modules within them.
import pickle
from sklearn.model_selection import train_test_split
from scipy import misc
import numpy as np
import os
label = os.listdir("dataset_image")
label = label[1:]
dataset = []
for image_label in label:
   images = os.listdir("dataset_image/"+image_label)
   for image in images:
      img = misc.imread("dataset_image/"+image_label+"/"+image)
      img = misc.imresize(img, (64, 64))
      dataset.append((img,image_label))
X = []
Y = []
for input,image_label in dataset:
   X.append(input)
   Y.append(label.index(image_label))
X = np.array(X)
Y = np.array(Y)
X_train, y_train = X, Y
data_set = (X_train,y_train)
save_label = open("int_to_word_out.pickle","wb")
pickle.dump(label, save_label)
save_label.close()
The training of images helps in storing the recognizable patterns within specified folder.
import numpy
import matplotlib.pyplot as plt
from keras.layers import Dropout
from keras.layers import Flatten
from keras.constraints import maxnorm
from keras.optimizers import SGD
from keras.layers import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
import load_data
from keras.models import Sequential
from keras.layers import Dense
import keras
K.set_image_dim_ordering('tf')
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load data
(X_train,y_train) = load_data.data_set
# normalize inputs from 0-255 to 0.0-1.0
X_train = X_train.astype('float32')
#X_test = X_test.astype('float32')
X_train = X_train / 255.0
#X_test = X_test / 255.0
# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
#y_test = np_utils.to_categorical(y_test)
num_classes = y_train.shape[1]
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), padding = 'same',
activation = 'relu', kernel_constraint = maxnorm(3)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation = 'relu', padding = 'same',
kernel_constraint = maxnorm(3)))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Flatten())
model.add(Dense(512, activation = 'relu', kernel_constraint = maxnorm(3)))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation = 'softmax'))
# Compile model
epochs = 10
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr = lrate, momentum = 0.9, decay = decay, nesterov = False)
model.compile(loss = 'categorical_crossentropy', optimizer = sgd, metrics = ['accuracy'])
print(model.summary())
#callbacks = [keras.callbacks.EarlyStopping(
#   monitor = 'val_loss', min_delta = 0, patience = 0, verbose = 0, mode = 'auto')]
callbacks = [keras.callbacks.TensorBoard(log_dir='./logs',
histogram_freq = 0, batch_size = 32, write_graph = True, write_grads = False,
write_images = True, embeddings_freq = 0, embeddings_layer_names = None,
embeddings_metadata = None)]
# Fit the model
model.fit(X_train, y_train, epochs = epochs,
batch_size = 32,shuffle = True,callbacks = callbacks)
# Final evaluation of the model
scores = model.evaluate(X_train, y_train, verbose = 0)
print("Accuracy: %.2f%%" % (scores[1]*100))
# serialize model to JSONx
model_json = model.to_json()
with open("model_face.json", "w") as json_file:
   json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model_face.h5")
print("Saved model to disk")
The above line of code generates an output as shown below −
In this chapter, we will understand the various aspects of neural network training which can be implemented using TensorFlow framework.
Following are some of the key recommendations, which can be evaluated −
Back propagation is a simple method to compute partial derivatives, which includes the basic form of composition best suitable for neural nets.
In stochastic gradient descent, a batch is the total number of examples, which a user uses to calculate the gradient in a single iteration. So far, it is assumed that the batch has been the entire data set. The best illustration is working at Google scale; data sets often contain billions or even hundreds of billions of examples.
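The batch mechanics described above can be sketched with a small generator; the data and batch size below are arbitrary choices for illustration:

```python
def iterate_batches(data, batch_size):
    # yield consecutive slices of `data`, one slice per gradient step
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

batches = list(iterate_batches(list(range(10)), batch_size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```

Each yielded slice would feed one call to the optimizer, so one pass over all batches is one epoch.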
Adapting the learning rate is one of the most important features of gradient descent optimization. This is crucial to TensorFlow implementation.
Deep neural nets with a large number of parameters form powerful machine learning systems. However, overfitting is a serious problem in such networks.
Max pooling is a sample-based discretization process. The objective is to down-sample an input representation, which reduces the dimensionality with the required assumptions.
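A minimal NumPy sketch of 2x2 max pooling, assuming the input height and width are divisible by the pool size:

```python
import numpy as np

def max_pool_2x2(a):
    # down-sample by keeping the maximum of each non-overlapping 2x2 block
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [4, 5, 6, 7]])
print(max_pool_2x2(a))
# [[4 8]
#  [9 7]]
```

Each output entry keeps only the strongest activation in its block, which is what halves the spatial dimensions in the CNN model above.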
LSTM controls the decision on what inputs should be taken within the specified neuron. It includes the control on deciding what should be computed and what output should be generated.
Replacing your Word Embeddings by Contextualized Word Vectors | by Edward Ma | Towards Data Science

Influenced by Mikolov et al. (2013) and Pennington et al. (2014), word embeddings became the basic step of initializing an NLP project. After that, many embeddings were introduced, such as lda2vec (Moody Christopher, 2016), character embeddings, doc2vec, and so on. Today, we have a new kind of embedding: contextualized word embeddings. The idea is similar, and they achieve the same goal of using a better word representation to solve NLP tasks.
After reading this post, you will understand:
Contextualized Word Vectors Design
Architecture
Implementation
Take Away
Inspired by CNNs, McCann et al. focus on training an encoder and transferring it to other tasks so that a better word representation can be leveraged. Instead of skip-gram (Mikolov et al., 2013) or matrix factorization (Pennington et al., 2014), they leverage machine translation to build Contextualized Word Vectors (CoVe).
Assuming Machine Translation (MT) is general enough to capture the “meaning” of word, we build an encoder and decoder architecture to train a model for MT. After that, we “transfer” the encoder layer to transferring word vectors in other NLP tasks such as classification problem and question answering problem.
Figure a shows how to train a model for Machine Translation. Given word vectors (e.g. GloVe), we can get Context Vectors (CoVe) from the model.
Figure b shows reusing the encoder from result a and applying it to other NLP problems.
As shown in figure b, the inputs of the "Task-specific Model" are Word Vectors (e.g. GloVe or word2vec) and the Encoder (i.e. the result from MT). Therefore, McCann et al. introduced the above formula to get the new word embeddings (concatenating GloVe(w) and CoVe(w)).
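The concatenation itself is plain vector stacking. The dimensions below (300-d GloVe, 600-d CoVe) are assumptions for illustration, not values taken from this article:

```python
import numpy as np

rng = np.random.default_rng(0)
glove_w = rng.normal(size=300)  # stand-in for GloVe(w), assumed 300-d
cove_w = rng.normal(size=600)   # stand-in for CoVe(w), assumed 600-d

# new representation: [GloVe(w); CoVe(w)]
embedding = np.concatenate([glove_w, cove_w])
print(embedding.shape)  # (900,)
```

The downstream task model then consumes this longer vector in place of GloVe alone.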
Implementation
Before that you need to install corresponding libraries (Copy from CoVe github):
git clone https://github.com/salesforce/cove.git # use ssh: git@github.com:salesforce/cove.gitcd covepip install -r requirements.txtpython setup.py develop# On CPUpython test/example.py --device -1# On GPUpython test/example.py
If you face an issue with "pip install -r requirements.txt", you may replace it with the following commands
conda install -c pytorch pytorchpip install -e git+https://github.com/jekbradbury/revtok.git#egg=revtokpip install https://github.com/pytorch/text/archive/master.zip
If you use Keras (Tensorflow), you can follow this notebook to build CoVe. However, there is an issue in the original notebook; you may check out my modified version for reference. The issue is that the original CoVe (in PyTorch) renamed a layer from "rnn" to "rnn1". A pre-trained model (download Keras_CoVe.h5) is available as well.
On the other hand, you can also use pre-converted Keras version as well.
# Init CoVe Model
cove_model = keras.models.load_model(cove_file)
# Init GloVe Model
glove_model = GloVeEmbeddings()
glove_model.load_model(dest_dir=word_embeddings_dir, process=False)
# Encode sentence by GloVe
x_embs = glove_model.encode(tokens)
# Encode GloVe vector by CoVe
x_embs = cove_model.predict(x_embs)
To access all code, you can visit this github repo.
CoVe needs labeled data to get the contextual word vectors.
Using GloVe to build CoVe
CoVe is unable to resolve the OOV issue. It suggests using zero vectors to represent unknown words.
I am Data Scientist in Bay Area. Focusing on state-of-the-art in Data Science, Artificial Intelligence , especially in NLP and platform related. You can reach me from Medium Blog, LinkedIn or Github.
McCann B., Bradbury J., Xiong C., Socher R.. Learned in Translation: Contextualized Word Vectors. 2017. http://papers.nips.cc/paper/7209-learned-in-translation-contextualized-word-vectors.pdf
CoVe in Pytorch (Original)
CoVe in Keras
C++ iomanip Library - setfill Function

The C++ function std::setfill behaves as if member fill were called with c as argument on the stream on which it is inserted as a manipulator (it can be inserted on output streams).
It is used to set c as the stream's fill character.
Following is the declaration for std::setfill function.
setfill (char_type c);
c − The new fill character for the stream. char_type is the type of characters used by the stream (i.e., its first class template parameter, charT).
Its return value is unspecified. This function should only be used as a stream manipulator.
Basic guarantee − if an exception is thrown, the stream is in a valid state.
The stream object on which it inserted is modified. Concurrent access to the same stream object may introduce data races.
The example below explains the setfill function.
#include <iostream>
#include <iomanip>
int main () {
std::cout << std::setfill ('x') << std::setw (10);
std::cout << 77 << std::endl;
return 0;
}
Let us compile and run the above program, this will produce the following result −
xxxxxxxx77
Couchbase Installation - GeeksforGeeks

08 Jul, 2020
Couchbase Server is NoSQL database software package that is used for interactive applications. It is an opensource NoSQL database that provides us with a mechanism for storage and recovery of data which is modelled in means other than the tabular relations used in relational databases. It has multiple data access paths to query and manage our JSON documents. It provides Eventual Consistency and Immediate Consistency methods to ensure consistency in a distributed system.
Installation :To install couchbase we first need to download the couchbase server setup file and after that, we will have to install that through a package manager, and then we will finally have to start the couchbase server in order to run it over localhost.
The installation procedure is as follows:
Download couchbase Database from here.
Move to the location where the file is downloaded and use the following command to install the downloaded file.sudo apt install ./couchbase-server-community_6.5.1-ubuntu18.04_amd64.debNote:- Replace the version in the command with the downloaded version.
sudo apt install ./couchbase-server-community_6.5.1-ubuntu18.04_amd64.deb
Note:- Replace the version in the command with the downloaded version.
After the successful installation, run the couchbase service using the following command.sudo service couchbase-server start
sudo service couchbase-server start
Now visit http://localhost:8091/ and if it gives the following result, then your installation is successful.
To create your own cluster:
Click on setup new cluster button in order to create your own cluster.
Enter your cluster details there.
Read properly the terms and conditions of couchbase and accept them if found suitable.
Enter the configuration you want for your cluster.
If everything goes well you will have a dashboard like the one shown below.
Python Mathematical Functions

The math module is used to access mathematical functions in Python. All methods of these functions are used for integer or real type objects, not for complex numbers.
To use this module, we should import that module into our code.
import math
These constants are used to put them into our calculations.
pi − Returns the value of pi: 3.141592
e − Returns the value of the natural base e: 2.718282
tau − Returns the value of tau: 6.283185
inf − Returns infinity
nan − Represents a not-a-number value
These functions are used to represent numbers in different forms. The methods are like below −
ceil(x) − Return the ceiling value. It is the smallest integer, greater than or equal to the number x.
copysign(x, y) − Returns the number x with the sign of y.
fabs(x) − Returns the absolute value of x.
factorial(x) − Returns the factorial of x, where x ≥ 0.
floor(x) − Return the floor value. It is the largest integer, less than or equal to the number x.
fsum(iterable) − Find the sum of the elements in an iterable object.
gcd(x, y) − Returns the greatest common divisor of x and y.
isfinite(x) − Checks whether x is neither an infinity nor NaN.
isinf(x) − Checks whether x is infinity.
isnan(x) − Checks whether x is not a number.
remainder(x, y) − Find the remainder after dividing x by y.
import math
print('The Floor and Ceiling value of 23.56 are: ' + str(math.ceil(23.56)) + ', ' + str(math.floor(23.56)))
x = 10
y = -15
print('The value of x after copying the sign from y is: ' + str(math.copysign(x, y)))
print('Absolute value of -96 and 56 are: ' + str(math.fabs(-96)) + ', ' + str(math.fabs(56)))
my_list = [12, 4.25, 89, 3.02, -65.23, -7.2, 6.3]
print('Sum of the elements of the list: ' + str(math.fsum(my_list)))
print('The GCD of 24 and 56 : ' + str(math.gcd(24, 56)))
x = float('nan')
if math.isnan(x):
print('It is not a number')
x = float('inf')
y = 45
if math.isinf(x):
print('It is Infinity')
print(math.isfinite(x)) #x is not a finite number
print(math.isfinite(y)) #y is a finite number
The Floor and Ceiling value of 23.56 are: 24, 23
The value of x after copying the sign from y is: -10.0
Absolute value of -96 and 56 are: 96.0, 56.0
Sum of the elements of the list: 42.13999999999999
The GCD of 24 and 56 : 8
It is not a number
It is Infinity
False
True
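Two functions from the table above that the example does not exercise are factorial and remainder. Note that math.remainder (available from Python 3.7) uses the IEEE 754 definition based on the nearest integer quotient, so the result can be negative:

```python
import math

print(math.factorial(5))     # 120
print(math.remainder(7, 3))  # 1.0   (7/3 rounds to 2, and 7 - 2*3 = 1)
print(math.remainder(8, 3))  # -1.0  (8/3 rounds to 3, and 8 - 3*3 = -1)
```

This makes remainder different from the % operator, which always returns a result with the sign of the divisor.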
These functions are used to calculate different power related and logarithmic related tasks.
pow(x, y) − Returns x raised to the power y.
sqrt(x) − Finds the square root of x.
exp(x) − Finds e^x, where e = 2.718282.
log(x[, base]) − Returns the log of x to the given base. The default base is e.
log2(x) − Returns the log of x, base 2.
log10(x) − Returns the log of x, base 10.
import math
print('The value of 5^8: ' + str(math.pow(5, 8)))
print('Square root of 400: ' + str(math.sqrt(400)))
print('The value of e^5: ' + str(math.exp(5)))
print('The value of Log(625), base 5: ' + str(math.log(625, 5)))
print('The value of Log(1024), base 2: ' + str(math.log2(1024)))
print('The value of Log(1024), base 10: ' + str(math.log10(1024)))
The value of 5^8: 390625.0
Square root of 400: 20.0
The value of e^5: 148.4131591025766
The value of Log(625), base 5: 4.0
The value of Log(1024), base 2: 10.0
The value of Log(1024), base 10: 3.010299956639812
These functions are used to calculate different trigonometric operations.
sin(x) − Returns the sine of x radians.
cos(x) − Returns the cosine of x radians.
tan(x) − Returns the tangent of x radians.
asin(x) − Returns the inverse sine of x; acos and atan are also available.
degrees(x) − Converts angle x from radians to degrees.
radians(x) − Converts angle x from degrees to radians.
import math
print('The value of Sin(60 degree): ' + str(math.sin(math.radians(60))))
print('The value of cos(pi): ' + str(math.cos(math.pi)))
print('The value of tan(90 degree): ' + str(math.tan(math.pi/2)))
print('The angle of sin(0.8660254037844386): ' + str(math.degrees(math.asin(0.8660254037844386))))
The value of Sin(60 degree): 0.8660254037844386
The value of cos(pi): -1.0
The value of tan(90 degree): 1.633123935319537e+16
The angle of sin(0.8660254037844386): 59.99999999999999
Multi Label Classification using Bag-of-Words (BoW) and TF-IDF | by Snehal Nair | Towards Data Science

For this study, we are using Kaggle data for the Toxic Comment Classification Challenge. Let's load and inspect the data. This is a multilabel classification problem where comments are classified by level of toxicity: toxic / severe_toxic / obscene / threat / insult / identity_hate.
import pandas as pd

data = pd.read_csv('train.csv')
print('Shape of the data: ', data.shape)
data.head()
y_cols = list(data.columns[2:])
is_multilabel = (data[y_cols].sum(axis=1) > 1).sum()  # rows carrying more than one label
print('is_multilabel count: ', is_multilabel)
From the above data we can see that not all comments have a label.
It's multilabel data (each comment can have more than one label).
Add a label, 'non_toxic', for comments with no label.
Let's also explore how balanced the classes are.
# Add a label, 'non_toxic' for comments with no label
data['non_toxic'] = 1 - data[y_cols].max(axis=1)
y_cols += ['non_toxic']

# Inspect the class balance
def get_class_weight(data):
    class_weight = {}
    for num, col in enumerate(y_cols):
        if num not in class_weight:
            class_weight[col] = round((data[data[col] == 1][col].sum()) / data.shape[0] * 100, 2)
    return class_weight

class_weight = get_class_weight(data)
print('Total class weight: ', sum(class_weight.values()), '%\n\n', class_weight)
We can see that the data is highly imbalanced. Imbalanced data refers to classification problems where the classes are not represented equally; e.g., 89% of comments are classified under the newly built 'non_toxic' label.
Any given linear model will handle class imbalance very badly if it uses squared loss for binary classification. We will not discuss the techniques to tackle the imbalance problem in this project. Let's focus on preprocessing the text data before converting it to numeric data using BoW and tf-idf.
from sklearn.model_selection import train_test_split

X, X_test, y, y_test = train_test_split(X_data, y_data, test_size=0.2, train_size=0.8)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, train_size=0.75)
X_train[:1]
Let's take a closer look at one of the comments. Please note the text will vary for you since the dataset is split randomly; use a seed in the split if you aim to reproduce the results.
From the above example, we can see that the text requires preprocessing, i.e., converting it into the same case (lower) and removing symbols, numbers, and stop words before the text is converted into tokens. Preprocessing the text requires downloading specific libraries.
import re
import numpy as np
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
nltk.download('punkt')
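The article's preprocessing function itself is not shown in this excerpt, so here is a minimal sketch of the steps just described (lowercasing, stripping symbols and digits, removing stop words). The regular expression and the tiny hardcoded stop-word set are my assumptions, standing in for NLTK's downloaded list, so the sketch stays self-contained:

```python
import re

# Small stand-in for nltk.corpus.stopwords.words('english'),
# so this sketch runs without downloading any NLTK corpora
STOPWORDS = {'the', 'a', 'an', 'is', 'are', 'this', 'that', 'of', 'and', 'you'}

def text_prepare(text):
    """Lowercase, drop symbols and digits, and remove stop words."""
    text = text.lower()
    text = re.sub(r'[^a-z\s]', ' ', text)  # keep lowercase letters only
    tokens = [w for w in text.split() if w not in STOPWORDS]
    return ' '.join(tokens)

print(text_prepare("This is an EXAMPLE!! Comment, with 123 numbers & symbols."))
# -> example comment with numbers symbols
```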
For machine learning models, the textual data must be converted to numeric data. This can be done in various ways like BoW, tf-idf, Word embeddings, etc. In this project, we will be focusing on BoW and tf-idf.
In the BoW model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity.
We will restrict ourselves to the N most popular words to limit the size of the matrix. Moreover, including unpopular words would only introduce sparsity without adding much information. For this project, let's work with the 10,000 most popular words.
# Lets take a look at top 10 popular words
POPULAR_WORDS[:10]
For each comment in the corpora, create an N-dimensional zero vector and, for each word found in the comment, increase the value at that word's index by 1; e.g., if a word appears twice, that index in the vector will get 2.
For efficient storage, we will convert this vector into a sparse vector, one that leverages sparsity and actually stores only nonzero entries.
from scipy import sparse as sp_sparse
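The article's code cuts off after this import; a sketch of the vectorization step it describes might look like the following. The toy vocabulary below is my own stand-in for the 10,000 POPULAR_WORDS used in the article:

```python
import numpy as np
from scipy import sparse as sp_sparse

def my_bag_of_words(text, words_to_index, dict_size):
    """Return a count vector of length dict_size for one comment."""
    result_vector = np.zeros(dict_size)
    for word in text.split():
        if word in words_to_index:
            result_vector[words_to_index[word]] += 1  # +1 per occurrence
    return result_vector

# Toy vocabulary standing in for the 10,000 popular words
vocab = {'good': 0, 'movie': 1, 'not': 2, 'did': 3, 'like': 4}
texts = ['good movie', 'not a good movie', 'did not like']

# Stack one sparse row per comment; only nonzero entries are stored
X_counts = sp_sparse.vstack(
    [sp_sparse.csr_matrix(my_bag_of_words(t, vocab, len(vocab))) for t in texts]
)
print(X_counts.shape)         # (3, 5)
print(X_counts.toarray()[1])  # counts for 'not a good movie'
```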
In information retrieval, tf–idf or TFIDF, short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus.
This method is an extension to Bag-of-Words where the total frequency of the word is divided by total words in the document. This penalizes too frequent word by normalizing it over the entire document.
from sklearn.feature_extraction.text import TfidfVectorizer
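The vectorizer configuration itself is not shown in this excerpt; a plausible setup is sketched below. The specific min_df, max_df, n-gram range, and token pattern are my assumptions, not necessarily the author's choices:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_features(X_train, X_val, X_test):
    """Fit tf-idf on the training split only, then transform all splits."""
    tfidf_vectorizer = TfidfVectorizer(
        min_df=5,                    # drop words seen in fewer than 5 comments
        max_df=0.9,                  # drop words seen in over 90% of comments
        ngram_range=(1, 2),          # unigrams and bigrams
        token_pattern=r'(?u)\S\S+',  # tokens of two or more non-space chars
    )
    X_train = tfidf_vectorizer.fit_transform(X_train)
    X_val = tfidf_vectorizer.transform(X_val)
    X_test = tfidf_vectorizer.transform(X_test)
    return X_train, X_val, X_test, tfidf_vectorizer.vocabulary_
```

Fitting on the training split only (and merely transforming the validation and test splits) avoids leaking document frequencies from held-out data.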
We have the datasets prepared using two different techniques, BoW and tf-idf, and we can run classifiers on both. Since this is a multilabel classification problem, we will be using a simple OneVsRestClassifier with logistic regression.
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
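With the imports in place, fitting one binary logistic regression per label looks roughly like the sketch below. The features and labels here are tiny synthetic stand-ins (the real inputs would be the BoW or tf-idf matrices built above), and the hyperparameters are defaults, not tuned values:

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_train = rng.rand(40, 6)              # stand-in for tf-idf features

# 3 binary labels per sample; each column contains both classes
y_train = np.zeros((40, 3), dtype=int)
y_train[:20, 0] = 1
y_train[10:25, 1] = 1
y_train[30:, 2] = 1

# OneVsRestClassifier fits one independent classifier per label column
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

y_pred = clf.predict(X_train)
print(y_pred.shape)  # (40, 3): one 0/1 prediction per label
```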
You can experiment with different regularization techniques, L1 and L2, with different coefficients (e.g. C equal to 0.1, 1, 10, 100) until you are happy with the result; this is called hyperparameter tuning. It can be achieved by grid search with cross-validation, random search, or Bayesian optimisation. We are not covering this topic in this article; if you would like to learn more, please refer to this post.
We will using metrics like accuracy score and f1 score for evaluation.
Accuracy Score: In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.
F1 score : The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0. The relative contribution of precision and recall to the F1 score are equal. F1 score = 2 * (precision * recall) / (precision + recall)
'F1 score micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
'F1 score macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
'F1 score weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall.
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import average_precision_score
from sklearn.metrics import recall_score
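To make the three averaging modes concrete, here is a small synthetic illustration; the label matrices below are made up for demonstration, not the model's actual output:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [0, 1, 1]])
y_hat = np.array([[1, 0, 0],
                  [1, 0, 0],
                  [0, 1, 1]])

# Subset accuracy: a row only counts if every label matches
print('accuracy =', accuracy_score(y_true, y_hat))           # 2 of 3 rows
print('micro    =', f1_score(y_true, y_hat, average='micro'))
print('macro    =', f1_score(y_true, y_hat, average='macro'))
print('weighted =', f1_score(y_true, y_hat, average='weighted'))
```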
F1 score weighted and macro, which account for data imbalance, look good. Let's check the output, comparing predicted and actual labels. We will need to replace the one-hot encoded labels with actual label names for interpretation. Let's run prediction on the tf-idf model.
test_predictions = classifier_tfidf.predict(X_test_tfidf)
for i in range(90, 97):
    print('\ny_label: ', test_labels[i], '\ny_pred: ', test_pred_labels[i])
The results are not too bad, but the model can do better. Please experiment with hyperparameter tuning and try different classifiers to check the model's performance. Hope you enjoyed reading.
I have left the code for building the word cloud image, if you are interested.
comments_join = ' '.join(POPULAR_WORDS)

from scipy.misc import imread  # note: removed in SciPy >= 1.2; imageio.imread is the modern replacement
from wordcloud import WordCloud, STOPWORDS

twitter_mask = imread('twitter.png', flatten=True)
wordcloud = WordCloud(
    stopwords=STOPWORDS,
    background_color='white',
    width=1800,
    height=1400,
    mask=twitter_mask
).generate(comments_join)

plt.figure(figsize=(12, 12), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.savefig('twitter_comments.png', dpi=300)
plt.show()
For jupyter notebook with codes, please click here.
Find all divisors of a natural number - Set 1 in C++

In this tutorial, we are going to write a program that finds all the divisors of a natural number. It's a straightforward problem. Let's see the steps to solve it.
Initialize the number.
Write a loop that iterates from 1 to the given number.
Check whether the given number is divisible by the current number.
If the above condition is satisfied, print the current number.
Let's see the code.
#include <bits/stdc++.h>
using namespace std;
void findDivisors(int n) {
for (int i = 1; i <= n; i++) {
if (n % i == 0) {
cout << i << " ";
}
}
cout << endl;
}
int main() {
findDivisors(65);
return 0;
}
If you run the above code, then you will get the following result.
1 5 13 65
If you have any queries in the tutorial, mention them in the comment section.
Imbalanced Classes: Part 1 | by Becca R | Towards Data Science

For a recent data science project, I developed a supervised learning model to classify the booking location of a first-time user of the vacation home site Airbnb. This dataset is available on Kaggle as part of a 2015 Kaggle competition.
For my project, I decided to group users into two groups: those who booked their first trip within the U.S.A. and Canada, and those who booked their first trip elsewhere internationally, essentially turning the problem into a binary classification problem. Sounds simple, right?
The problem was that the classification target (booking location) is highly imbalanced. Nearly 75 percent of first time users booked trips within the U.S.A. and Canada. After my initial model showed promising results, further inspection of the model’s performance metrics highlighted a crucial problem when trying to perform binary classification with imbalanced class sizes. This post aims to highlight some pitfalls to be aware of when building a classification model with imbalanced classes, and highlights some methods to deal with these issues.
The data
This plot shows the significant imbalance inherent in the target class:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import pickle
import seaborn as sns

df = pd.read_pickle('data_for_regression_20k.pkl')
sns.set_style("white")

dests2 = df.groupby('country_USA_World_bi').agg({'country_USA_World_bi': ['count']}).reset_index()
dests2.columns = ['dest', 'count']
dests2['pct'] = dests2['count'] * 100 / (sum(dests2['count']))

x = dests2['dest']
y = dests2['pct']
palette = ['olive', 'mediumvioletred']

fig, ax = plt.subplots(figsize=(8, 4))
fig = sns.barplot(y, x, estimator=sum, ci=None, orient='h', palette=palette)
y_lab = ['USA/Canada', 'International']
ax.set_yticklabels(labels=y_lab, ha='right')
for i, v in enumerate(y):
    ax.text(v - 15, i + .05, str(int(v) + .5) + '%', color='white', fontweight='bold')
plt.title('Country Destinations as Percent of Total Bookings', size=16, weight='bold')
plt.ylabel('Country')
plt.xlabel('Percent of total');
Before fitting a logistic regression classifier to my data, I split the data into a training set (80%) and hold-out set for testing (20%). Due to the underrepresentation of international travelers, I used the stratify parameter to ensure that both target classes were represented in the test set. Then, I standardized both the train and test feature sets using a standard scaler.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import precision_score, recall_score, precision_recall_curve, f1_score, fbeta_score, make_scorer

y = df['country_USA_World_bi']
X = df.drop(['country_dest_id', 'country_USA_World_bi', 'month_created',
             'day_created', 'month_active', 'day_active'], axis=1)

Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2, stratify=y, random_state=88)

std_scale = StandardScaler()
X_train_scaled = std_scale.fit_transform(Xtrain)
X_test_scaled = std_scale.transform(Xtest)
The first model I ran was logistic regression, because logistic regression includes feature coefficients (which helps with interpretability). I fit an initial model with default hyperparameters, and was pleasantly surprised to see that the model was 75 percent accurate prior to any hyperparameter tuning. Naturally, I found myself wondering if this was too good to be true. Remembering how 75 percent of users traveled within the U.S.A./Canada, couldn't I be 75 percent accurate by simply guessing this destination every time?
In fact, model precision (the proportion of predicted international bookings that are actually international), model recall (the proportion of international bookings correctly identified by the model), and the f1 score (a balance of the two) are pretty poor.
For the purpose of this project, I am most interested in the recall score, since I think it is most useful for the model to be able to accurately predict users who will travel internationally (since those who travel internationally are more likely to be avid travelers with larger travel budgets). However, given the initial recall score of just 0.01, I had a long way to go to improve this model!
from sklearn.linear_model import LogisticRegression

def fit_logistic_regression_classifier(X_training_set, y_training_set):
    logreg = LogisticRegression(random_state=88)
    model = logreg.fit(X_training_set, y_training_set)
    y_pred = model.predict(X_test_scaled)
    print('accuracy = ', model.score(X_test_scaled, ytest).round(2),
          'precision = ', precision_score(ytest, y_pred).round(2),
          'recall = ', recall_score(ytest, y_pred).round(2),
          'f1_score = ', f1_score(ytest, y_pred).round(2))
    return y_pred

y_pred = fit_logistic_regression_classifier(X_train_scaled, ytrain)
A confusion matrix is a great tool to visualize the extent to which the model got, well, confused. The confusion matrix in sklearn gives raw value counts for the number of observations predicted to be in each class, by their actual class. The plot_confusion_matrix() function gives a visual representation of the percent of values in each actual and predicted class.
import itertools
from sklearn.metrics import confusion_matrix

def make_confusion_matrix(cm, classes, title='Confusion matrix', cmap=plt.cm.Blues):
    print(cm)
    # Normalize values
    cm = cm.astype('float') * 100 / cm.sum(axis=1)[:, np.newaxis]
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    fmt = '.2f'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > 50 else "black")
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()

def plot_confusion_matrix(y_test_set, y_pred):
    class_names = ['Domestic', 'International']
    cnf_matrix = confusion_matrix(y_test_set, y_pred)
    np.set_printoptions(precision=2)
    plt.figure()
    make_confusion_matrix(cnf_matrix, classes=class_names,
                          title='Confusion matrix with normalization');
The figure shows that my intuition was correct — the model had classified almost 100 percent of observations as domestic travelers, therefore hitting the mark 75 percent of the time!
plot_confusion_matrix(ytest, y_pred)
1. Random Oversampling
The Imbalanced Learn library includes a variety of methods to rebalance classes for more accurate predictive capability. The method I tried is called Random Oversampling. According to the documentation, “random over-sampling can be used to repeat some samples and balance the number of samples between the dataset.” Basically, this rebalancing method uses random sampling of target classes with replacement to get a balanced representation of each class in the training set. Indeed, after fitting the Random Over Sampler to my training set, I had a sample of 12,743 observations in each target class, compared with my baseline scenario of 3,186 domestic bookings and 1,088 international bookings.
from imblearn.over_sampling import RandomOverSampler

ros = RandomOverSampler(random_state=88)
X_resampled, y_resampled = ros.fit_sample(X_train_scaled, ytrain)

yvals, counts = np.unique(ytest, return_counts=True)
yvals_ros, counts_ros = np.unique(y_resampled, return_counts=True)
print('Classes in test set:', dict(zip(yvals, counts)), '\n',
      'Classes in rebalanced test set:', dict(zip(yvals_ros, counts_ros)))
Just as before, I fit a logistic regression classifier with default parameters and observed the model’s performance metrics and confusion matrix. Since the model could no longer correctly guess the domestic location 75 percent of the time, its performance decreased significantly: accuracy dropped to 54 percent.
However, recall, my metric of interest, increased — from 0.01 to 0.58. Providing balanced class sizes in the test set significantly increased the model’s ability to predict the minority class (in my case, Airbnb bookings in international locations).
y_pred_ros = fit_logistic_regression_classifier(X_resampled, y_resampled)
plot_confusion_matrix(ytest, y_pred_ros)
2. SMOTE and ADASYN
The Synthetic Minority Oversampling Technique (SMOTE) and the Adaptive Synthetic (ADASYN) are two additional methods for oversampling the minority class. Unlike Random Over Sampling, which over samples existing observations, SMOTE and ADASYN use interpolation to create new observations near existing observations of the minority class.
For the SMOTE rebalancing, I used the SMOTENC object, since the majority (all but six) of my features are non-continuous (i.e. categorical) features. Just as before, I ended up with a balanced training set.
ADASYN gave me a new training set that was around 49 percent travelers to U.S.A./Canada and 51 percent travelers abroad. This (trivial) imbalance is due to the way in which ADASYN creates new data points around difficult to classify points according to a weighted distribution of their difficulty (see He, et al., 2008).
Just like with Random Over Sampling, the model’s ability to classify all destinations (accuracy) decreased with oversampling. On the other hand, the model’s ability to classify the minority class (recall) improved with both SMOTE and ADASYN.
from imblearn.over_sampling import SMOTENC, ADASYN

smote_nc = SMOTENC(categorical_features=list(np.arange(7, 80)), random_state=88)
X_smoted, y_smoted = smote_nc.fit_resample(X_train_scaled, ytrain)

adasyn = ADASYN(random_state=88)
X_adasyn, y_adasyn = adasyn.fit_resample(X_train_scaled, ytrain)

yvals, counts = np.unique(ytest, return_counts=True)
yvals_smt, counts_smt = np.unique(y_smoted, return_counts=True)
yvals_ads, counts_ads = np.unique(y_adasyn, return_counts=True)
print('Classes in test set:', dict(zip(yvals, counts)), '\n',
      'Classes in rebalanced test set with SMOTENC:', dict(zip(yvals_smt, counts_smt)), '\n',
      'Classes in rebalanced test set with ADASYN:', dict(zip(yvals_ads, counts_ads)))
y_pred_smt = fit_logistic_regression_classifier(X_smoted, y_smoted)
plot_confusion_matrix(ytest, y_pred_smt)
y_pred_ads = fit_logistic_regression_classifier(X_adasyn, y_adasyn)
plot_confusion_matrix(ytest, y_pred_ads)
3. Gridsearch on balanced classes
Since the baseline model with ADASYN oversampling performed best in terms of recall, I performed a gridsearch on this test set to find the parameters that would further optimize model performance.
from sklearn.model_selection import GridSearchCV

grid = {"C": np.logspace(-3, 3, 7), "penalty": ["l1", "l2"]}  # l1 lasso, l2 ridge
logreg = LogisticRegression(random_state=88)
logreg_cv = GridSearchCV(logreg, grid, cv=5, scoring='recall')
logreg_cv.fit(X_adasyn, y_adasyn)
print("tuned hyperparameters (best parameters): ", logreg_cv.best_params_)
y_pred_cv = logreg_cv.predict(X_test_scaled)
print('accuracy = ', logreg_cv.score(X_test_scaled, ytest).round(2),
      'precision = ', precision_score(ytest, y_pred_cv).round(2),
      'recall = ', recall_score(ytest, y_pred_cv).round(2),
      'f1_score = ', f1_score(ytest, y_pred_cv).round(2))
plot_confusion_matrix(ytest, y_pred_cv)
While balanced classes and hyperparameter tuning yielded significant improvements to the model’s recall score, model precision remained quite low, at 0.3. This means that only 30% of users classified as international travellers are actually booking Airbnbs internationally. In a business setting, a model like this might be used to inform targeted ads for vacation homes based on predicted booking destination. This means that 70 percent of users receiving suggestions for, say, homes overlooking the Eiffel Tower will in fact be looking to travel domestically. Such mis-targeting would not only prove irrelevant to this group, but failure to disseminate relevant ads to the U.S.A./Canada group could mean missed revenue over time.
Now that I’ve accounted for overestimation of model performance by oversampling the minority class, next steps might include additional feature engineering to tease out more signal and fitting alternative classification algorithms (such as K-Nearest Neighbors or Random Forest Classifier).
Conclusion
In this example, model accuracy declined significantly once I rebalanced the target class sizes. Even after hypterparameter tuning using gridsearch cross-validation, the logistic regression model was 10 percentage points less accurate than the baseline model with imbalanced classes.
This example demonstrates the importance of taking class imbalance into account so as to avoid over-estimating the accuracy of classification models. I have also outlined, with working code, three techniques for re-balancing classes through over-sampling (Random Over Sampling, SMOTE, and ADASYN). Further information on each technique can be found on the Imbalanced-Learn documentation.
JavaScript String - localeCompare() Method

This method returns a number indicating whether a reference string comes before or after or is the same as the given string in sorted order.
The syntax of localeCompare() method is −
string.localeCompare( param )
param − A string to be compared with string object.
It returns one of the following values −

0 − the strings match exactly.
1 − no match, and the parameter value comes before the string object's value in the locale sort order.
-1 − no match, and the parameter value comes after the string object's value in the locale sort order.
Try the following example.
<html>
<head>
<title>JavaScript String localeCompare() Method</title>
</head>
<body>
<script type = "text/javascript">
var str1 = new String( "This is beautiful string" );
var index = str1.localeCompare( "XYZ" );
document.write("localeCompare first :" + index );
document.write("<br />" );
var index = str1.localeCompare( "AbCD ?" );
document.write("localeCompare second :" + index );
</script>
</body>
</html>
localeCompare first :-1
localeCompare second :1
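A common practical use of localeCompare() is as a comparator for Array.sort(), since sort() expects exactly the negative/zero/positive result this method returns. This usage example is an addition to the reference, not part of the original page:

```javascript
var fruits = ["cherry", "apple", "banana"];

// localeCompare's sign tells sort() which string should come first
fruits.sort(function (a, b) {
   return a.localeCompare(b);
});

console.log(fruits);  // [ 'apple', 'banana', 'cherry' ]
```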
C/C++ program for calling main() in main() - GeeksforGeeks | 28 Jul, 2021
Given a number N, the task is to write a C/C++ program that prints the numbers from N to 1 by calling the main() function recursively. Examples:

Input: N = 10
Output: 10 9 8 7 6 5 4 3 2 1

Input: N = 5
Output: 5 4 3 2 1
Approach:
Use a static variable to initialise the given number N.
Print the number N and decrement it.
Call the main() function recursively after the above step.
Below is the implementation of the above approach:
C
C++
// C program to illustrate calling
// main() function in main() itself
#include <stdio.h>

// Driver Code
int main()
{
    // Declare a static variable
    static int N = 10;

    // Condition for calling main()
    // recursively
    if (N > 0) {
        printf("%d ", N);
        N--;
        main();
    }
}
// C++ program to illustrate calling
// main() function in main() itself
#include <iostream>
using namespace std;

// Driver Code
int main()
{
    // Declare a static variable
    static int N = 10;

    // Condition for calling main()
    // recursively until N is 0
    if (N > 0) {
        cout << N << ' ';
        N--;
        main();
    }
}
10 9 8 7 6 5 4 3 2 1
Time Complexity: O(N), where N is the given number.
HTTP headers | Content-Range - GeeksforGeeks | 23 Oct, 2019
The Content-Range HTTP header is a response header that indicates where a partial message belongs in a full body message. This header is sent with a partial entity-body to specify where in the full entity-body the partial body should be applied.
Syntax:
Content-Range: <unit> <range-start>-<range-end>/<size>
Content-Range: <unit> <range-start>-<range-end>/*
Content-Range: <unit> */<size>
Directives:
<unit>: This specifies the unit of content range. This value is usually in bytes.
<range-start>: An integer that specifies the beginning of the requested range in a given unit.
<range-end>: An integer that specifies the ending of the requested range in a given unit.
<size>: This specifies the total size of the document. The ‘*’ is used when the value is unknown.
Examples:
Content-Range: bytes 500-1000/65989
It specifies the unit of the range as bytes, and states the range-start as 500 bytes and the range-end as 1000 bytes. The total size of the document is 65989 bytes.
Content-Range: bytes 50-1000/*
It specifies the unit of the range as bytes, and states the range-start as 50 bytes and the range-end as 1000 bytes. The total size of the document is unknown.
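To make the grammar above concrete, here is a small parser sketch in Python. The function name and regular expression are my own illustration of the three directive forms, not taken from any HTTP library:

```python
import re

# <unit> <range-start>-<range-end>/<size>, where the range pair or the
# size may be replaced by '*' per the grammar above
CONTENT_RANGE = re.compile(r'^(\w+) (?:(\d+)-(\d+)|\*)/(\d+|\*)$')

def parse_content_range(value):
    m = CONTENT_RANGE.match(value)
    if not m:
        raise ValueError('malformed Content-Range: ' + value)
    unit, start, end, size = m.groups()
    return {
        'unit': unit,
        'start': int(start) if start is not None else None,
        'end': int(end) if end is not None else None,
        'size': int(size) if size != '*' else None,  # None when unknown
    }

print(parse_content_range('bytes 500-1000/65989'))
# {'unit': 'bytes', 'start': 500, 'end': 1000, 'size': 65989}
```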
Supported Browsers: The following browsers are compatible with the HTTP Content-Range header:
Google Chrome
Internet Explorer
Opera
Firefox
Safari
Python | Pandas Series.dt.weekofyear | 20 Mar, 2019
Series.dt can be used to access the values of the series as datetime-like values and return several properties. The Pandas Series.dt.weekofyear attribute returns a numpy array containing the week ordinal of the year in the underlying data of the given series object.
Syntax: Series.dt.weekofyear
Parameter : None
Returns : numpy array
Example #1: Use Series.dt.weekofyear attribute to return the week ordinal of the year in the underlying data of the given Series object.
# importing pandas as pd
import pandas as pd

# Creating the Series
sr = pd.Series(['2012-10-21 09:30', '2019-7-18 12:30', '2008-02-2 10:30',
                '2010-4-22 09:25', '2019-11-8 02:22'])

# Creating the index
idx = ['Day 1', 'Day 2', 'Day 3', 'Day 4', 'Day 5']

# set the index
sr.index = idx

# Convert the underlying data to datetime
sr = pd.to_datetime(sr)

# Print the series
print(sr)
Output :
Now we will use Series.dt.weekofyear attribute to return the week ordinal of the year in the underlying data of the given Series object.
# return the week ordinal of the year
result = sr.dt.weekofyear

# print the result
print(result)
Output :
As we can see in the output, the Series.dt.weekofyear attribute has successfully accessed and returned the week ordinal of the year in the underlying data of the given series object.

Example #2: Use the Series.dt.weekofyear attribute to return the week ordinal of the year in the underlying data of the given Series object.
# importing pandas as pd
import pandas as pd

# Creating the Series
sr = pd.Series(pd.date_range('2012-12-12 12:12', periods=5, freq='M'))

# Creating the index
idx = ['Day 1', 'Day 2', 'Day 3', 'Day 4', 'Day 5']

# set the index
sr.index = idx

# Print the series
print(sr)
Output :
Now we will use Series.dt.weekofyear attribute to return the week ordinal of the year in the underlying data of the given Series object.
# return the week ordinal of the year
result = sr.dt.weekofyear

# print the result
print(result)
Output:

As we can see in the output, the Series.dt.weekofyear attribute has successfully accessed and returned the week ordinal of the year in the underlying data of the given series object.
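A side note for readers on newer pandas versions: Series.dt.weekofyear was deprecated after this article was written (and removed in pandas 2.0). Series.dt.isocalendar().week returns the same ISO week number:

```python
import pandas as pd

sr = pd.to_datetime(pd.Series(['2019-01-01', '2019-07-18', '2019-12-31']))

# Modern replacement for sr.dt.weekofyear (pandas >= 1.1)
weeks = sr.dt.isocalendar().week
print(weeks.tolist())  # ISO weeks; note 2019-12-31 falls in week 1 of 2020
```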
How to get a key in a JavaScript object by its value ? | 23 Aug, 2019
Method 1: Checking all the object properties to find the value: The values of the object can be found by iterating through its properties. Each of these properties can be checked to see if it matches the value provided. The properties of the object are obtained by using a for loop on the object. Each property is then checked with the object’s hasOwnProperty() method to make sure it is a direct property of the object and not an inherited one, and then compared against the value to be found. If the value matches, the property is returned. This is the key to the value of the object.
Example:
<!DOCTYPE html>
<html>

<head>
    <title>
        How to get a key in a JavaScript object by its value ?
    </title>
</head>

<body>
    <h1 style="color: green">GeeksforGeeks</h1>
    <b>
        How to get a key in a JavaScript object by its value ?
    </b>
    <p>Getting the key of the value '100'.</p>
    <p>See the console for the output</p>
    <script>
        function getKeyByValue(object, value) {
            for (var prop in object) {
                if (object.hasOwnProperty(prop)) {
                    if (object[prop] === value)
                        return prop;
                }
            }
        }

        var exampleObject = {
            key1: 'Geeks',
            key2: 100,
            key3: 'Javascript'
        };

        ans = getKeyByValue(exampleObject, 100);
        console.log(ans);
    </script>
</body>

</html>
Output (in the console): key2
Method 2: Using the find method() to compare the keys: The Object.keys() method is used to return all the keys of the object. On this array of keys, the find() method is used to test if any of these keys match the value provided. The find() method is used to return the value of the first element that satisfies the testing function. If the value matches, then this condition is satisfied and the respective key is returned. This is the key to the value of the object.
Note: This method was added in the ES6 specification and may not be supported on older browser versions.
Syntax:
function getKeyByValue(object, value) {
    return Object.keys(object).find(key => object[key] === value);
}
Example:
<!DOCTYPE html>
<html>

<head>
    <title>
        How to get a key in a JavaScript object by its value ?
    </title>
</head>

<body>
    <h1 style="color: green">GeeksforGeeks</h1>
    <b>How to get a key in a JavaScript object by its value?</b>
    <p>Getting the key of the value 'Geeks'.</p>
    <p>See the console for the output</p>
    <script>
        function getKeyByValue(object, value) {
            return Object.keys(object).find(key =>
                object[key] === value);
        }

        var exampleObject = {
            key1: 'Geeks',
            key2: 100,
            key3: 'Javascript'
        };

        ans = getKeyByValue(exampleObject, 'Geeks');
        console.log(ans);
    </script>
</body>

</html>
Output (in the console): key1
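For comparison, the same first-match lookup can be sketched in Python, where dict.items() plays the role of Object.keys(); this is a cross-language parallel for illustration, not part of the JavaScript solution:

```python
# First key whose value matches, or None when the value is absent;
# next() with a default mirrors Array.prototype.find() returning undefined
example = {'key1': 'Geeks', 'key2': 100, 'key3': 'Javascript'}

def get_key_by_value(d, value):
    return next((k for k, v in d.items() if v == value), None)

print(get_key_by_value(example, 'Geeks'))  # key1
```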
Python program for DNA transcription problem | 28 Jul, 2020
Let’s discuss the DNA transcription problem in Python. First, let’s understand the basics of DNA and RNA that are going to be used in this problem.
The four nucleotides found in DNA: Adenine (A), Cytosine (C), Guanine (G), and Thymine (T).
The four nucleotides found in RNA: Adenine (A), Cytosine (C), Guanine (G), and Uracil (U).
Given a DNA strand, its transcribed RNA strand is formed by replacing each nucleotide with its complement nucleotide:
G –> C
C –> G
T –> A
A –> U
Example :
Input: GCTAA
Output: CGAUU
Input: GC
Output: CG
Approach:
Take the input as a string and convert it into a list of characters. Then traverse each character in the list: if the character is 'G', 'C', 'T' or 'A', convert it into 'C', 'G', 'A' or 'U' respectively; otherwise print 'Invalid Input'.
Below is the implementation:
Python3
# define a function
# for transcription
def transcript(x):

    # convert string into list
    l = list(x)

    for i in range(len(x)):
        if (l[i] == 'G'):
            l[i] = 'C'
        elif (l[i] == 'C'):
            l[i] = 'G'
        elif (l[i] == 'T'):
            l[i] = 'A'
        elif (l[i] == 'A'):
            l[i] = 'U'
        else:
            print('Invalid Input')

    print("Translated DNA : ", end = "")
    for char in l:
        print(char, end = "")

# Driver code
if __name__ == "__main__":
    x = "GCTAA"

    # function calling
    transcript(x)
Output:
Translated DNA : CGAUU
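If the four-letter mapping above is all that is needed, the same transcription can be sketched without the explicit loop using str.translate; this is an alternative formulation, not the article's approach:

```python
# Build a one-shot translation table G->C, C->G, T->A, A->U
# and apply it to the whole strand at once
def transcript_translate(dna):
    return dna.translate(str.maketrans("GCTA", "CGAU"))

print(transcript_translate("GCTAA"))  # CGAUU
```

Unlike the loop version, this variant silently leaves any non-GCTA character unchanged rather than reporting invalid input.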
Different Ways to Convert Double to Integer in C# | 17 May, 2020
Given a Double real number, the task is to convert it into Integer in C#. There are mainly 3 ways to convert Double to Integer as follows:
Using Type Casting
Using Math.Round()
Using Convert.ToInt32()
Examples:
Input: double = 3452.234
Output: 3452
Input: double = 98.23
Output: 98
1. Using Typecasting: This technique is very simple and user-friendly; note that casting a double to int truncates the fractional part.
Example:
C#
// C# program for type conversion from double to int
using System;
using System.IO;
using System.Text;

namespace GFG {
class Geeks {

    // Main Method
    static void Main(string[] args)
    {
        double a = 3452.345;
        int b = 0;

        // type conversion
        b = (int)a;
        Console.WriteLine(b);
    }
}
}
Output:
3452
2. Using Math.Round(): This method rounds a double to the nearest integral value. The result is still a Double, so an explicit cast is needed if an int is required.
C#
// C# program to demonstrate the
// Math.Round(Double) method
using System;

class Geeks {

    // Main method
    static void Main(string[] args)
    {
        Double dx1 = 3452.645;

        // Rounds 3452.645 to the nearest integer
        Console.WriteLine(Math.Round(dx1));
    }
}
Output:
3453
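A detail worth knowing: for exact midpoints (values ending in .5), Math.Round defaults to round-half-to-even, often called banker's rounding. Python's built-in round() follows the same midpoint rule, which gives a quick way to see the effect; this is a cross-language illustration of the rounding rule, not C# output:

```python
# Round-half-to-even ("banker's rounding"): exact .5 midpoints
# go to the nearest even integer, as in C#'s Math.Round default
print(round(2.5))  # 2
print(round(3.5))  # 4
print(round(4.5))  # 4
```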
3. Using Convert.ToInt32(): This method is used to convert the value of the specified double to the equivalent 32-bit signed integer, rounding to the nearest integer.
C#
// C# program to convert Double to Integer
using System;

public class Demo {
    public static void Main()
    {
        double val = 3452.345;
        int res = Convert.ToInt32(val);
        Console.WriteLine(res);
    }
}
Output:
3452
Flutter – Implementing Calendar | 10 Jun, 2021
Nowadays, we see a calendar in most apps, for example for displaying birth dates or appointments. Displaying the date in the app with the help of a calendar view gives a better user experience. In this article, we are going to see how to implement a calendar in our Flutter app.
Follow the steps to implement Calendar in our Flutter App
Add the given dependency in pubspec.yaml file.
Dart
dependencies:
  table_calendar: ^2.3.3
Now click on pub.get to configure.
First, we have passed MyApp() to runApp() in the main function. Then we have created a StatelessWidget for MyApp, in which we have returned MaterialApp(). In this MaterialApp() we have given the title of our app and declared its theme with primarySwatch as green. Then we have set the first screen of our app with home: Calendar().
Dart
void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    // Material App
    return MaterialApp(
      title: 'To Do List',
      debugShowCheckedModeBanner: false,
      theme: ThemeData(
        // Theme color
        primarySwatch: Colors.green,
      ),
      // Our First Screen
      home: Calendar(),
    );
  }
}
In that State class, we have declared _calendarController. After that, we have declared Scaffold(), in which we have declared an AppBar that consists of the title of the app. In the body section, we have declared TableCalendar() wrapped with the Center widget. This imported library gives us the calendar for the specific year in our app and also displays the year as well as the month. In that TableCalendar() we have set the controller to the _calendarController declared in the State class.
Dart
import 'package:flutter/material.dart';
import 'package:table_calendar/table_calendar.dart';

// StatefulWidget
class Calendar extends StatefulWidget {
  @override
  _CalendarState createState() => _CalendarState();
}

class _CalendarState extends State<Calendar> {
  // created controller
  CalendarController _calendarController = new CalendarController();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        // Title of app
        title: Text("Geeks for Geeks"),
      ),
      body: Center(
        // Declared TableCalendar
        child: TableCalendar(
          calendarController: _calendarController,
        ),
      ),
    );
  }
}
Dart
import 'package:flutter/material.dart';
import 'package:todolistapp/CalendarApp.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'To Do List',
      debugShowCheckedModeBanner: false,
      theme: ThemeData(
        primarySwatch: Colors.green,
        visualDensity: VisualDensity.adaptivePlatformDensity,
      ),
      // First screen of app
      home: Calendar(),
    );
  }
}
Dart
import 'package:flutter/material.dart';
import 'package:table_calendar/table_calendar.dart';

// stateful widget created for calendar class
class Calendar extends StatefulWidget {
  @override
  _CalendarState createState() => _CalendarState();
}

class _CalendarState extends State<Calendar> {
  CalendarController _calendarController = new CalendarController();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        // title of app
        title: Text("Geeks for Geeks"),
      ),
      body: Center(
        // create calendar
        child: TableCalendar(
          calendarController: _calendarController,
        ),
      ),
    );
  }
}
Output:
strings.LastIndex() Function in Golang With Examples | 19 Apr, 2020
The strings.LastIndex() function in Golang returns the starting index of the last instance of a substring in a given string. If the substring is not found, it returns -1; thus, this function returns an integer value. The index is counted taking zero as the starting index of the string.
Syntax:
func LastIndex(str, substring string) int
Here, str is the original string and substring is a string, whose we want to find the last index value.
Example 1:
// Golang program to illustrate the
// strings.LastIndex() Function
package main

import (
    "fmt"
    "strings"
)

func main() {

    // taking a string
    str := "GeeksforGeeks"
    substr := "Geeks"

    fmt.Println(strings.LastIndex(str, substr))
}
Output:
8
The string is “GeeksforGeeks” and the substring is “Geeks”, so the function finds the substring in the original string and returns the starting index of its last instance, which is 8.
Example 2:
// Golang program to illustrate the
// strings.LastIndex() Function
package main

import (
    "fmt"
    "strings"
)

func main() {

    // taking strings
    str := "My favorite sport is football"
    substr1 := "f"
    substr2 := "ll"
    substr3 := "SPORT"

    // using the function
    fmt.Println(strings.LastIndex(str, substr1))
    fmt.Println(strings.LastIndex(str, substr2))
    fmt.Println(strings.LastIndex(str, substr3))
}
Output:
21
27
-1
The string is “My favorite sport is football” and the substrings are “f”, “ll” and “SPORT”, so the output is 21 and 27 for the first two cases respectively. The third substring, “SPORT”, is not present in the string because the function is case-sensitive, so the result is -1.
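For readers coming from Python: strings.LastIndex behaves like str.rfind, which also returns the index of the last occurrence, or -1 when the substring is absent. The lookups above can be cross-checked this way:

```python
# str.rfind mirrors Go's strings.LastIndex: index of the last
# occurrence, or -1 when the substring is not found
s = "My favorite sport is football"
print(s.rfind("f"))      # 21
print(s.rfind("ll"))     # 27
print(s.rfind("SPORT"))  # -1 (search is case-sensitive)
```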
Python | Find overlapping tuples from list | 03 Nov, 2019
Sometimes, while working with tuple data, we can have a problem in which we may need to get the tuples which overlap a certain tuple. This kind of problem can occur in Mathematics domain while working with Geometry. Let’s discuss certain ways in which this problem can be solved.
Method #1: Using loop. In this method, we extract the overlapping pairs using conditional statements and append each suitable match into a result list.
# Python3 code to demonstrate working of
# Find overlapping tuples from list
# using loop

# initialize list
test_list = [(4, 6), (3, 7), (7, 10), (5, 6)]

# initialize test tuple
test_tup = (1, 5)

# printing original list
print("The original list : " + str(test_list))

# Find overlapping tuples from list
# using loop
res = []
for tup in test_list:
    if (tup[1] >= test_tup[0] and tup[0] <= test_tup[1]):
        res.append(tup)

# printing result
print("The tuple elements that overlap the argument tuple is : " + str(res))
The original list : [(4, 6), (3, 7), (7, 10), (5, 6)]The tuple elements that overlap the argument tuple is : [(4, 6), (3, 7), (5, 6)]
Method #2: Using list comprehension. This task can also be achieved using list comprehension functionality. This method is similar to the above one, just packed into a one-liner for use as a shorthand.
# Python3 code to demonstrate working of
# Find overlapping tuples from list
# Using list comprehension

# initialize list
test_list = [(4, 6), (3, 7), (7, 10), (5, 6)]

# initialize test tuple
test_tup = (1, 5)

# printing original list
print("The original list : " + str(test_list))

# Find overlapping tuples from list
# Using list comprehension
res = [(idx[0], idx[1]) for idx in test_list
       if idx[0] >= test_tup[0] and idx[0] <= test_tup[1]
       or idx[1] >= test_tup[0] and idx[1] <= test_tup[1]]

# printing result
print("The tuple elements that overlap the argument tuple is : " + str(res))
The original list : [(4, 6), (3, 7), (7, 10), (5, 6)]The tuple elements that overlap the argument tuple is : [(4, 6), (3, 7), (5, 6)]
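One caveat: the two conditions are not equivalent. The loop in Method #1 uses the standard interval-overlap test, while the comprehension in Method #2 only checks whether an endpoint of each tuple falls inside the test tuple, so a tuple that strictly contains the test tuple (such as (0, 10) against (1, 5)) would be missed by Method #2. A comprehension built on the Method #1 condition handles that case too:

```python
test_list = [(4, 6), (3, 7), (7, 10), (5, 6), (0, 10)]
test_tup = (1, 5)

# standard overlap test: intervals (a, b) and (c, d)
# overlap iff b >= c and a <= d
res = [tup for tup in test_list
       if tup[1] >= test_tup[0] and tup[0] <= test_tup[1]]
print(res)  # [(4, 6), (3, 7), (5, 6), (0, 10)]
```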
What are the VTP Modes? | 24 Nov, 2021
VLAN Trunking Protocol is a Cisco proprietary protocol used for communicating VLAN information by Cisco switches. Through VTP, the user can synchronize VLAN name, VLAN ID, and other VLAN information; with Cisco switches inside the same domain. These VTP domains are a set of trunked switches with a similar VTP domain name, version, password, and some other VTP settings. All Cisco switches inside the same domain share their VLAN data with each other.
There are three VTP versions, i.e., V1, V2, and V3. V1 and V2 versions are alike except that V2 supports token ring VLANs and V3 is quite different as it adds the following features:
V3 supports extended VLANs (1006 to 4094). Whereas V1 and V2 can broadcast only VLANs 1 to 1005.
V3 supports private VLAN.
V3 supports VTP primary server and secondary servers.
It also supports enhanced authentication.
V3 is backward compatibility with V1 and V2.
V3 has the ability to be configured on a per-port basis.
VTP Modes: The user can configure a switch to work in any one of the following VTP modes:
Server –In VTP server mode, the user can make VLANs, modify and delete them, and they can also specify additional configuration constraints such as VTP pruning and VTP version for the whole domain. These servers promote their VLAN configuration to additional switches that exist in the same domain, and they also synchronize their configuration with additional switches based upon the announcements acknowledged over trunk links. The default mode in VTP is server.
Client –VTP clients act in a similar way as the VTP servers, though here the user can’t change, create, or delete VLANs.
Transparent –VTP transparent switches don’t partake in VTP. A VTP transparent switch doesn’t promote its VLAN configuration and it also doesn’t synchronize its VLAN configuration based on acknowledged announcements, however, these switches do transmit VTP announcements that they receive from trunk ports in Version 2 of VTP.
VTP mode Off –In the three described modes, VTP announcements are acknowledged and forwarded as soon as the switch goes in the management domain state. In the off mode, switches act in a similar way as they do in VTP transparent mode however there is one difference that is the VTP announcements are not forwarded.
The user can configure VLANs on Catalyst 1900, 2820, and 4500 series switches when the switch is in transparent mode or VTP server. The user can use the MIB (Management information base), CLI, or console menus to modify a VLAN configuration when the switch is in either transparent mode or server.
A switch configured in VTP server mode promotes VLAN configuration to adjoining switches over its trunks and learns new VLAN configurations from those neighboring switches. The user can also use the server mode to add or delete VLANs and to modify VLAN information by using either the CLI, the VTP MIB, or the console. For example, VTP promotes the new VLAN, whenever the user adds a VLAN and both servers and clients prepare to receive traffic on their trunk ports.
When the switch is in VTP client mode, it forwards announcements and learns new data from announcements; however, the user can’t add, modify, or delete a VLAN over the console, the CLI, or the MIB. The VTP client doesn’t preserve VLAN information in NVM (non-volatile memory); hence, as soon as it starts, it learns the configuration by receiving announcements from the trunk ports.
In VTP transparent mode, the switch doesn’t learn or promote VLAN configurations from the network. Whenever a switch is in transparent mode, the user is allowed to add VLANs, modify, or delete them over the CLI, MIB, or the console.
VTP configuration :For exchanging VTP messages there are some basic conditions that need to be fulfilled.
The VTP domain name should be the same on both switches.
VTP versions should be the same.
VTP domain password should be the same.
The switch should be configured as either a VTP client or a VTP server.
A trunk link should be used between switches.
[Figure: VTP topology]
In the image above three switches are connected through trunk links. On switch1, the VTP domain name will be configured using the “vtp domain” command and VTP password by using the “vtp password” command.
Switch1(config)#vtp domain mlkjr
Changing VTP domain name from NULL to mlkjr
Switch1(config)#vtp password kjtmkcbb
Setting device VLAN database password to kjtmkcbb
Now configuring Switch2 and Switch3 as VTP clients.
Switch2(config)#vtp mode client
Setting device to VTP CLIENT mode.
Switch2(config)#vtp domain mlkjr
Changing VTP domain name from NULL to mlkjr
Switch2(config)#vtp password kjtmkcbb
Setting device VLAN database password kjtmkcbb
Switch3(config)#vtp mode client
Setting device to VTP CLIENT mode.
Switch3(config)#vtp domain mlkjr
Changing VTP domain name from NULL to mlkjr
Switch3(config)#vtp password kjtmkcbb
Setting device VLAN database password kjtmkcbb
Now create a new VLAN on Switch1, the VTP will be sent to Switch2 and Switch3 creating a new VLAN automatically on Switch2 and Switch3.
Switch1(config)#vlan 30
Switch2 and Switch3 will create the VLAN 30 automatically. Now checking if it has been created or not.
Switch2#show vlan
VLAN Name Status Ports
---- -------------------------------- --------- -----------------------------
1 default active Fa0/1, Fa0/2, Fa0/3, Fa0/4
Fa0/5, Fa0/6, Fa0/7, Fa0/8
Fa0/9, Fa0/10, Fa0/11, Fa0/12
Fa0/13, Fa0/14, Fa0/15, Fa0/16
Fa0/17, Fa0/18, Fa0/19, Fa0/20
Fa0/21, Fa0/22, Fa0/23, Fa0/24
2 Accounting active Fa/05
30 VLAN0030 active
1002 fddi-default act/unsup
1003 token-ring-default act/unsup
1004 fddinet-default act/unsup
1005 trnet-default act/unsup
Now checking for Switch3
Switch3#show vlan
VLAN Name Status Ports
---- -------------------------------- --------- -----------------------------
1 default active Fa0/1, Fa0/2, Fa0/3, Fa0/4
Fa0/5, Fa0/6, Fa0/7, Fa0/8
Fa0/9, Fa0/10, Fa0/11, Fa0/12
Fa0/13, Fa0/14, Fa0/15, Fa0/16
Fa0/17, Fa0/18, Fa0/19, Fa0/20
Fa0/21, Fa0/22, Fa0/23, Fa0/24
2 Accounting active Fa/05
30 VLAN0030 active
1002 fddi-default act/unsup
1003 token-ring-default act/unsup
1004 fddinet-default act/unsup
1005 trnet-default act/unsup
Find the median array for Binary tree | 26 May, 2021
Prerequisite: Tree Traversals (Inorder, Preorder and Postorder), Median
Given a Binary tree having integral nodes, the task is to find the median for each position in the preorder, postorder and inorder traversal of the tree.
The median array is given as the array formed with the help of PreOrder, PostOrder, and Inorder traversal of a tree, such that med[i] = median(preorder[i], inorder[i], postorder[i])
Examples:
Input: Tree =
1
/ \
2 3
/ \
4 5
Output: {4, 2, 4, 3, 3}
Explanation:
Preorder traversal = {1 2 4 5 3}
Inorder traversal = {4 2 5 1 3}
Postorder traversal = {4 5 2 3 1}
median[0] = median(1, 4, 4) = 4
median[1] = median(2, 2, 5) = 2
median[2] = median(4, 5, 2) = 4
median[3] = median(5, 1, 3) = 3
median[4] = median(3, 3, 1) = 3
Hence, Median array = {4 2 4 3 3}
Input: Tree =
25
/ \
20 30
/ \ / \
18 22 24 32
Output: 18 20 20 24 30 30 32
Approach:
First, find the preorder, postorder and inorder traversal of the given binary tree and store them each in a vector.
Now, for each position from 0 to N-1, insert the values at that position from each of the three traversal arrays into a temporary vector of three elements.
Finally, sort this temporary vector; the median for that position is its middle (2nd) element. Collecting these medians for all N positions gives the median array.
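The steps above can be sketched compactly in Python; here the tree is encoded as nested (value, left, right) tuples purely for illustration, rather than the Node class used in the full programs:

```python
# Collect the three traversals in one walk, then take the
# median of each "column" across them
def traversals(root):
    pre, ino, post = [], [], []
    def walk(node):
        if node is None:
            return
        pre.append(node[0])   # value first: preorder
        walk(node[1])         # left subtree
        ino.append(node[0])   # value between children: inorder
        walk(node[2])         # right subtree
        post.append(node[0])  # value last: postorder
    walk(root)
    return pre, ino, post

# the example tree: 1 with left child 2 (children 4, 5) and right child 3
tree = (1, (2, (4, None, None), (5, None, None)), (3, None, None))
pre, ino, post = traversals(tree)

# sorted triple -> middle element is the median of the three values
median = [sorted(t)[1] for t in zip(pre, ino, post)]
print(median)  # [4, 2, 4, 3, 3]
```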
Below is the implementation of the above approach:
// C++ program to obtain the median
// array for the preorder, postorder
// and inorder traversal of a binary tree

#include <bits/stdc++.h>
using namespace std;

// A binary tree node has data,
// a pointer to the left child
// and a pointer to the right child
struct Node {
    int data;
    struct Node *left, *right;

    Node(int data)
    {
        this->data = data;
        left = right = NULL;
    }
};

// Postorder traversal
void Postorder(struct Node* node, vector<int>& postorder)
{
    if (node == NULL)
        return;

    // First recur on left subtree
    Postorder(node->left, postorder);

    // then recur on right subtree
    Postorder(node->right, postorder);

    // now deal with the node
    postorder.push_back(node->data);
}

// Inorder traversal
void Inorder(struct Node* node, vector<int>& inorder)
{
    if (node == NULL)
        return;

    // First recur on left child
    Inorder(node->left, inorder);

    // then store the data of node
    inorder.push_back(node->data);

    // now recur on right child
    Inorder(node->right, inorder);
}

// Preorder traversal
void Preorder(struct Node* node, vector<int>& preorder)
{
    if (node == NULL)
        return;

    // First store data of node
    preorder.push_back(node->data);

    // then recur on left subtree
    Preorder(node->left, preorder);

    // now recur on right subtree
    Preorder(node->right, preorder);
}

// Function to print any array
void PrintArray(vector<int> median)
{
    for (int i = 0; i < median.size(); i++)
        cout << median[i] << " ";
}

// Function to create and print
// the Median array
void MedianArray(struct Node* node)
{
    // Vector to store
    // the median values
    vector<int> median;

    if (node == NULL)
        return;

    vector<int> preorder, postorder, inorder;

    // Traverse the tree
    Postorder(node, postorder);
    Inorder(node, inorder);
    Preorder(node, preorder);

    int n = preorder.size();

    for (int i = 0; i < n; i++) {

        // Temporary vector to sort
        // the three values
        vector<int> temp;

        // Insert the values at ith index
        // for each traversal into temp
        temp.push_back(postorder[i]);
        temp.push_back(inorder[i]);
        temp.push_back(preorder[i]);

        // Sort the temp vector to
        // find the median
        sort(temp.begin(), temp.end());

        // Insert the middle value in
        // temp into the median vector
        median.push_back(temp[1]);
    }

    PrintArray(median);
}

// Driver Code
int main()
{
    struct Node* root = new Node(1);
    root->left = new Node(2);
    root->right = new Node(3);
    root->left->left = new Node(4);
    root->left->right = new Node(5);

    MedianArray(root);

    return 0;
}
// Java program to obtain the median
// array for the preorder, postorder
// and inorder traversal of a binary tree
import java.io.*;
import java.util.*;

// A binary tree node has data,
// a pointer to the left child
// and a pointer to the right child
class Node {
    int data;
    Node left, right;

    Node(int item)
    {
        data = item;
        left = right = null;
    }
}

class Tree {
    public static Vector<Integer> postorder = new Vector<Integer>();
    public static Vector<Integer> inorder = new Vector<Integer>();
    public static Vector<Integer> preorder = new Vector<Integer>();
    public static Node root;

    // Postorder traversal
    public static void Postorder(Node node)
    {
        if (node == null)
            return;

        // First recur on left subtree
        Postorder(node.left);

        // then recur on right subtree
        Postorder(node.right);

        // now deal with the node
        postorder.add(node.data);
    }

    // Inorder traversal
    public static void Inorder(Node node)
    {
        if (node == null)
            return;

        // First recur on left child
        Inorder(node.left);

        // then store the data of node
        inorder.add(node.data);

        // now recur on right child
        Inorder(node.right);
    }

    // Preorder traversal
    public static void Preorder(Node node)
    {
        if (node == null)
            return;

        // First store data of node
        preorder.add(node.data);

        // then recur on left subtree
        Preorder(node.left);

        // now recur on right subtree
        Preorder(node.right);
    }

    // Function to print any array
    public static void PrintArray(Vector<Integer> median)
    {
        for (int i = 0; i < median.size(); i++) {
            System.out.print(median.get(i) + " ");
        }
    }

    // Function to create and print
    // the Median array
    public static void MedianArray(Node node)
    {
        // Vector to store
        // the median values
        Vector<Integer> median = new Vector<Integer>();

        if (node == null)
            return;

        // Traverse the tree
        Postorder(node);
        Inorder(node);
        Preorder(node);

        int n = preorder.size();

        for (int i = 0; i < n; i++) {

            // Temporary vector to sort
            // the three values
            Vector<Integer> temp = new Vector<Integer>();

            // Insert the values at ith index
            // for each traversal into temp
            temp.add(postorder.get(i));
            temp.add(inorder.get(i));
            temp.add(preorder.get(i));

            // Sort the temp vector to
            // find the median
            Collections.sort(temp);

            // Insert the middle value in
            // temp into the median vector
            median.add(temp.get(1));
        }

        PrintArray(median);
    }

    // Driver Code
    public static void main(String[] args)
    {
        Tree.root = new Node(1);
        Tree.root.left = new Node(2);
        Tree.root.right = new Node(3);
        Tree.root.left.left = new Node(4);
        Tree.root.left.right = new Node(5);

        MedianArray(root);
    }
}

// This code is contributed by avanitrachhadiya2155
# Python3 program to obtain the median
# array for the preorder, postorder
# and inorder traversal of a binary tree

# A binary tree node has data,
# a pointer to the left child
# and a pointer to the right child
class Node:
    def __init__(self, x):
        self.data = x
        self.left = None
        self.right = None

# Postorder traversal
def Postorder(node):
    global postorder
    if (node == None):
        return

    # First recur on left subtree
    Postorder(node.left)

    # then recur on right subtree
    Postorder(node.right)

    # now deal with the node
    postorder.append(node.data)

# Inorder traversal
def Inorder(node):
    global inorder
    if (node == None):
        return

    # First recur on left child
    Inorder(node.left)

    # then store the data of node
    inorder.append(node.data)

    # now recur on right child
    Inorder(node.right)

# Preorder traversal
def Preorder(node):
    global preorder
    if (node == None):
        return

    # First store data of node
    preorder.append(node.data)

    # then recur on left subtree
    Preorder(node.left)

    # now recur on right subtree
    Preorder(node.right)

# Function to print any array
def PrintArray(median):
    for i in range(len(median)):
        print(median[i], end = " ")

# Function to create and print
# the Median array
def MedianArray(node):
    global inorder, postorder, preorder

    # List to store
    # the median values
    median = []

    if (node == None):
        return

    # Traverse the tree
    Postorder(node)
    Inorder(node)
    Preorder(node)

    n = len(preorder)

    for i in range(n):

        # Temporary list to sort
        # the three values
        temp = []

        # Insert the values at ith index
        # for each traversal into temp
        temp.append(postorder[i])
        temp.append(inorder[i])
        temp.append(preorder[i])

        # Sort the temp list to
        # find the median
        temp = sorted(temp)

        # Insert the middle value in
        # temp into the median list
        median.append(temp[1])

    PrintArray(median)

# Driver Code
if __name__ == '__main__':
    preorder, inorder, postorder = [], [], []

    root = Node(1)
    root.left = Node(2)
    root.right = Node(3)
    root.left.left = Node(4)
    root.left.right = Node(5)

    MedianArray(root)

# This code is contributed by mohit kumar 29
// C# program to Obtain the median
// array for the preorder, postorder
// and inorder traversal of a binary tree
using System;
using System.Collections.Generic;
using System.Numerics;

// A binary tree node has data,
// a pointer to the left child
// and a pointer to the right child
public class Node
{
    public int data;
    public Node left, right;

    public Node(int item)
    {
        data = item;
        left = right = null;
    }
}

public class Tree
{
    static List<int> postorder = new List<int>();
    static List<int> inorder = new List<int>();
    static List<int> preorder = new List<int>();
    static Node root;

    // Postorder traversal
    public static void Postorder(Node node)
    {
        if (node == null)
        {
            return;
        }

        // First recur on left subtree
        Postorder(node.left);

        // then recur on right subtree
        Postorder(node.right);

        // now deal with the node
        postorder.Add(node.data);
    }

    // Inorder traversal
    public static void Inorder(Node node)
    {
        if (node == null)
        {
            return;
        }

        // First recur on left child
        Inorder(node.left);

        // then print the data of node
        inorder.Add(node.data);

        // now recur on right child
        Inorder(node.right);
    }

    // Preorder traversal
    public static void Preorder(Node node)
    {
        if (node == null)
        {
            return;
        }

        // First print data of node
        preorder.Add(node.data);

        // then recur on left subtree
        Preorder(node.left);

        // now recur on right subtree
        Preorder(node.right);
    }

    // Function to print the any array
    public static void PrintArray(List<int> median)
    {
        for (int i = 0; i < median.Count; i++)
        {
            Console.Write(median[i] + " ");
        }
    }

    // Function to create and print
    // the Median array
    public static void MedianArray(Node node)
    {
        // Vector to store
        // the median values
        List<int> median = new List<int>();
        if (node == null)
        {
            return;
        }

        // Traverse the tree
        Postorder(node);
        Inorder(node);
        Preorder(node);

        int n = preorder.Count;
        for (int i = 0; i < n; i++)
        {
            // Temporary vector to sort
            // the three values
            List<int> temp = new List<int>();

            // Insert the values at ith index
            // for each traversal into temp
            temp.Add(postorder[i]);
            temp.Add(inorder[i]);
            temp.Add(preorder[i]);

            // Sort the temp vector to
            // find the median
            temp.Sort();

            // Insert the middle value in
            // temp into the median vector
            median.Add(temp[1]);
        }
        PrintArray(median);
    }

    // Driver code
    static public void Main()
    {
        Tree.root = new Node(1);
        Tree.root.left = new Node(2);
        Tree.root.right = new Node(3);
        Tree.root.left.left = new Node(4);
        Tree.root.left.right = new Node(5);
        MedianArray(root);
    }
}

// This code is contributed by rag2127
<script>

// JavaScript program to Obtain the median
// array for the preorder, postorder
// and inorder traversal of a binary tree

// A binary tree node has data,
// a pointer to the left child
// and a pointer to the right child
class Node
{
    constructor(item)
    {
        this.data = item;
        this.left = this.right = null;
    }
}

let postorder = [];
let inorder = [];
let preorder = [];

// Postorder traversal
function Postorder(node)
{
    if (node == null)
    {
        return;
    }

    // First recur on left subtree
    Postorder(node.left);

    // then recur on right subtree
    Postorder(node.right);

    // now deal with the node
    postorder.push(node.data);
}

// Inorder traversal
function Inorder(node)
{
    if (node == null)
    {
        return;
    }

    // First recur on left child
    Inorder(node.left);

    // then print the data of node
    inorder.push(node.data);

    // now recur on right child
    Inorder(node.right);
}

// Preorder traversal
function Preorder(node)
{
    if (node == null)
    {
        return;
    }

    // First print data of node
    preorder.push(node.data);

    // then recur on left subtree
    Preorder(node.left);

    // now recur on right subtree
    Preorder(node.right);
}

// Function to print the any array
function PrintArray(median)
{
    for (let i = 0; i < median.length; i++)
    {
        document.write(median[i] + " ");
    }
}

// Function to create and print
// the Median array
function MedianArray(node)
{
    // Vector to store
    // the median values
    let median = [];
    if (node == null)
    {
        return;
    }

    // Traverse the tree
    Postorder(node);
    Inorder(node);
    Preorder(node);

    let n = preorder.length;
    for (let i = 0; i < n; i++)
    {
        // Temporary vector to sort
        // the three values
        let temp = [];

        // Insert the values at ith index
        // for each traversal into temp
        temp.push(postorder[i]);
        temp.push(inorder[i]);
        temp.push(preorder[i]);

        // Sort the temp vector to
        // find the median
        temp.sort(function(a, b){ return a - b; });

        // Insert the middle value in
        // temp into the median vector
        median.push(temp[1]);
    }
    PrintArray(median);
}

// Driver Code
let root = new Node(1);
root.left = new Node(2);
root.right = new Node(3);
root.left.left = new Node(4);
root.left.right = new Node(5);
MedianArray(root);

// This code is contributed by patel2127

</script>
4 2 4 3 3
Time Complexity: O(N)
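As a quick sanity check of the printed result, the three traversals of the sample tree (root 1; children 2 and 3; nodes 4 and 5 under 2) can be written out directly and median-merged position by position:

```python
# Traversals of the sample tree, listed explicitly
preorder  = [1, 2, 4, 5, 3]
inorder   = [4, 2, 5, 1, 3]
postorder = [4, 5, 2, 3, 1]

# Median of the three values at each index
median = [sorted(t)[1] for t in zip(postorder, inorder, preorder)]
print(*median)
```

This reproduces the output 4 2 4 3 3 shown above.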
mohit kumar 29
avanitrachhadiya2155
rag2127
patel2127
median-finding
Tree Traversals
Arrays
Recursion
Tree
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Convert dataframe column to vector in R | 21 Apr, 2021
In this article, we will discuss how to convert a DataFrame column to vector in R Programming Language. To extract a single vector from a data frame in R Programming language, as.vector() function can be used.
Syntax: as.vector( data_frame$column_name )
Here,
data_frame is the name of the data frame
column_name is the column to be extracted
Given below are some implementations for this.
Example 1:
R
# creating dataframe
std.data <- data.frame(std_id = c(1:5),
                       std_name = c("Ram", "Shayam", "Mohan",
                                    "Kailash", "Aditya"),
                       marks = c(95, 96, 95, 85, 80))

# extracting vector from
# dataframe column std_name
name.vec <- as.vector(std.data$std_name)
print(name.vec)
Output:
[1] “Ram” “Shayam” “Mohan” “Kailash” “Aditya”
We can now examine whether the returned column is a vector or not, by passing it to the function is.vector() which returns a Boolean value i.e. either true or false.
Example 2:
We will extract the Species column from the well-known data frame Iris using as.vector( ) function and print it. We will also check whether the returned column is a vector or not.
R
df <- iris

# print the data frame
head(df)

# extracting vector from
# dataframe column Species
name.vec <- as.vector(df$Species)
print(name.vec)

# returns Boolean value
is.vector(name.vec)
Output:
Picked
R DataFrame-Programs
R Vector-Programs
R-DataFrame
R-Vectors
R Language
R Programs
Minimum boxes required to carry all gifts | 10 May, 2021
Given an array containing weights of gifts, and an integer K representing the maximum weight a box can contain (all boxes are uniform). Each box carries at most 2 gifts at the same time, provided the sum of the weights of those gifts is at most the limit of the box. The task is to find the minimum number of boxes required to carry all gifts. Note: It is guaranteed each gift can be carried by a box.
Examples:
Input: A = [3, 2, 2, 1], K = 3 Output: 3 Explanation: 3 boxes with weights (1, 2), (2) and (3)
Input: A = [3, 5, 3, 4], K = 5 Output: 4 Explanation: 4 boxes with weights (3), (3), (4), (5)
Approach: If the heaviest gift can share a box with the lightest gift, then pair them. Otherwise, the heaviest gift can’t pair with anyone, so it gets an individual box.
The reason this works is that if the lightest gift can pair with anyone, it might as well pair with the heaviest gift. Let A[i] be the currently lightest gift, and A[j] the currently heaviest.
Then, if the heaviest gift can share a box with the lightest gift (if A[j] + A[i] <= K), pair them; otherwise, the heaviest gift gets an individual box.
Below is the implementation of above approach:
C++
Java
Python3
C#
PHP
Javascript
// CPP implementation of above approach
#include <bits/stdc++.h>
using namespace std;

// Function to return number of boxes
int numBoxes(int A[], int n, int K)
{
    // Sort the boxes in ascending order
    sort(A, A + n);

    // Try to fit smallest box with
    // current heaviest box (from right side)
    int i = 0, j = n - 1;
    int ans = 0;
    while (i <= j) {
        ans++;
        if (A[i] + A[j] <= K)
            i++;
        j--;
    }
    return ans;
}

// Driver program
int main()
{
    int A[] = { 3, 2, 2, 1 }, K = 3;
    int n = sizeof(A) / sizeof(A[0]);
    cout << numBoxes(A, n, K);
    return 0;
}

// This code is written by Sanjit_Prasad
// Java implementation of above approachimport java.util.*; class solution{ // Function to return number of boxesstatic int numBoxes(int A[], int n, int K){ // Sort the boxes in ascending order Arrays.sort(A); // Try to fit smallest box with // current heaviest box (from right // side) int i = 0, j = n - 1; int ans = 0; while (i <= j) { ans++; if (A[i] + A[j] <= K) i++; j--; } return ans;} // Driver programpublic static void main(String args[]){ int A[] = { 3, 2, 2, 1 }, K = 3; int n = A.length; System.out.println(numBoxes(A, n, K)); }} //THis code is contributed by// Surendra_Gangwar
# Python3 implementation of# above approach # Function to return number of boxesdef numBoxex(A,n,K): # Sort the boxes in ascending order A.sort() # Try to fit smallest box with current # heaviest box (from right side) i =0 j = n-1 ans=0 while i<=j: ans +=1 if A[i]+A[j] <=K: i+=1 j-=1 return ans # Driver codeif __name__=='__main__': A = [3, 2, 2, 1] K= 3 n = len(A) print(numBoxex(A,n,K)) # This code is contributed by# Shrikant13
// C# implementation of above approachusing System; class GFG{ // Function to return number of boxesstatic int numBoxes(int []A, int n, int K){ // Sort the boxes in ascending order Array.Sort(A); // Try to fit smallest box with // current heaviest box (from right // side) int i = 0, j = (n - 1); int ans = 0; while (i <= j) { ans++; if (A[i] + A[j] <= K) i++; j--; } return ans;} // Driver Codestatic public void Main (){ int []A = { 3, 2, 2, 1 }; int K = 3; int n = A.Length; Console.WriteLine(numBoxes(A, n, K));}} // This code is contributed by ajit
<?php//PHP implementation of above approach// Function to return number of boxesfunction numBoxes($A, $n, $K){ // Sort the boxes in ascending order sort($A); // Try to fit smallest box with // current heaviest box (from right // side) $i = 0; $j = $n - 1; $ans = 0; while ($i <= $j) { $ans++; if ($A[$i] + $A[$j] <= $K) $i++; $j--; } return $ans;} // Driver program $A = array (3, 2, 2, 1 ); $K = 3; $n = sizeof($A) / sizeof($A[0]); echo numBoxes($A, $n, $K); // This code is written by ajit?>
<script> // Javascript implementation of above approach // Function to return number of boxesfunction numBoxes(A, n, K){ // Sort the boxes in ascending order A.sort(function(a, b){return a - b}); // Try to fit smallest box with // current heaviest box (from right // side) let i = 0, j = (n - 1); let ans = 0; while (i <= j) { ans++; if (A[i] + A[j] <= K) i++; j--; } return ans;} // Driver codelet A = [ 3, 2, 2, 1 ];let K = 3;let n = A.length; document.write(numBoxes(A, n, K)); // This code is contributed by suresh07 </script>
Output:
3
Time Complexity: O(N*log(N)), where N is the length of the array.
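The two examples above can be verified with a condensed Python restatement of the same two-pointer greedy (for illustration; the full solutions are listed above):

```python
def num_boxes(weights, k):
    weights = sorted(weights)
    i, j, boxes = 0, len(weights) - 1, 0
    while i <= j:
        boxes += 1
        if weights[i] + weights[j] <= k:
            i += 1          # lightest shares the heaviest's box
        j -= 1              # heaviest is boxed either way
    return boxes

print(num_boxes([3, 2, 2, 1], 3))  # 3
print(num_boxes([3, 5, 3, 4], 5))  # 4
```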
shrikanth13
SURENDRA_GANGWAR
jit_t
suresh07
Arrays
Greedy
Sorting
Python | os.statvfs() method | 21 Jun, 2022
OS module in Python provides functions for interacting with the operating system. OS comes under Python’s standard utility modules. This module provides a portable way of using operating system dependent functionality. The os.statvfs() method in Python is used to get information about the mounted file system containing the given path. To get file system information, the method performs the statvfs() system call on the given path. Note: The os.statvfs() method is available on Unix platforms only.
Syntax: os.statvfs(path)

Parameter:
path: A path-like object for which file system information is required.

Return Type: This method returns an object of class ‘os.statvfs_result’ whose attributes represent the information about the file system containing the given path. The returned os.statvfs_result object has the following attributes:
f_bsize: It represents the file system block size
f_frsize: It represents the fragment size
f_blocks It represents the size of fs in f_frsize units
f_bfree: It represents the number of free blocks
f_bavail: It represents the number of free blocks for unprivileged users
f_files: It represents the number of inodes
f_ffree: It represents the number of free inodes
f_favail: It represents the number of free inodes for unprivileged users
f_fsid: It represents the file system ID
f_flag: It represents the mount flags
f_namemax: It represents the maximum filename length
Code: Use of os.statvfs() method to get information about the file system containing the given path.
Python3
# Python program to explain os.statvfs() method

# importing os module
import os

# File path
path = "/home/ihritik/Desktop/file.txt"

# Get the information about the
# filesystem containing the
# given path using os.statvfs() method
info = os.statvfs(path)

# Print the information
# about the file system
print(info)

# Print the file system block size
print("File system block size:", info.f_bsize)

# Print the number of free blocks
# in the file system
print("Number of free blocks:", info.f_bfree)
os.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=59798433, f_bfree=56521834,
f_bavail=53466807, f_files=15261696, f_ffree=14933520, f_favail=14933520, f_flag=4096,
f_namemax=255)
File system block size: 4096
Number of free blocks: 56517297
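A common use of these attributes is computing disk space in bytes: multiply the block counts by the fragment size. A small sketch (the path "/" is an assumption here; any path on a mounted Unix filesystem works):

```python
import os

# Query the filesystem containing the root path
# (assumed path; replace with any mounted path)
st = os.statvfs("/")

# Free bytes available to unprivileged users:
# free blocks times the fragment size
free_bytes = st.f_bavail * st.f_frsize
total_bytes = st.f_blocks * st.f_frsize

print("Free:", free_bytes, "of", total_bytes, "bytes")
```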
surinderdawra388
python-os-module
Python
How to convert int to string in Python?

Type conversion is at times needed when the user wants to convert one data type into another data type according to requirement.
Python has in-built function str() to convert an integer to a string. We will be discussing various other methods in addition to this to convert int into string in Python.
This is the most commonly used method to convert an int into a string in Python. The str() function takes an integer variable as a parameter and converts it into a string.
str(integer variable)
num=2
print("Datatype before conversion",type(num))
num=str(num)
print(num)
print("Datatype after conversion",type(num))
Datatype before conversion <class 'int'>
2
Datatype after conversion <class 'str'>
The type() function gives the datatype of the variable which is passed as a parameter.
In the above code, before conversion, the datatype of num is int, and after the conversion, the datatype of num is str (i.e., string in Python).
f ’{integer variable}’
num=2
print("Datatype before conversion",type(num))
num=f'{num}'
print(num)
print("Datatype after conversion",type(num))
Datatype before conversion <class 'int'>
2
Datatype after conversion <class 'str'>
“%s” % integer variable
num=2
print("Datatype before conversion",type(num))
num="%s" %num
print(num)
print("Datatype after conversion",type(num))
Datatype before conversion <class 'int'>
2
Datatype after conversion <class 'str'>
‘{}’.format(integer variable)
num=2
print("Datatype before conversion",type(num))
num='{}'.format(num)
print(num)
print("Datatype after conversion",type(num))
Datatype before conversion <class 'int'>
2
Datatype after conversion <class 'str'>
These were some of the methods to convert an int into a string in Python. We may require converting an int into a string in certain scenarios, such as appending a value held in an int to some string variable. One common scenario is reversing an integer: we may convert it into a string and then reverse it, which is easier than implementing mathematical logic to reverse an integer.
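The reverse-an-integer scenario mentioned above can be sketched with str() and slicing (a minimal illustration; the function name is our own):

```python
# Reversing an integer by round-tripping through str()
def reverse_int(n):
    # Preserve the sign, reverse the digits of the absolute value
    sign = -1 if n < 0 else 1
    return sign * int(str(abs(n))[::-1])

print(reverse_int(1234))   # digits reversed
print(reverse_int(-560))   # sign preserved, leading zero dropped
```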
Find if there is a rectangle in binary matrix with corners as 1 | 15 Jun, 2022
There is a given binary matrix; we need to find if there exists any rectangle or square in the given matrix whose all four corners are equal to 1.
Examples:
Input :
mat[][] = { 1 0 0 1 0
0 0 1 0 1
0 0 0 1 0
1 0 1 0 1}
Output : Yes
as there exists-
1 0 1
0 1 0
1 0 1
Brute Force Approach-
Start scanning the matrix; whenever a 1 is found at any index, try all combinations of a second row and a second column index with which that cell can form a rectangle. Algorithm-
for i = 1 to rows
for j = 1 to columns
if matrix[i][j] == 1
for k=i+1 to rows
for l=j+1 to columns
if (matrix[i][l]==1 &&
matrix[k][j]==1 &&
m[k][l]==1)
return true
return false
C++
Java
Python3
C#
Javascript
// A brute force approach based CPP program to
// find if there is a rectangle with 1 as corners.
#include <bits/stdc++.h>
using namespace std;

// Returns true if there is a rectangle with
// 1 as corners.
bool isRectangle(const vector<vector<int> >& m)
{
    // finding row and column size
    int rows = m.size();
    if (rows == 0)
        return false;
    int columns = m[0].size();

    // scanning the matrix
    for (int y1 = 0; y1 < rows; y1++)
        for (int x1 = 0; x1 < columns; x1++)

            // if any index found 1 then try
            // for all rectangles
            if (m[y1][x1] == 1)
                for (int y2 = y1 + 1; y2 < rows; y2++)
                    for (int x2 = x1 + 1; x2 < columns; x2++)
                        if (m[y1][x2] == 1 && m[y2][x1] == 1
                            && m[y2][x2] == 1)
                            return true;
    return false;
}

// Driver code
int main()
{
    vector<vector<int> > mat = { { 1, 0, 0, 1, 0 },
                                 { 0, 0, 1, 0, 1 },
                                 { 0, 0, 0, 1, 0 },
                                 { 1, 0, 1, 0, 1 } };
    if (isRectangle(mat))
        cout << "Yes";
    else
        cout << "No";
}
// A brute force approach based CPP program to// find if there is a rectangle with 1 as corners.public class FindRectangle { // Returns true if there is a rectangle with // 1 as corners. static boolean isRectangle(int m[][]) { // finding row and column size int rows = m.length; if (rows == 0) return false; int columns = m[0].length; // scanning the matrix for (int y1 = 0; y1 < rows; y1++) for (int x1 = 0; x1 < columns; x1++) // if any index found 1 then try // for all rectangles if (m[y1][x1] == 1) for (int y2 = y1 + 1; y2 < rows; y2++) for (int x2 = x1 + 1; x2 < columns; x2++) if (m[y1][x2] == 1 && m[y2][x1] == 1 && m[y2][x2] == 1) return true; return false; } public static void main(String args[]) { int mat[][] = { { 1, 0, 0, 1, 0 }, { 0, 0, 1, 0, 1 }, { 0, 0, 0, 1, 0 }, { 1, 0, 1, 0, 1 } }; if (isRectangle(mat)) System.out.print("Yes"); else System.out.print("No"); }}// This code is contributed by Gaurav Tiwari
# A brute force approach based Python3 program to# find if there is a rectangle with 1 as corners. # Returns true if there is a rectangle# with 1 as corners.def isRectangle(m): # finding row and column size rows = len(m) if (rows == 0): return False columns = len(m[0]) # scanning the matrix for y1 in range(rows): for x1 in range(columns): # if any index found 1 then # try for all rectangles if (m[y1][x1] == 1): for y2 in range(y1 + 1, rows): for x2 in range(x1 + 1, columns): if (m[y1][x2] == 1 and m[y2][x1] == 1 and m[y2][x2] == 1): return True return False # Driver codemat = [[1, 0, 0, 1, 0], [0, 0, 1, 0, 1], [0, 0, 0, 1, 0], [1, 0, 1, 0, 1]] if (isRectangle(mat)): print("Yes")else: print("No") # This code is contributed# by mohit kumar 29
// A brute force approach based C# program to// find if there is a rectangle with 1 as corners.using System; public class FindRectangle { // Returns true if there is a rectangle with // 1 as corners. static Boolean isRectangle(int[, ] m) { // finding row and column size int rows = m.GetLength(0); if (rows == 0) return false; int columns = m.GetLength(1); // scanning the matrix for (int y1 = 0; y1 < rows; y1++) for (int x1 = 0; x1 < columns; x1++) // if any index found 1 then try // for all rectangles if (m[y1, x1] == 1) for (int y2 = y1 + 1; y2 < rows; y2++) for (int x2 = x1 + 1; x2 < columns; x2++) if (m[y1, x2] == 1 && m[y2, x1] == 1 && m[y2, x2] == 1) return true; return false; } // Driver code public static void Main(String[] args) { int[, ] mat = { { 1, 0, 0, 1, 0 }, { 0, 0, 1, 0, 1 }, { 0, 0, 0, 1, 0 }, { 1, 0, 1, 0, 1 } }; if (isRectangle(mat)) Console.Write("Yes"); else Console.Write("No"); }} // This code contributed by Rajput-Ji
<script> // A brute force approach based Javascript program to// find if there is a rectangle with 1 as corners. // Returns true if there is a rectangle with // 1 as corners. function isRectangle(m) { // finding row and column size let rows = m.length; if (rows == 0) return false; let columns = m[0].length; // scanning the matrix for (let y1 = 0; y1 < rows; y1++) for (let x1 = 0; x1 < columns; x1++) // if any index found 1 then try // for all rectangles if (m[y1][x1] == 1) for (let y2 = y1 + 1; y2 < rows; y2++) for (let x2 = x1 + 1; x2 < columns; x2++) if (m[y1][x2] == 1 && m[y2][x1] == 1 && m[y2][x2] == 1) return true; return false; } let mat = [[1, 0, 0, 1, 0], [0, 0, 1, 0, 1], [0, 0, 0, 1, 0], [1, 0, 1, 0, 1]]; if (isRectangle(mat)) document.write("Yes"); else document.write("No"); // This code is contributed by patel2127 </script>
Yes
Time Complexity: O(m2 * n2)
Auxiliary Space: O(1)
Efficient Approach
Scan from top to bottom, line by line
For each line, remember each combination of 2 1’s and push that into a hash-set
If we ever find that combination again in a later line, we get our rectangle
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// An efficient approach based CPP program to// find if there is a rectangle with 1 as// corners.#include <bits/stdc++.h>using namespace std; // Returns true if there is a rectangle with// 1 as corners.bool isRectangle(const vector<vector<int> >& matrix){ // finding row and column size int rows = matrix.size(); if (rows == 0) return false; int columns = matrix[0].size(); // map for storing the index of combination of 2 1's unordered_map<int, unordered_set<int> > table; // scanning from top to bottom line by line for (int i = 0; i < rows; ++i) { for (int j = 0; j < columns - 1; ++j) { for (int k = j + 1; k < columns; ++k) { // if found two 1's in a column if (matrix[i][j] == 1 && matrix[i][k] == 1) { // check if there exists 1's in same // row previously then return true // we don't need to check (j, k) pair // and again (k, j) pair because we always // store pair in ascending order and similarly // check in ascending order, i.e. j always less // than k. if (table.find(j) != table.end() && table[j].find(k) != table[j].end()) return true; // store the indexes in hashset table[j].insert(k); } } } } return false;} // Driver codeint main(){ vector<vector<int> > mat = { { 1, 0, 0, 1, 0 }, { 0, 1, 1, 1, 1 }, { 0, 0, 0, 1, 0 }, { 1, 1, 1, 1, 0 } }; if (isRectangle(mat)) cout << "Yes"; else cout << "No";}// This code is improved by Gautam Agrawal
// An efficient approach based Java program to// find if there is a rectangle with 1 as// corners.import java.util.HashMap;import java.util.HashSet;public class FindRectangle { // Returns true if there is a rectangle with // 1 as corners. static boolean isRectangle(int matrix[][]) { // finding row and column size int rows = matrix.length; if (rows == 0) return false; int columns = matrix[0].length; // map for storing the index of combination of 2 1's HashMap<Integer, HashSet<Integer> > table = new HashMap<>(); // scanning from top to bottom line by line for (int i = 0; i < rows; i++) { for (int j = 0; j < columns - 1; j++) { for (int k = j + 1; k < columns; k++) { // if found two 1's in a column if (matrix[i][j] == 1 && matrix[i][k] == 1) { // check if there exists 1's in same // row previously then return true if (table.containsKey(j) && table.get(j).contains(k)) { return true; } if (table.containsKey(k) && table.get(k).contains(j)) { return true; } // store the indexes in hashset if (!table.containsKey(j)) { HashSet<Integer> x = new HashSet<>(); x.add(k); table.put(j, x); } else { table.get(j).add(k); } if (!table.containsKey(k)) { HashSet<Integer> x = new HashSet<>(); x.add(j); table.put(k, x); } else { table.get(k).add(j); } } } } } return false; } public static void main(String args[]) { int mat[][] = { { 1, 0, 0, 1, 0 }, { 0, 0, 1, 0, 1 }, { 0, 0, 0, 1, 0 }, { 1, 0, 1, 0, 1 } }; if (isRectangle(mat)) System.out.print("Yes"); else System.out.print("No"); }}// This code is contributed by Gaurav Tiwari
# An efficient approach based Python program# to find if there is a rectangle with 1 as# corners. # Returns true if there is a rectangle# with 1 as corners.def isRectangle(matrix): # finding row and column size rows = len(matrix) if (rows == 0): return False columns = len(matrix[0]) # map for storing the index of # combination of 2 1's table = {} # scanning from top to bottom # line by line for i in range(rows): for j in range(columns - 1): for k in range(j + 1, columns): # if found two 1's in a column if (matrix[i][j] == 1 and matrix[i][k] == 1): # check if there exists 1's in same # row previously then return true if (j in table and k in table[j]): return True if (k in table and j in table[k]): return True # store the indexes in hashset if j not in table: table[j] = set() if k not in table: table[k] = set() table[j].add(k) table[k].add(j) return False # Driver Codeif __name__ == '__main__': mat = [[ 1, 0, 0, 1, 0 ], [ 0, 0, 1, 0, 1 ], [ 0, 0, 0, 1, 0 ], [ 1, 0, 1, 0, 1 ]] if (isRectangle(mat)): print("Yes") else: print("No") # This code is contributed# by SHUBHAMSINGH10
// An efficient approach based C# program to// find if there is a rectangle with 1 as// corners.using System;using System.Collections.Generic; public class FindRectangle { // Returns true if there is a rectangle with // 1 as corners. static bool isRectangle(int[, ] matrix) { // finding row and column size int rows = matrix.GetLength(0); if (rows == 0) return false; int columns = matrix.GetLength(1); // map for storing the index of combination of 2 1's Dictionary<int, HashSet<int> > table = new Dictionary<int, HashSet<int> >(); // scanning from top to bottom line by line for (int i = 0; i < rows; i++) { for (int j = 0; j < columns - 1; j++) { for (int k = j + 1; k < columns; k++) { // if found two 1's in a column if (matrix[i, j] == 1 && matrix[i, k] == 1) { // check if there exists 1's in same // row previously then return true if (table.ContainsKey(j) && table[j].Contains(k)) { return true; } if (table.ContainsKey(k) && table[k].Contains(j)) { return true; } // store the indexes in hashset if (!table.ContainsKey(j)) { HashSet<int> x = new HashSet<int>(); x.Add(k); table.Add(j, x); } else { table[j].Add(k); } if (!table.ContainsKey(k)) { HashSet<int> x = new HashSet<int>(); x.Add(j); table.Add(k, x); } else { table[k].Add(j); } } } } } return false; } public static void Main(String[] args) { int[, ] mat = { { 1, 0, 0, 1, 0 }, { 0, 0, 1, 0, 1 }, { 0, 0, 0, 1, 0 }, { 1, 0, 1, 0, 1 } }; if (isRectangle(mat)) Console.Write("Yes"); else Console.Write("No"); }} // This code is contributed by PrinciRaj1992
<script> // An efficient approach based Javascript program to// find if there is a rectangle with 1 as// corners. // Returns true if there is a rectangle with // 1 as corners. function isRectangle(matrix) { // finding row and column size let rows = matrix.length; if (rows == 0) return false; let columns = matrix[0].length; // map for storing the index of // combination of 2 1's let table = new Map(); // scanning from top to bottom line by line for (let i = 0; i < rows; i++) { for (let j = 0; j < columns - 1; j++) { for (let k = j + 1; k < columns; k++) { // if found two 1's in a column if (matrix[i][j] == 1 && matrix[i][k] == 1) { // check if there // exists 1's in same // row previously then // return true if (table.has(j) && table.get(j).has(k)) { return true; } if (table.has(k) && table.get(k).has(j)) { return true; } // store the indexes in hashset if (!table.has(j)) { let x = new Set(); x.add(k); table.set(j, x); } else { table.get(j).add(k); } if (!table.has(k)) { let x = new Set(); x.add(j); table.set(k, x); } else { table.get(k).add(j); } } } } } return false; } let mat = [[ 1, 0, 0, 1, 0 ], [ 0, 0, 1, 0, 1 ], [ 0, 0, 0, 1, 0 ], [ 1, 0, 1, 0, 1 ]]; if (isRectangle(mat)) document.write("Yes"); else document.write("No"); // This code is contributed by unknown2108 </script>
Yes
Time Complexity: O(n*m2)
Auxiliary Space: O(n*m)
More Efficient Approach
The previous approach scans through every pair of column indexes to find each combination of 2 1’s.
To more efficiently find each combination of 2 1’s, convert each row into a set of column indexes.
Then, select pairs of column indexes from the row set to quickly get each combination of 2 1’s.
If a pair of column indexes appears more than once, then there is a rectangle whose corners are 1’s.
The runtime becomes O(m*n+n*n*log(n*n)). This is because there are m*n cells in the matrix and there are at most O(n^2) combinations of column indexes and we are using a map which will store every entry in log(n) time.
Also, if n > m, then by first transposing the matrix, the runtime becomes O(m*n+m*m*log(m*m)).
Notice that min{m*n+n*n*log(n*n), m*n+m*m*log(m*m)} is O(m*n + p*p*log(p*p)), p=max(n,m).
Below is the implementation of the above approach:
C++
// C++ implementation comes from:// https://github.com/MichaelWehar/FourCornersProblem// Written by Niteesh Kumar and Michael Wehar// References:// [1] F. Mráz, D. Prusa, and M. Wehar.// Two-dimensional Pattern Matching against// Basic Picture Languages. CIAA 2019.// [2] D. Prusa and M. Wehar. Complexity of// Searching for 2 by 2 Submatrices in Boolean// Matrices. DLT 2020. #include <bits/stdc++.h>using namespace std; bool searchForRectangle(int rows, int cols, vector<vector<int>> mat){ // Make sure that matrix is non-trivial if (rows < 2 || cols < 2) { return false; } // Create map int num_of_keys; map<int, vector<int>> adjsList; if (rows >= cols) { // Row-wise num_of_keys = rows; // Convert each row into vector of col indexes for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) { if (mat[i][j]) { adjsList[i].push_back(j); } } } } else { // Col-wise num_of_keys = cols; // Convert each col into vector of row indexes for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) { if (mat[i][j]) { adjsList[j].push_back(i); } } } } // Search for a rectangle whose four corners are 1's map<pair<int, int>, int> pairs; for (int i = 0; i < num_of_keys; i++) { vector<int> values = adjsList[i]; int size = values.size(); for (int j = 0; j < size - 1; j++) { for (int k = j + 1; k < size; k++) { pair<int, int> temp = make_pair(values[j], values[k]); if (pairs.find(temp) != pairs.end()) { return true; } else { pairs[temp] = i; } } } } return false;} // Driver codeint main(){ vector<vector<int> > mat = { { 1, 0, 0, 1, 0 }, { 0, 1, 1, 1, 1 }, { 0, 0, 0, 1, 0 }, { 1, 1, 1, 1, 0 } }; if (searchForRectangle(4, 5, mat)) cout << "Yes"; else cout << "No";}
Yes
Time Complexity: O(m*n + p*p*log(p*p)), p=max(n,m).
Auxiliary Space: O(n*m)
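For readers who prefer Python, the same pair-of-columns idea can be sketched as follows. This is a hedged, simplified version: it omits the transpose optimization and uses a hash set instead of an ordered map, so each lookup is expected O(1).

```python
# For every row, record each pair of column indexes that hold 1s; if any
# pair repeats across two rows, those two rows and two columns form a
# rectangle whose four corners are all 1s.
def has_rectangle(mat):
    seen = set()  # column-index pairs observed so far
    for row in mat:
        cols = [j for j, v in enumerate(row) if v]
        for a in range(len(cols)):
            for b in range(a + 1, len(cols)):
                pair = (cols[a], cols[b])
                if pair in seen:
                    return True
                seen.add(pair)
    return False

print(has_rectangle([[1, 0, 0, 1, 0],
                     [0, 1, 1, 1, 1],
                     [0, 0, 0, 1, 0],
                     [1, 1, 1, 1, 0]]))  # True
```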
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
PHP Program to print all permutations of a given string | 10 Dec, 2021
A permutation also called an “arrangement number” or “order,” is a rearrangement of the elements of an ordered list S into a one-to-one correspondence with S itself. A string of length n has n! permutation.
Source: Mathword(http://mathworld.wolfram.com/Permutation.html)
Below are the permutations of string ABC: ABC, ACB, BAC, BCA, CBA, CAB.
Here is a solution that is used as a basis in backtracking.
PHP
<?php
// PHP program to print all
// permutations of a given string.

/* Permutation function
   @param str string to calculate permutation for
   @param l starting index
   @param r end index */
function permute($str, $l, $r)
{
    if ($l == $r)
        echo $str . "\n";
    else {
        for ($i = $l; $i <= $r; $i++) {
            $str = swap($str, $l, $i);
            permute($str, $l + 1, $r);
            $str = swap($str, $l, $i);
        }
    }
}

/* Swap characters at positions
   @param a string value
   @param i position 1
   @param j position 2
   @return swapped string */
function swap($a, $i, $j)
{
    $charArray = str_split($a);
    $temp = $charArray[$i];
    $charArray[$i] = $charArray[$j];
    $charArray[$j] = $temp;
    return implode($charArray);
}

// Driver Code
$str = "ABC";
$n = strlen($str);
permute($str, 0, $n - 1);

// This code is contributed by mits.
?>
Output:
ABC
ACB
BAC
BCA
CBA
CAB
Algorithm Paradigm: Backtracking
Time Complexity: O(n*n!) Note that there are n! permutations and it requires O(n) time to print a permutation.
Auxiliary Space: O(r – l)
Note: The above solution prints duplicate permutations if there are repeating characters in the input string. See "Print all distinct permutations of a given string with duplicates" for a solution that prints only distinct permutations even when the input contains duplicates, and "Permutations of a given string using STL" for a library-based approach.
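As a cross-language illustration (Python rather than PHP, and using the standard library instead of the backtracking above, so treat it as a sketch), distinct permutations can be obtained by generating all orderings and deduplicating with a set:

```python
# itertools.permutations generates every ordering; collecting the joined
# strings into a set removes duplicates caused by repeated characters,
# such as the two A's in "AAB".
from itertools import permutations

def distinct_permutations(s):
    return sorted({"".join(p) for p in permutations(s)})

print(distinct_permutations("AAB"))  # ['AAB', 'ABA', 'BAA']
```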
How to create sticky footer in ReactJS ? | 25 May, 2021
In this article, we will see how to create a sticky footer in ReactJS. A footer is an important element of a website’s design. A sticky footer sticks to the bottom of the website and signals to the user that they have reached the end of the webpage. For working with react, we have to set up the project first.
Creating React Application:
Step 1: Create a React application using the following command:
npx create-react-app react-footer
Step 2: After creating your project folder i.e. react-footer, move to it using the following command:
cd react-footer
Project Structure: It will look like the following.
Example: In this example, we will design a footer, for that we will need to manipulate the App.js file and App.css as well as the Footer.js file.
Footer.js
import React from 'react';

const Footer = () => (
  <footer className="footer">
    <p>This is react sticky footer!!</p>
  </footer>
);

export default Footer;
App.css
body {
  margin: 0;
  padding: 0;
  height: 1000px;
}

.App {
  color: #228b22;
  text-align: center;
}

.footer {
  background-color: green;
  border-top: 2px solid red;
  position: fixed;
  width: 100%;
  bottom: 0;
  color: white;
  font-size: 25px;
}
App.js
import React from "react";

// Importing the footer component
import Footer from "./Footer";

// Importing the styling of App component
import "./App.css";

const App = () => (
  <div className="App">
    <h3>GeeksforGeeks</h3>
    <h2>Sticky Footer using Reactjs!</h2>
    <Footer />
  </div>
);

export default App;
Step to Run Application: Run the application using the following command from the root directory of the project:
npm start
Output: Now open your browser and go to http://localhost:3000/, you will see the following output:
How to access array Elements in Ruby | 24 Oct, 2019
In this article, we will learn how to access array elements in Ruby.
In Ruby, there are several ways to retrieve the elements from the array. Ruby arrays provide a lot of different methods to access the array element. But the most used way is to use the index of an array.
# Ruby program to demonstrate
# accessing the elements of the array

# creating an array using []
str = ["GFG", "G4G", "Sudo", "Geeks"]

# accessing array elements
# using index
puts str[1]

# using the negative index
puts str[-1]
Output:
G4G
Geeks
Sometimes, we need to access multiple elements from the array. To do this, pass a start index and a length (the number of elements to fetch) into [].
# Ruby program to demonstrate
# accessing multiple elements
# from an array

# creating an array using []
str = ["GFG", "G4G", "Sudo", "Geeks"]

# fetch up to 3 elements starting at index 2
puts str[2, 3]
Output:
Sudo
Geeks
# Ruby program to demonstrate
# accessing an element from a
# multi-dimensional array

# creating a nested array using []
str = [["GFG", "G4G", "Sudo"],
       ["Geeks", "for", "portal"]]

# accessing an item from the multi-array
puts str[1][2]
Output:
portal
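Beyond plain indexing, Ruby arrays expose several other standard accessors. The following sketch shows a few of them (behavior as defined by the built-in Array API):

```ruby
# A few other common Array accessors
str = ["GFG", "G4G", "Sudo", "Geeks"]

puts str.first     # first element: GFG
puts str.last      # last element: Geeks
p str[1..2]        # range index: ["G4G", "Sudo"]
puts str.fetch(3)  # like [], but raises IndexError when out of bounds
```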
turtle.numinput() function in Python | 15 Sep, 2021
The turtle module provides turtle graphics primitives, in both object-oriented and procedure-oriented ways. Because it uses Tkinter for the underlying graphics, it needs a version of Python installed with Tk support.
This function is used to pop up a dialog window for the input of a number. The number input must be in the range minval to maxval if these are given. If not, a hint is issued and the dialog remains open for correction.
Syntax :
turtle.numinput(title, prompt, default=None, minval=None, maxval=None)
Parameters:

title: title of the dialog window.
prompt: text describing what numerical information to input.
default: default value (optional).
minval: minimum value for input (optional).
maxval: maximum value for input (optional).
Below is the implementation of the above method with some examples :
Example 1 :
Python3
# import package
import turtle

turtle.numinput("title", "prompt")
Output :
Example 2 :
Python3
# import package
import turtle

# taking input
num = int(turtle.numinput("Contact Detail", "Phone no.",
                          default=9999999999,
                          minval=6000000000,
                          maxval=9999999999))
print(num)
Output :
9897347846
Remove duplicates from unsorted array using Map data structure | 31 May, 2022
Given an unsorted array of integers, print the array after removing the duplicate elements from it. We need to print distinct array elements according to their first occurrence. Examples:
Input : arr[] = { 1, 2, 5, 1, 7, 2, 4, 2}
Output : 1 2 5 7 4
Explanation : {1, 2} appear more than one time.
Approach :
Take a hash map, which will store all the elements which have appeared before.
Traverse the array.
Check if the element is present in the hash map.
If yes, continue traversing the array.
Else Print the element.
C++
Java
Python3
C#
Javascript
// C++ program to remove the duplicates from the array.
#include <iostream>
#include <unordered_map>
using namespace std;

void removeDups(int arr[], int n)
{
    // Hash map which will store the
    // elements which have appeared previously.
    unordered_map<int, bool> mp;

    for (int i = 0; i < n; ++i) {
        // Print the element if it is not
        // there in the hash map
        if (mp.find(arr[i]) == mp.end()) {
            cout << arr[i] << " ";
        }

        // Insert the element in the hash map
        mp[arr[i]] = true;
    }
}

int main(int argc, char const* argv[])
{
    int arr[] = { 1, 2, 5, 1, 7, 2, 4, 2 };
    int n = sizeof(arr) / sizeof(arr[0]);
    removeDups(arr, n);
    return 0;
}
// Java program to remove
// the duplicates from the array.
import java.util.HashMap;

class GFG {

    static void removeDups(int[] arr, int n)
    {
        // Hash map which will store the
        // elements which have appeared previously.
        HashMap<Integer, Boolean> mp = new HashMap<>();

        for (int i = 0; i < n; ++i) {
            // Print the element if it is not
            // there in the hash map
            if (mp.get(arr[i]) == null)
                System.out.print(arr[i] + " ");

            // Insert the element in the hash map
            mp.put(arr[i], true);
        }
    }

    // Driver Code
    public static void main(String[] args)
    {
        int[] arr = { 1, 2, 5, 1, 7, 2, 4, 2 };
        int n = arr.length;
        removeDups(arr, n);
    }
}

// This code is contributed by
// sanjeev2552
# Python 3 program to remove the
# duplicates from the array
def removeDups(arr, n):

    # dict to store every element
    # one time
    mp = {i: 0 for i in arr}

    for i in range(n):
        if mp[arr[i]] == 0:
            print(arr[i], end=" ")
            mp[arr[i]] = 1

# Driver code
arr = [1, 2, 5, 1, 7, 2, 4, 2]

# len of array
n = len(arr)

removeDups(arr, n)

# This code is contributed
# by Mohit Kumar
// C# program to remove
// the duplicates from the array.
using System;
using System.Collections.Generic;

class GFG {

    static void removeDups(int[] arr, int n)
    {
        // Hash map which will store the
        // elements which have appeared previously.
        Dictionary<int, Boolean> mp = new Dictionary<int, Boolean>();

        for (int i = 0; i < n; ++i) {
            // Print the element if it is not
            // there in the hash map
            if (!mp.ContainsKey(arr[i]))
                Console.Write(arr[i] + " ");

            // Insert the element in the hash map
            mp[arr[i]] = true;
        }
    }

    // Driver Code
    public static void Main(String[] args)
    {
        int[] arr = { 1, 2, 5, 1, 7, 2, 4, 2 };
        int n = arr.Length;
        removeDups(arr, n);
    }
}

// This code is contributed by Rajput-Ji
<script>

// JavaScript program to remove
// the duplicates from the array.
function removeDups(arr, n)
{
    // Hash map which will store the
    // elements which have appeared previously.
    let mp = new Map();

    for (let i = 0; i < n; ++i) {
        // Print the element if it is not
        // there in the hash map
        if (mp.get(arr[i]) == null)
            document.write(arr[i] + " ");

        // Insert the element in the hash map
        mp.set(arr[i], true);
    }
}

// Driver Code
let arr = [1, 2, 5, 1, 7, 2, 4, 2];
let n = arr.length;
removeDups(arr, n);

// This code is contributed by unknown2108

</script>
1 2 5 7 4
Time Complexity – O(N)
Auxiliary Space – O(N).
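As a side note, since Python 3.7 plain dicts preserve insertion order, so the same first-occurrence deduplication can be sketched in one line with dict.fromkeys (the same O(N) idea as the map-based solutions above):

```python
# dict.fromkeys keeps the first occurrence of each element because keys
# are stored in insertion order; converting back to a list gives the
# deduplicated sequence.
def remove_dups(arr):
    return list(dict.fromkeys(arr))

print(remove_dups([1, 2, 5, 1, 7, 2, 4, 2]))  # [1, 2, 5, 7, 4]
```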
std::string::push_back() in C++ | 06 Jul, 2017
The push_back() member function is provided to append characters: it appends character c to the end of the string, increasing its length by one.

Syntax:

void string::push_back(char c)

Parameters: the character to be appended.
Return value: None
Error: throws length_error if the resulting size exceeds the maximum number of characters (max_size).
// CPP code to illustrate
// std::string::push_back()
#include <iostream>
#include <string>
using namespace std;

// Function to demonstrate push_back()
void push_backDemo(string str1, string str2)
{
    // Appends character by character str2
    // at the end of str1
    for (int i = 0; str2[i] != '\0'; i++) {
        str1.push_back(str2[i]);
    }

    cout << "After push_back : ";
    cout << str1;
}

// Driver code
int main()
{
    string str1("Geeksfor");
    string str2("Geeks");

    cout << "Original String : " << str1 << endl;

    push_backDemo(str1, str2);
    return 0;
}
Output:
Original String : Geeksfor
After push_back : GeeksforGeeks
This article is contributed by Sakshi Tiwari.
spawn - Unix, Linux Command | spawn [generic Postfix daemon options] command_attributes...
This daemon expects to be run from the master(8) process
manager.
The text below provides only a parameter summary. See
postconf(5) for more details including examples.
In the text below, transport is the first field of the entry
in the master.cf file.
postconf(5), configuration parameters
master(8), process manager
syslogd(8), system logging
Wietse Venema
IBM T.J. Watson Research
P.O. Box 704
Yorktown Heights, NY 10598, USA
How to Extend an Object in JavaScript ? | 30 Jan, 2020
The extends keyword can be used to extend the objects as well as classes in JavaScript. It is usually used to create a class which is child of another class.
Syntax:
class childclass extends parentclass {...}
class parentclass extends in-built object {...}
Below example depicts how a child class uses properties of parent class using the keyword extends and by creating objects of the child class. In example 1, we see that class Profile has two attributes name and age. Now we will see that class Student acquires both attributes of class Profile using the keyword extends with an added attribute languages and then all attributes are displayed.
Example 1: In this example, we use the extends keyword.
Program:
<script>

// Declaring class
class Profile {

    // Constructor of profile class
    constructor(name, age) {
        this.name = name;
        this.age = age;
    }

    // Method to return name
    getName() {
        return this.name;
    }

    // Method to return age
    getAge() {
        return this.age;
    }

    getClass() {
        return this;
    }
}

// Class Student extends class Profile
class Student extends Profile {

    // Each data of class Profile can be
    // accessed from student class.
    constructor(name, age, languages) {

        // Acquiring of attributes of parent class
        super(name, age);
        this.lang = [...languages];
    }

    // Method to display all attributes
    getDetails() {
        console.log("Name : " + this.name);
        console.log("Age : " + this.age);
        console.log("Languages : " + this.lang);
    }
}

// Creating object of child class with passing of values
var student1 = new Student("Ankit Dholakia", 24,
    ['Java', 'Python', 'PHP', 'JavaScript']);
student1.getDetails();

</script>
Output:

Name : Ankit Dholakia
Age : 24
Languages : Java,Python,PHP,JavaScript
Example 2: In this example, we will see how spread syntax can be used to extend two objects into a third object and display the containing attributes of both objects.
Program:
<script>

// Creating first object
var obj1 = {
    name: 'Ankit',
    age: 20
};

// Creating second object
var obj2 = {
    marks: 50
};

// Using spread syntax to extend
// both objects into one
var object = {
    ...obj1,
    ...obj2
};

console.log(object);

</script>
Output:

{ name: 'Ankit', age: 20, marks: 50 }
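As an aside, Object.assign() is an older alternative to spread syntax for merging objects into a new one (later sources overwrite earlier keys). A minimal sketch:

```javascript
// Object.assign copies the own enumerable properties of the source
// objects into the target ({}), returning the merged target while
// leaving the sources untouched.
const obj1 = { name: 'Ankit', age: 20 };
const obj2 = { marks: 50 };

const merged = Object.assign({}, obj1, obj2);
console.log(merged);  // { name: 'Ankit', age: 20, marks: 50 }
```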
Compare the Case Insensitive strings in JavaScript | 22 Apr, 2019
Comparing strings in a case insensitive manner means to compare them without taking care of the uppercase and lowercase letters. To perform this operation the most preferred method is to use either toUpperCase() or toLowerCase() function.
toUpperCase() function: The str.toUpperCase() function converts the entire string to upper case. This function does not affect special characters, digits, or alphabets that are already in upper case.

Syntax:

string.toUpperCase()
toLowerCase() function: The str.toLowerCase() function converts the entire string to lower case. This function does not affect special characters, digits, or alphabets that are already in lower case.

Syntax:

string.toLowerCase()
Example 1: This example uses toUpperCase() function to compare two strings.
<!DOCTYPE html>
<html>

<head>
    <title>
        JavaScript | Case insensitive string comparison
    </title>
</head>

<body style="text-align:center;">

    <h1 style="color:green;">
        GeeksForGeeks
    </h1>

    <p id="GFG_up" style="color:green;"></p>

    <button onclick="myGeeks()">
        Click here
    </button>

    <p id="GFG_down" style="color:green;"></p>

    <script>
        var str1 = "this iS geeksForGeeKs";
        var str2 = "This IS GeeksfOrgeeks";
        var p_up = document.getElementById("GFG_up");
        p_up.innerHTML = str1 + "<br>" + str2;

        function myGeeks() {
            var p_down = document.getElementById("GFG_down");
            var areEqual = str1.toUpperCase() === str2.toUpperCase();
            p_down.innerHTML = areEqual;
        }
    </script>
</body>

</html>
Output:
Before clicking on the button:
After clicking on the button:
Example 2: This example uses toLoweCase() function to compare two strings.
<!DOCTYPE html>
<html>

<head>
    <title>
        JavaScript | Case insensitive string comparison
    </title>
</head>

<body style="text-align:center;">

    <h1 style="color:green;">
        GeeksForGeeks
    </h1>

    <p id="GFG_up" style="color:green;"></p>

    <button onclick="myGeeks()">
        Click here
    </button>

    <p id="GFG_down" style="color:green;"></p>

    <script>
        var str1 = "this iS geeks";
        var str2 = "This IS GeeksfOrgeeks";
        var p_up = document.getElementById("GFG_up");
        p_up.innerHTML = str1 + "<br>" + str2;

        function myGeeks() {
            var p_down = document.getElementById("GFG_down");
            var areEqual = str1.toLowerCase() === str2.toLowerCase();
            p_down.innerHTML = areEqual;
        }
    </script>
</body>

</html>
Output:
Before clicking on the button:
After clicking on the button:
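A third option, shown here as a sketch, is localeCompare() with the sensitivity option, which compares case-insensitively without building upper- or lower-cased copies:

```javascript
// With sensitivity 'accent', strings that differ only in case compare
// as equal; localeCompare returns 0 for equal strings.
function equalsIgnoreCase(a, b) {
    return a.localeCompare(b, undefined, { sensitivity: 'accent' }) === 0;
}

console.log(equalsIgnoreCase('this iS geeksForGeeKs',
                             'This IS GeeksfOrgeeks'));  // true
```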
howdoi in Python | 17 May, 2021
howdoi is a command-line tool written in Python. It gives the answers to do basic programming tasks, while working still in the console, directly from the command line. It scrapes code from the top answers on StackOverflow. You need an internet connection for using howdoi.
howdoi will answer all sorts of queries related to programming and coding. Like getting help in syntax, searching for libraries for a specific purpose, resolving errors, using pre-defined functions and their applications, etc.
Command to Install:
pip install howdoi
Usage:
howdoi QUERY
Other optional arguments are:
-h : show this help message and exit
-p POS : select answer in specified position (default: 1)
-al : display the full text of the answer
-l : display only the answer link
-c : enable colorized output
-n NUM_ANSWERS : number of answers to return
-C : clear the cache
-v : displays the current version of howdoi
Examples:
howdoi tells us that in C, a comment can be given by using “//”.
howdoi tells us that Python can be exited by using quit().
howdoi tells us how to check if a list in python is empty or not.
howdoi tells us that the most recent commit on git can be undone by the following the steps
howdoi tells us how to redirect to another webpage.
-l optional argument displays the link of the StackOverflow article from which the answer is taken.
Measuring script execution time in PHP | 04 Nov, 2018
Script execution time in PHP is the time required to execute the PHP script. To calculate script execution time use clock time rather than the CPU execution time. Knowing clock time before script execution and after script execution will help to know the script execution time.
Example: Sample script
<?php
for ($i = 1; $i <= 1000; $i++) {
    echo "Hello Geeks!";
}
?>
Clock time can be obtained using the microtime() function: call it once before the script's work starts and once after it ends, then apply the formula (end_time - start_time). The microtime() function returns the time in seconds. Execution time is not fixed; it depends upon the processor.
Example:
<?php

// Starting clock time in seconds
$start_time = microtime(true);

$a = 1;

// Start loop
for ($i = 1; $i <= 1000; $i++) {
    $a++;
}

// End clock time in seconds
$end_time = microtime(true);

// Calculate script execution time
$execution_time = ($end_time - $start_time);

echo " Execution time of script = " . $execution_time . " sec";
?>
Execution time of script = 1.6927719116211E-5 sec
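For comparison only (this article is about PHP), the same before/after wall-clock pattern looks like this in Python, where time.perf_counter() plays the role of microtime(true):

```python
# Measure elapsed wall-clock time around a small loop: record the clock
# before the work, again after it, and subtract.
import time

start = time.perf_counter()

total = 0
for i in range(1, 1001):
    total += 1

elapsed = time.perf_counter() - start

print(f"Execution time of script = {elapsed} sec")
```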
Dumping queue into list or array in Python | 31 Dec, 2020
Prerequisite: Queue in Python
Here, given a queue, our task is to dump it into a list or an array. We will see two methods to achieve this.
Example 1:
In this example, we will create a queue using the collection package and then cast it into the list
Python3
# Python program to
# demonstrate queue implementation
# using collections.deque

from collections import deque

# Initializing a queue
q = deque()

# Adding elements to a queue
q.append('a')
q.append('b')
q.append('c')

# display the queue
print("Initial queue")
print(q, "\n")

# display the type
print(type(q))
Output:
Initial queue
deque(['a', 'b', 'c'])
<class 'collections.deque'>
Let’s create a list and cast into it:
Python3
# convert into list
li = list(q)

# display
print("Convert into the list")
print(li)
print(type(li))
Output:
Convert into the list
['a', 'b', 'c']
<class 'list'>
Example 2:
In this example, we will create a queue using the queue module and then cast it into the list.
Python3
from queue import Queue

# Initializing a queue
que = Queue()

# Adding elements to a queue
que.put(1)
que.put(2)
que.put(3)
que.put(4)
que.put(5)

# display the queue
print("Initial queue")
print(que.queue)

# casting into the list
li = list(que.queue)
print("\nConverted into the list")
print(li)
Output:
Initial queue
deque([1, 2, 3, 4, 5])
Converted into the list
[1, 2, 3, 4, 5]
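One caveat worth noting: list(que.queue) only peeks at the underlying deque without consuming any items. To actually empty the queue into a list, pop items with get(), as in this single-threaded sketch:

```python
# Drain a queue.Queue into a list by repeatedly calling get() until the
# queue reports empty (fine in one thread; with producers still running
# you would loop on get() with a timeout instead).
from queue import Queue

def drain(q):
    items = []
    while not q.empty():
        items.append(q.get())
    return items

q = Queue()
for n in (1, 2, 3):
    q.put(n)

print(drain(q))   # [1, 2, 3]
print(q.empty())  # True
```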
Appending a dictionary to a list in Python | 03 Feb, 2022
In this article, we will discuss how to append the dictionary to the list data structure in Python.
Here we will discuss:
Appending a dictionary to a list with the same key and different values
Using append() method
Using copy() method to list using append() method
Using deepcopy() method to list using append() method
Using NumPy.
Here we are going to append dictionaries of integer type to an empty list inside a list comprehension, with the same key but different values, using the zip() function.
Syntax: list=[dict(zip([key],[x])) for x in range(start,stop)]
where, key is the key and range() is the range of values to be appended
Example: Python code to append 99 dictionaries, with key 1 and values 1 to 99, to a list

In this example, we take the numbers 1 to 99 as values, pair each of them with the key 1 using zip(), and append the resulting dictionaries to a list.
Python3
# append 99 values from 1 to 99
# with 1 as key to list
l = [dict(zip([1], [x])) for x in range(1, 100)]

# display list
print(l)
Output:
[{1: 1}, {1: 2}, {1: 3}, {1: 4}, {1: 5}, {1: 6}, {1: 7}, {1: 8}, {1: 9}, {1: 10}, {1: 11}, {1: 12}, {1: 13}, {1: 14}, {1: 15}, {1: 16}, {1: 17}, {1: 18}, {1: 19}, {1: 20}, {1: 21}, {1: 22}, {1: 23}, {1: 24}, {1: 25}, {1: 26}, {1: 27}, {1: 28}, {1: 29}, {1: 30}, {1: 31}, {1: 32}, {1: 33}, {1: 34}, {1: 35}, {1: 36}, {1: 37}, {1: 38}, {1: 39}, {1: 40}, {1: 41}, {1: 42}, {1: 43}, {1: 44}, {1: 45}, {1: 46}, {1: 47}, {1: 48}, {1: 49}, {1: 50}, {1: 51}, {1: 52}, {1: 53}, {1: 54}, {1: 55}, {1: 56}, {1: 57}, {1: 58}, {1: 59}, {1: 60}, {1: 61}, {1: 62}, {1: 63}, {1: 64}, {1: 65}, {1: 66}, {1: 67}, {1: 68}, {1: 69}, {1: 70}, {1: 71}, {1: 72}, {1: 73}, {1: 74}, {1: 75}, {1: 76}, {1: 77}, {1: 78}, {1: 79}, {1: 80}, {1: 81}, {1: 82}, {1: 83}, {1: 84}, {1: 85}, {1: 86}, {1: 87}, {1: 88}, {1: 89}, {1: 90}, {1: 91}, {1: 92}, {1: 93}, {1: 94}, {1: 95}, {1: 96}, {1: 97}, {1: 98}, {1: 99}]
Here, we are going to append a dictionary to a list using the append() method, which appends an item to the end of a list.
Syntax: list.append(dictionary)
where,
list is the input list
dictionary is an input dictionary to be appended
Example 1: Python code to append a dictionary to an empty list
Here we consider the student id as the key and the name as the value in a dictionary, and append the student dictionary to an empty list using the append() method.
Python3
# create an empty list
l = []

# create a dictionary with student details
student = {7058: 'sravan kumsr Gottumukkala',
           7059: 'ojaswi',
           7060: 'bobby',
           7061: 'gnanesh',
           7062: 'rohith'}

# append this dictionary to the empty list
l.append(student)

# display list
l
Output:
[{7058: 'sravan kumsr Gottumukkala',
7059: 'ojaswi',
7060: 'bobby',
7061: 'gnanesh',
7062: 'rohith'}]
Example 2: Append a dictionary to the list containing elements
Here we consider the student id as the key and the name as the value in a dictionary, and append the student dictionary to a list that already contains some elements using the append() method.
Python3
# create a list that contains some elements
l = [1, 2, "hi", "welcome"]

# create a dictionary with student details
student = {7058: 'sravan kumsr Gottumukkala',
           7059: 'ojaswi',
           7060: 'bobby',
           7061: 'gnanesh',
           7062: 'rohith'}

# append this dictionary to the list
l.append(student)

# display list
l
Output:
[1,
2,
'hi',
'welcome',
{7058: 'sravan kumsr Gottumukkala',
7059: 'ojaswi',
7060: 'bobby',
7061: 'gnanesh',
7062: 'rohith'}]
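One caveat with plain append(): the list stores a reference to the dictionary, not a snapshot, so later changes to the dictionary show up inside the list. A short sketch of this behavior (the values are just illustrative), which is exactly what the copy()-based approaches below avoid:

```python
# append() stores a reference, not a snapshot of the dictionary
student = {7058: 'sravan'}
plain, copied = [], []

plain.append(student)          # same dict object as `student`
copied.append(student.copy())  # independent shallow copy

student[7058] = 'changed'      # mutate the original afterwards

print(plain[0][7058])   # 'changed' - the list entry followed the change
print(copied[0][7058])  # 'sravan' - the copied entry did not
```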
Here, we are again going to append a dictionary to a list using the append() method, but this time we append a copy() of the dictionary so that the list stores an independent snapshot of it.
Syntax: dictionary.copy()
Example 1: Python code to append the dictionary to an empty list by using copy() method
Here we consider the student id as the key and the name as the value in a dictionary, and append a copy of the student dictionary to an empty list using the append() method together with copy().
Python3
# create an empty list
l = []

# create a dictionary with student details
student = {7058: 'sravan kumsr Gottumukkala',
           7059: 'ojaswi',
           7060: 'bobby',
           7061: 'gnanesh',
           7062: 'rohith'}

# append this dictionary to the
# empty list using copy() method
l.append(student.copy())

# display list
l
Output:
[{7058: 'sravan kumsr Gottumukkala',
7059: 'ojaswi',
7060: 'bobby',
7061: 'gnanesh',
7062: 'rohith'}]
Example 2: Append a dictionary to a list that already contains elements using the copy() method
Here we consider the student id as the key and the name as the value in a dictionary, and append a copy of the student dictionary to a list that already contains elements using the append() method together with copy().
Python3
# create a list that contains some elements
l = [1, 2, "hi", "welcome"]

# create a dictionary with student details
student = {7058: 'sravan kumsr Gottumukkala',
           7059: 'ojaswi',
           7060: 'bobby',
           7061: 'gnanesh',
           7062: 'rohith'}

# append this dictionary to the
# list using copy() method
l.append(student.copy())

# display list
l
Output:
[1,
2,
'hi',
'welcome',
{7058: 'sravan kumsr Gottumukkala',
7059: 'ojaswi',
7060: 'bobby',
7061: 'gnanesh',
7062: 'rohith'}]
A deep copy copies an object recursively, so nested objects are duplicated as well. Here we are going to append the dictionary to a list by deep copying it first.
Syntax: append(deepcopy())
Example 1: Append the dictionary to list using deep copy
Here we consider the student id as the key and the name as the value in a dictionary, and append a deep copy of the student dictionary to an empty list using the append() method together with deepcopy().
Python3
# import deepcopy module
from copy import deepcopy

# create an empty list
l = []

# create a dictionary with student details
student = {7058: 'sravan kumsr Gottumukkala',
           7059: 'ojaswi',
           7060: 'bobby',
           7061: 'gnanesh',
           7062: 'rohith'}

# append this dictionary to the
# empty list using deepcopy() method
l.append(deepcopy(student))

# display list
l
Output:
[{7058: 'sravan kumsr Gottumukkala',
7059: 'ojaswi',
7060: 'bobby',
7061: 'gnanesh',
7062: 'rohith'}]
Example 2: Appending the dictionary to the list that contains elements
Here we consider the student id as the key and the name as the value in a dictionary, and append a deep copy of the student dictionary to a list that already contains elements using the append() method together with deepcopy().
Python
# import deepcopy module
from copy import deepcopy

# create a list that contains some elements
l = [1, 2, "hi", "welcome"]

# create a dictionary with student details
student = {7058: 'sravan kumsr Gottumukkala',
           7059: 'ojaswi',
           7060: 'bobby',
           7061: 'gnanesh',
           7062: 'rohith'}

# append this dictionary to the
# list using deepcopy() method
l.append(deepcopy(student))

# display list
l
Output:
[1,
2,
'hi',
'welcome',
{7058: 'sravan kumsr Gottumukkala',
7059: 'ojaswi',
7060: 'bobby',
7061: 'gnanesh',
7062: 'rohith'}]
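The difference between copy() and deepcopy() only shows up with nested data: copy() is shallow, so nested objects are still shared between the original and the copy. A small sketch with made-up nested marks:

```python
from copy import deepcopy

# a dictionary with a nested structure (illustrative data)
student = {7058: {'name': 'sravan', 'marks': [90, 85]}}

shallow_list, deep_list = [], []
shallow_list.append(student.copy())   # shallow: nested dict is shared
deep_list.append(deepcopy(student))   # deep: nested dict is duplicated

student[7058]['marks'].append(100)    # mutate nested data afterwards

print(shallow_list[0][7058]['marks'])  # [90, 85, 100] - shared
print(deep_list[0][7058]['marks'])     # [90, 85] - independent
```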
NumPy (Numerical Python) is a library used to store and process arrays; here we are going to use NumPy to append a dictionary to a list.
Syntax: np.append(res_array, {‘key’: “value”}).tolist()
where,
res_array is the resultant array
append is used to append to array
tolist() is used to convert a list
Example: Python code to append a dictionary to list using NumPy method.
Here we create a list of subjects, build an array of dictionaries with 'GFG' as the key, append one more dictionary, and convert the result back to a list.
Python3
# import numpy library
import numpy as np

# define a list
subjects = ["PHP", "Java", "SQL"]

# iterating the elements in the list
# to create a numpy array of dictionaries
res_array = np.array([{'GFG': i} for i in subjects])

# append to the numpy array and convert back to a list
final = np.append(res_array, {'GFG': "ML/DL"}).tolist()

# Printing the appended data
print(final)
Output:
[{'GFG': 'PHP'}, {'GFG': 'Java'}, {'GFG': 'SQL'}, {'GFG': 'ML/DL'}]
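For comparison, the same result can be obtained with plain list concatenation, without going through a NumPy array at all. A short sketch:

```python
# plain-list equivalent of the NumPy approach above
subjects = ["PHP", "Java", "SQL"]
data = [{'GFG': i} for i in subjects]

# list concatenation instead of np.append(...).tolist()
final = data + [{'GFG': "ML/DL"}]
print(final)  # [{'GFG': 'PHP'}, {'GFG': 'Java'}, {'GFG': 'SQL'}, {'GFG': 'ML/DL'}]
```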
{
"code": null,
"e": 25665,
"s": 25637,
"text": "\n03 Feb, 2022"
},
{
"code": null,
"e": 25765,
"s": 25665,
"text": "In this article, we will discuss how to append the dictionary to the list data structure in Python."
},
{
"code": null,
"e": 25787,
"s": 25765,
... |
What is the Disadvantage of using innerHTML in JavaScript ? - GeeksforGeeks | 29 Jan, 2020
The innerHTML property is a part of the Document Object Model (DOM) that is used to set or return the HTML content of an element, where the return value represents the text content of that element. It allows JavaScript code to manipulate a website being displayed. The innerHTML property is widely used to modify the contents of a webpage as it is the easiest way of modifying the DOM. But there are some disadvantages to using innerHTML in JavaScript.
Disadvantages of using innerHTML property in JavaScript:
The use of innerHTML very slow: The process of using innerHTML is much slower as its contents as slowly built, also already parsed contents and elements are also re-parsed which takes time.
Preserves event handlers attached to any DOM elements: The event handlers do not get attached to the new elements created by setting innerHTML automatically. To do so one has to keep track of the event handlers and attach it to new elements manually. This may cause a memory leak on some browsers.
Content is replaced everywhere: Either you add, append, delete or modify contents on a webpage using innerHTML, all contents is replaced, also all the DOM nodes inside that element are reparsed and recreated.
Appending to innerHTML is not supported: Usually, += is used for appending in JavaScript. But when appending to an HTML tag using innerHTML, the whole tag is re-parsed.
Example:
<p id="geek">Geeks</p>

title = document.getElementById('geek')

// The whole "geek" tag is reparsed
title.innerHTML += '<p> forGeeks </p>'
Old content replaced issue: The old content is replaced even if object.innerHTML = object.innerHTML + 'html' is used instead of object.innerHTML += 'html'. There is no way of appending without reparsing the whole innerHTML, so working with innerHTML becomes very slow. String concatenation also does not scale when dynamic DOM elements need to be created, as the pluses and quote openings and closings become difficult to track.
Can break the document: There is no proper validation provided by innerHTML, so any valid HTML code can be used. This may break the document of JavaScript. Even broken HTML can be used, which may lead to unexpected problems.
Can also be used for Cross-site Scripting(XSS): The fact that innerHTML can add text and elements to the webpage, can easily be used by malicious users to manipulate and display undesirable or harmful elements within other HTML element tags. Cross-site Scripting may also lead to loss, leak and change of sensitive information.
Example:
<!DOCTYPE html>
<html>

<head>
    <title>
        Using innerHTML in JavaScript
    </title>
</head>

<body style="text-align: center">
    <h1 style="color:green">
        GeeksforGeeks
    </h1>
    <p id="P">
        A computer science portal for geeks.
    </p>
    <button onclick="geek()">
        Try it
    </button>
    <p id="p"></p>
    <script>
        function geek() {
            var x = document.getElementById("P").innerHTML;
            document.getElementById("p").innerHTML = x;
            document.getElementById("p").style.color = "green";
        }
    </script>
</body>

</html>
Output:
{
"code": null,
"e": 26655,
"s": 26627,
"text": "\n29 Jan, 2020"
},
{
"code": null,
"e": 27195,
"s": 26655,
"text": "The innerHTML property is a part of the Document Object Model (DOM) that is used to set or return the HTML content of an element. Where the return value represents... |
How to vertically flip an Image using MATLAB - GeeksforGeeks | 14 Jan, 2020
Prerequisite: Image representation in MATLAB
In MATLAB, Images are stored in matrices, in which each element of the matrix corresponds to a single discrete pixel of the image. We can flip the given image vertically (along the x-axis), if we reverse the order of the pixels (elements of the matrix) in each column as illustrated in the below image.
Code #1: Using MATLAB Library function
% Read the target image file
img = imread('leaf.png');

% Reverse the order of the elements in each column
vertFlip_img = flip(img, 1);

% Display the vertically flipped image
imshow(vertFlip_img);
title('Vertically flipped image');
Code #2: Using matrix manipulation
% Read the target image file
img = imread('leaf.png');

% Flip the columns vertically
vertFlip_img = img(end : -1 : 1, :, :);

% Display the vertically flipped image
imshow(vertFlip_img);
title('Vertically flipped image');
Code #3: Using matrix manipulation (Using loops)
Approach:
Read the source image file in MATLAB environment
Get the Dimensions of the image matrix
Reverse the order of the elements of each column in every image plane
Display the water image (vertically flipped image).
Below is the implementation of the above approach:
% Read the target image file
img = imread('leaf.png');

% Get the dimensions of the image
[x, y, z] = size(img);

% Reverse elements of each column
% in each image plane (dimension)
for plane = 1 : z
    len = x;
    for i = 1 : x
        for j = 1 : y
            % To reverse the order of the elements of a
            % column, swap the topmost element of the
            % column with its bottom-most element.
            % (i <= x/2 so each pair of rows is swapped
            % exactly once, including the middle pair
            % when x is even)
            if i <= x/2
                temp = img(i, j, plane);
                img(i, j, plane) = img(len, j, plane);
                img(len, j, plane) = temp;
            end
        end
        len = len - 1;
    end
end

% Display the vertically flipped image
imshow(img);
title('Vertically flipped image');
Input image: leaf.png
Output:
{
"code": null,
"e": 26269,
"s": 26241,
"text": "\n14 Jan, 2020"
},
{
"code": null,
"e": 26314,
"s": 26269,
"text": "Prerequisite: Image representation in MATLAB"
},
{
"code": null,
"e": 26617,
"s": 26314,
"text": "In MATLAB, Images are stored in matrices, in ... |
How to view folder NTFS permission with PowerShell? | To view the NTFS permission with PowerShell, we use the Get-ACL command. This command is supported in PowerShell version 5.1 or later. Generally how we get the security permission of the folder in the Windows OS using GUI,
To get the same permissions shown above using PowerShell, use the below command.
Get-Acl C:\Shared
PS C:\> Get-Acl C:\Shared
Directory: C:\
Path Owner Access
---- ----- ------
Shared BUILTIN\Administrators NT AUTHORITY\SYSTEM Allow FullControl...
You can compare the above output with the first image: the Owner of the folder is Administrators, and the second part is Access. To get all the users who have access to this folder, expand the Access property.
Get-Acl C:\Shared | Select -ExpandProperty Access
Let's convert the above output into table format to get a clearer view of the output, as shown in the first image.
Get-Acl C:\Shared | Select -ExpandProperty Access | ft -AutoSize
So you have everything that you can see in the first image of the folder security property like User rights, File System rights, Inherited or not, etc.
To view the specific user rights, you can filter with the username. For example,
Get-Acl C:\Shared | Select -ExpandProperty Access | where {$_.IdentityReference -like "*user*"} | ft -AutoSize
Similarly, you can also filter other properties like AccessControlType, IsInherited, etc. | [
{
"code": null,
"e": 1285,
"s": 1062,
"text": "To view the NTFS permission with PowerShell, we use the Get-ACL command. This command is supported in PowerShell version 5.1 or later. Generally how we get the security permission of the folder in the Windows OS using GUI,"
},
{
"code": null,
... |
How to change the size of dots in dotplot created by using ggplot2 in R? | To change the size of dots in dotplot created by using ggplot2, we can use binwidth argument inside geom_dotplot. For example, if we have a data frame called df that contains a column x for which we want to create the dotplot then the plot with different size of dots can be created by using the command ggplot(df,aes(x))+geom_dotplot(binwidth=2).
Consider the below data frame −
x<-rpois(20,5)
df<-data.frame(x)
df
x
1 1
2 3
3 6
4 3
5 5
6 11
7 2
8 3
9 2
10 6
11 5
12 4
13 4
14 6
15 8
16 6
17 8
18 9
19 4
20 7
Loading the ggplot2 package and creating a dotplot for the data in df with different sizes of dots −
library(ggplot2)
ggplot(df,aes(x))+geom_dotplot(binwidth=1)
ggplot(df,aes(x))+geom_dotplot(binwidth=0.5)
ggplot(df,aes(x))+geom_dotplot(binwidth=2) | [
{
"code": null,
"e": 1410,
"s": 1062,
"text": "To change the size of dots in dotplot created by using ggplot2, we can use binwidth argument inside geom_dotplot. For example, if we have a data frame called df that contains a column x for which we want to create the dotplot then the plot with differen... |
How to return variable value from jQuery event function? | You cannot return variable value from jQuery event function. To understand the reason, let us see how data is passed, which was created in the event handler of a button click.
<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
var a=document.getElementById('demo');
a.addEventListener('click', function(){
var s='Hello World!';
var event = new CustomEvent('build', { 'detail': s });
this.dispatchEvent(event);
})
document.getElementById('parent').addEventListener('build', getData, true);
function getData(e){
alert(e.detail);
}
});
</script>
</head>
<body>
<div id='parent'>
<button id='demo'>Click me</button>
</div>
</body>
</html> | [
{
"code": null,
"e": 1238,
"s": 1062,
"text": "You cannot return variable value from jQuery event function. To understand the reason, let us see how data is passed, which was created in the event handler of a button click."
},
{
"code": null,
"e": 1248,
"s": 1238,
"text": "Live D... |
Hands on Apache Beam, building data pipelines in Python | by Vincent Teyssier | Towards Data Science | Apache Beam is an open-source SDK which allows you to build multiple data pipelines from batch or stream based integrations and run it in a direct or distributed way. You can add various transformations in each pipeline. But the real power of Beam comes from the fact that it is not based on a specific compute engine and therefore is platform independant. You declare which “runner” you want to use to compute your transformation. It is using your local computing resource by default, but you can specify a Spark engine for example or Cloud Dataflow...
In this article, I will create a pipeline ingesting a csv file, computing the mean of the Open and Close columns fo a historical S&P500 dataset. The goal here is not to give an extensive tutorial on Beam features, but rather to give you an overall idea of what you can do with it and if it is worth for you going deeper in building custom pipelines with Beam. Though I only write about batch processing, streaming pipelines are a powerful feature of Beam!
Beam’s SDK can be used in various languages, Java, Python... however in this article I will focus on Python.
At the date of this article Apache Beam (2.8.1) is only compatible with Python 2.7, however a Python 3 version should be available soon. If you have python-snappy installed, Beam may crash. This issue is known and will be fixed in Beam 2.9.
pip install apache-beam
For this example we will use a csv containing historical values of the S&P 500. The data looks like that:
Date,Open,High,Low,Close,Volume
03-01-00,1469.25,1478,1438.359985,1455.219971,931800000
04-01-00,1455.219971,1455.219971,1397.430054,1399.420044,1009000000
To create a pipeline, we need to instantiate the pipeline object, eventually pass some options, and declaring the steps/transforms of the pipeline.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()
p = beam.Pipeline(options=options)
From the beam documentation:
Use the pipeline options to configure different aspects of your pipeline, such as the pipeline runner that will execute your pipeline and any runner-specific configuration required by the chosen runner. Your pipeline options will potentially include information such as your project ID or a location for storing files.
The PipelineOptions() method above is a command line parser that will read any standard option passed the following way:
--<option>=<value>
You can also build your custom options. In this example I set an input and an output folder for my pipeline:
class MyOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_argument('--input',
                            help='Input for the pipeline',
                            default='./data/')
        parser.add_argument('--output',
                            help='Output for the pipeline',
                            default='./output/')
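Since PipelineOptions is described above as a command line parser, the custom options behave much like plain argparse. A rough stand-alone sketch of the equivalent parsing (illustration only, not Beam internals):

```python
import argparse

# the same two options as MyOptions above, expressed with plain argparse
parser = argparse.ArgumentParser()
parser.add_argument('--input', help='Input for the pipeline',
                    default='./data/')
parser.add_argument('--output', help='Output for the pipeline',
                    default='./output/')

# simulate passing --input on the command line
args = parser.parse_args(['--input', './data/sp500.csv'])
print(args.input)   # ./data/sp500.csv
print(args.output)  # ./output/ (the default)
```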
In Beam, data is represented as a PCollection object. So to start ingesting data, we need to read from the csv and store this as a PCollection to which we can then apply transformations. The Read operation is considered as a transform and follows the syntax of all transformations:
[Output PCollection] = [Input PCollection] | [Transform]
These tranforms can then be chained like this:
[Final Output PCollection] = ([Initial Input PCollection] | [First Transform] | [Second Transform] | [Third Transform])
The pipe is the equivalent of an apply method.
The input and output PCollections, as well as each intermediate PCollection are to be considered as individual data containers. This allows to apply multiple transformations to the same PCollection as the initial PCollection is immutable. For example:
[Output PCollection 1] = [Input PCollection] | [Transform 1][Output PCollection 2] = [Input PCollection] | [Transform 2]
So let’s start by using one of the readers provided to read our csv, not forgetting to skip the header row:
csv_lines = (p | beam.io.ReadFromText(input_filename, skip_header_lines=1) | ...
At the other end of our pipeline we want to output a text file. So let’s use the standard writer:
... | beam.io.WriteToText(output_filename)
Now we want to apply some transformations to our PCollection created with the Reader function. Transforms are applied to each element of the PCollection individually.
Depending on the worker that you chose, your transforms can be distributed. Instances of your transformation are then executed on each node.
The user code running on each worker generates the output elements that are ultimately added to the final output PCollection that the transform produces.
Beam has core methods (ParDo, Combine) that allows to apply a custom transform , but also has pre written transforms called composite transforms. In our example we will use the ParDo transform to apply our own functions.
We have read our csv into a PCollection, so let’s split it so we can access the Date and Close items:
... beam.ParDo(Split()) ...
And define our split function so we only retain the Date and Close and return it as a dictionnary:
class Split(beam.DoFn):
    def process(self, element):
        Date, Open, High, Low, Close, Volume = element.split(",")
        return [{
            'Open': float(Open),
            'Close': float(Close),
        }]
Now that we have the data we need, we can use one of the standard combiners to calculate the mean over the entire PCollection.
The first thing to do is to represent the data as a tuple so we can group by a key and then feed CombineValues with what it expects. To do that we use a custom function “CollectOpen()” which returns a list of tuples containing (1, <open_value>).
class CollectOpen(beam.DoFn):
    def process(self, element):
        # Returns a list of tuples of the form (1, <open_value>)
        result = [(1, element['Open'])]
        return result
The first parameter of the tuple is fixed since we want to calculate the mean over the whole dataset, but you can make it dynamic to perform the next transform only on a sub-set defined by that key.
The GroupByKey function allows to create a PCollection of all elements for which the key (ie the left side of the tuples) is the same.
mean_open = (
    csv_lines
    | beam.ParDo(CollectOpen())
    | "Grouping keys Open" >> beam.GroupByKey()
    | "Calculating mean for Open" >> beam.CombineValues(
        beam.combiners.MeanCombineFn()
    )
)
When you assign a label to a transform, make sure it is unique, otherwise Beam will throw an error.
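Conceptually, the GroupByKey / CombineValues pair above behaves like this pure-Python sketch (illustration only, not Beam code; the numbers are made up):

```python
from collections import defaultdict

def group_by_key(pairs):
    """Collect all values sharing the same key, like beam.GroupByKey."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return dict(grouped)

# (1, <open_value>) tuples, as emitted by CollectOpen
pairs = [(1, 1469.25), (1, 1455.22), (1, 1441.47)]
grouped = group_by_key(pairs)
print(grouped)  # {1: [1469.25, 1455.22, 1441.47]}

# CombineValues with MeanCombineFn then reduces each group to its mean
means = {k: sum(v) / len(v) for k, v in grouped.items()}
print(means)
```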
Our final pipeline could look like that if we want to chain everything:
csv_lines = (
    p
    | beam.io.ReadFromText(input_filename)
    | beam.ParDo(Split())
    | beam.ParDo(CollectOpen())
    | "Grouping keys Open" >> beam.GroupByKey()
    | "Calculating mean" >> beam.CombineValues(
        beam.combiners.MeanCombineFn()
    )
    | beam.io.WriteToText(output_filename)
)
But we could also write it in a way that allows to add future transformation on the splitted PCollection (like a mean of the close for example):
csv_lines = (
    p
    | beam.io.ReadFromText(input_filename)
    | beam.ParDo(Split())
)

mean_open = (
    csv_lines
    | beam.ParDo(CollectOpen())
    | "Grouping keys Open" >> beam.GroupByKey()
    | "Calculating mean for Open" >> beam.CombineValues(
        beam.combiners.MeanCombineFn()
    )
)

output = (
    mean_open
    | beam.io.WriteToText(output_filename)
)
If I want to add another transform operation on the csv_lines PCollection I will obtain a second "transformed PCollection". Beam represents this very well in the form of "branched" transformations:
To apply the different transforms we would have :
csv_lines = (
    p
    | beam.io.ReadFromText(input_filename)
    | beam.ParDo(Split())
)

mean_open = (
    csv_lines
    | beam.ParDo(CollectOpen())
    | "Grouping keys Open" >> beam.GroupByKey()
    | "Calculating mean for Open" >> beam.CombineValues(
        beam.combiners.MeanCombineFn()
    )
)

# CollectClose is analogous to CollectOpen,
# emitting (1, element['Close']) instead
mean_close = (
    csv_lines
    | beam.ParDo(CollectClose())
    | "Grouping keys Close" >> beam.GroupByKey()
    | "Calculating mean for Close" >> beam.CombineValues(
        beam.combiners.MeanCombineFn()
    )
)
But now we have 2 PCollections: mean_open and mean_close, as a result of the transform. We need to merge/join these results to get a PCollection we could write on a file with our writer. Beam has the CoGroupByKeywhich is doing just that. Our output would then look like that:
output = (
    {
        'Mean Open': mean_open,
        'Mean Close': mean_close
    }
    | beam.CoGroupByKey()
    | beam.io.WriteToText(output_filename)
)
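Conceptually, CoGroupByKey joins several keyed collections on their common keys. A pure-Python sketch of the idea (not Beam code; the values are made up):

```python
def co_group_by_key(named_collections):
    """Join a dict of (key, value) lists by key, like beam.CoGroupByKey."""
    keys = set()
    for pairs in named_collections.values():
        keys.update(k for k, _ in pairs)
    return {key: {name: [v for k, v in pairs if k == key]
                  for name, pairs in named_collections.items()}
            for key in keys}

result = co_group_by_key({
    'Mean Open': [(1, 1482.57)],
    'Mean Close': [(1, 1482.76)],
})
print(result)  # {1: {'Mean Open': [1482.57], 'Mean Close': [1482.76]}}
```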
We have now our pipeline defined end to end. You can run it by command line using the custom arguments we have defined earlier:
python test_beam.py --input ./data/sp500.csv --output ./output/result.txt
The final result in the file looks like that:
(1, {‘Mean Close’: [1482.764536822227], ‘Mean Open’: [1482.5682959997862]})
In this example we only used the csv reader and text writer, but Beam has much more connectors (ufortunately most of them are available for the Java platform, but a few Python ones are in progress). You can find the list of available connectors and their documentation at:
beam.apache.org
You can also find a guide to write your own connectors if you feel brave enough:
beam.apache.org
Whenever a data pipeline needs to be implemented, we want to be clear on the requirements and the end goal of our pipeline/transformations. In Beam documentation I found this little extract which I think is the core of how you should reason when starting to build a pipeline with Beam:
Where is your input data stored? How many sets of input data do you have? This will determine what kinds of Read transforms you’ll need to apply at the start of your pipeline.
What does your data look like? It might be plaintext, formatted log files, or rows in a database table. Some Beam transforms work exclusively on PCollections of key/value pairs; you’ll need to determine if and how your data is keyed and how to best represent that in your pipeline’s PCollection(s).
What do you want to do with your data? The core transforms in the Beam SDKs are general purpose. Knowing how you need to change or manipulate your data will determine how you build core transforms like ParDo, or when you use pre-written transforms included with the Beam SDKs.
What does your output data look like, and where should it go? This will determine what kinds of Write transforms you’ll need to apply at the end of your pipeline.
As said earlier, instead of using the local compute power (DirectRunner) you can use a distributed compute engine such as Spark. You can do that by setting the following options to your pipeline options (in command line arguments or in an option list):
--runner SparkRunner --sparkMaster spark://host:port
More options are available here, but these 2 are the basics.
Beam is quite low-level when it comes to writing custom transformations, which gives you the flexibility you might need. It is fast and handles cloud / distributed environments. If you look at higher-level APIs/SDKs, some libraries like tf.transform are actually built on top of Beam and offer you its power while requiring less code. The trade-off lies in the flexibility you are looking for.
The code for this article is available on GitHub here.
I hope you enjoyed this article. If you did, feel free to clap or follow me :) | [
{
"code": null,
"e": 725,
"s": 171,
"text": "Apache Beam is an open-source SDK which allows you to build multiple data pipelines from batch or stream based integrations and run it in a direct or distributed way. You can add various transformations in each pipeline. But the real power of Beam comes f... |
Novelty Detection with Local Outlier Factor | by Cornellius Yudha Wijaya | Towards Data Science | Novelty detection might be a more rare term to be heard by some people compared to outlier detection. If outlier detection aims to find the anomaly or significantly different data in a dataset, novelty detection aims to determine new or unknown data to be an outlier or not.
Novelty detection is a semi-supervised analysis because we train on data that is not polluted by outliers, and we are then interested in detecting whether a new observation is an outlier by using the trained model. In this context, an outlier is also called a novelty.
In this article, I will explain how to perform novelty detection using the Local Outlier Factor (LOF). Let's get into it. In case you want to skip all the theory and just get to the coding part, skip the next section.
Local Outlier Factor or LOF is an algorithm proposed by Breunig et al. (2000). The concept is simple; the algorithm tries to find anomalous data points by measuring the local deviation of a given data point with respect to its neighbors. In this algorithm, LOF would yield a score that tells if our data is an outlier or not.
LOF(k) ~ 1 means Similar density as neighbors.
LOF(k) < 1 means Higher density than neighbors (Inlier/not an outlier).
LOF(k) > 1 means Lower density than neighbors (Outlier)
In the above equation, I have introduced you to a k parameter. In the LOF algorithm, the locality is given by k-nearest neighbors, whose distance is used to estimate the local density. The distance would then be used to measure the local deviation and pinpoint the anomaly.
k parameter is a parameter we set up to acquire a k-distance. The k-distance is the distance of a point to its kth neighbor. If k were 4, the k-distance would be the distance of a point to the fourth closest point. The distance itself is the metric of distance computation that we could choose. Often it is ‘Euclidean,’ but you could select other distances.
In case you are wondering, every data point in our dataset would be measured for the k-distance in order to measure the next step.
The distance is now used to define what we called reachability distance. This distance measure is either the maximum of the distance of two points or the k-distance of the second point. Formally, reachability distance is measured below.
reachability-distance k(A,B)=max{k-distance(B), d(A,B)}
Where A and B are the data point, and reachability distance between A and B is either the maximum of k-distance B or distance between A and B. if point A is within the k-neighbors of point B, then the reachability-distance k(A, B) will be the k-distance of B. Otherwise, it will be the distance between A and B.
Next, we would measure the local reachability density (lrd) by using the reachability distance. The equation to measured lrd is estimated as below.
lrd(A) = 1 / (sum(reachability-distance(A, i)) / k)
Where i are the neighbors of point A (points from which A can be "reached"). To get the lrd for point A, we calculate the reachability distances of A to all its k-nearest neighbors, sum them, and take the average. The lrd is then the inverse of that average. The LOF concept is all about densities and, therefore, if the distance to the next neighbors is longer, the particular point is located in a sparser area. Hence the less dense the area, the lower the lrd, which is why the inverse is taken.
Here, the lrd tells how far the point has to travel to reach the next point or cluster of points. The lower the lrd means that the longer the point has to go, and the less dense the group is.
The local reachability densities are then compared to those of the other neighbors with the following equation.
LOF(A) = sum(lrd(i)) / (k * lrd(A))
Where i runs over the k nearest neighbors of A. The lrd of each point is compared to the lrd of its k-neighbors: the LOF is the average ratio of the lrds of the neighbors of A to the lrd of A. If the ratio is greater than 1, the density of point A is smaller than the density of its neighbors. It means that, from point A, one has to travel much longer distances to get to the next point or cluster of points than from A's neighbors to their next neighbors.
The LOF tells the density of the point compared to the density of its neighbors. If the density of a point is smaller than the densities of its neighbors (LOF > 1), then it is considered an outlier because the point sits farther from the dense areas.
An advantage of the LOF algorithm is that it can perform well even on datasets where abnormal samples have different underlying densities. This is because LOF considers not how isolated the sample is, but how isolated it is compared to its surrounding neighborhood.
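To make these definitions concrete, here is a small hand-checkable sketch of my own (not from the original article): four 1-D points, k = 2, Euclidean distance, computing k-distance, reachability distance, lrd, and LOF in plain Python.

```python
points = [0.0, 1.0, 2.0, 10.0]  # three clustered points and one obvious outlier
k = 2

def k_neighbors(p):
    # the k nearest neighbors of p (excluding p itself), as (distance, point) pairs
    return sorted((abs(p - q), q) for q in points if q != p)[:k]

def k_distance(p):
    # distance from p to its k-th nearest neighbor
    return k_neighbors(p)[-1][0]

def reach_dist(a, b):
    # reachability-distance_k(A, B) = max{k-distance(B), d(A, B)}
    return max(k_distance(b), abs(a - b))

def lrd(p):
    # local reachability density: inverse of the mean reachability distance
    neighbors = [q for _, q in k_neighbors(p)]
    return 1.0 / (sum(reach_dist(p, q) for q in neighbors) / k)

def lof(p):
    # average ratio of the neighbors' lrd to p's own lrd; values above 1 suggest an outlier
    neighbors = [q for _, q in k_neighbors(p)]
    return sum(lrd(q) for q in neighbors) / (k * lrd(p))

for p in points:
    print(f"point {p}: lrd={lrd(p):.3f}, LOF={lof(p):.3f}")
```

The isolated point at 10.0 ends up with a low lrd and an LOF well above 1, while the three clustered points stay close to 1.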
We would use the LOF provided by scikit-learn to help us train our model.
#Importing the modules
import numpy as np  # numpy is used later for np.array
import pandas as pd
import seaborn as sns
from sklearn.neighbors import LocalOutlierFactor

#Importing the dataset. Here I use the mpg dataset as an example
mpg = sns.load_dataset('mpg')
For the introduction, let’s try using LOF as an Outlier Detection model before using the model for Novelty Detection.
#Setting up the model. k is set by passing the n_neighbors parameter as an integer.
#20 is often considered good enough to detect outliers.
#By default, the distance metric is Euclidean distance.
lof = LocalOutlierFactor(n_neighbors = 20)

#Training the model; I drop the few columns that are not continuous variables
mpg['lof'] = lof.fit_predict(mpg.drop(['cylinders', 'model_year', 'origin', 'name'], axis = 1))

#Getting the negative LOF score
mpg['negative_outlier_factor'] = lof.negative_outlier_factor_
mpg
Below is the result, where we get the LOF classification and the negative LOF score. If lof equals 1, the point is considered an inlier; if it is -1, it is an outlier. Let's try to get all the outlier data.
mpg[mpg['lof'] == -1]
Whether a record is flagged as an outlier (contamination) depends on the LOF score. By default, a record is considered an outlier if its negative_outlier_factor score is less than -1.5.
Now, let’s try to use the model for Novelty Detection. We need to train the model once more because there is an additional parameter we need to pass on.
#For novelty detection, we need to pass the novelty parameter as True
lof = LocalOutlierFactor(n_neighbors = 20, novelty = True)
lof.fit(mpg.drop(['cylinders', 'model_year', 'origin', 'name'], axis = 1))
We only train the model, and we would not use the training data at all to predict the data. Here we would try to predict the new unseen data.
#Predict new random unseen data; the dimensions must match the training data
lof.predict(np.array([109, 310, 190, 3411, 15]).reshape(1,-1))

Out: array([1])
The output would be either 1 or -1, depending on whether the point is an outlier or not. If you want to get the negative LOF score, we could also do that with the following code.
lof.score_samples(np.array([109, 310, 190, 3411, 15]).reshape(1,-1))

Out: array([-1.34887042])
The output would be the score in the form of an array. And that is it, I have shown you how to do a simple novelty detection with LOF.
Novelty Detection is the activity of detecting whether new unseen data is an outlier or not. Local Outlier Factor is an algorithm used for both Outlier Detection and Novelty Detection. Its behavior depends on the k parameter we pass in; k = 20 often works well in general, but if you suspect the data contain a higher number of outliers, you could increase the number.
I hope it helps.
If you are not subscribed as a Medium Member, please consider subscribing through my referral.
Identifying Duplicates in Snowflake With Zingg | by Sonal Goyal | Towards Data Science | It is a truth universally acknowledged that customer tables in warehouses are messy, with multiple records pointing to one single customer. There can be various reasons for this problem — data coming from offline stores and online channels, guest checkouts, multiple internal customer systems like CRMs and customer service...In fact, this problem is not limited to customer tables alone. Supplier tables, locations, and other non-transactional data(dimensions in traditional warehouse language) also have the same problem.
For those of us who rely on the warehouse for business decisions, these duplicates can completely toss away our metrics. How can we trust lifetime value if our customer tables do not have a solid customer ID? Segmentation, marketing attribution, personalization — what can we achieve without reliable dimension data? How about reverse ETL — feeding this data into our operational systems. If we pump our systems with duplicates from the warehouse, won't it mess up our day to day work?
Is the investment in the warehouse even worth it if we still can not identify our core entities correctly?
On the surface, this looks like an easy problem to solve — surely we have email ids that we can leverage? Unfortunately, people use work, personal, school and other email ids and though this is a start, it doesn't solve the problem. Let us not even get started with the different ways in which we enter names, addresses, and other details on web and print forms.
Let us take a look at our customer table in Snowflake. The data in the table is loaded from this csv.
The customer table does have an an SSN column, but that is not consistent in many cases and so we can not rely on it. The table does have identifier column, but it still has multiple records belonging to the same customer with different ids.
As an example, check the following two records belonging to customer Thomas George
Or the following five records all belonging to customer Jackson Eglinton
We could build some similarity rules and use SQL or programming to build our identifiers and match these records. However, this will soon get complex catering to the variations above. What if we use Snowflake's edit distance function? Or fuzzywuzzy or some such library? Unfortunately, we are dealing with a beast here — knowing which pairs to compare or compute the edit distance for is actually pretty important; otherwise we will end up with a cartesian join on multiple attributes(!!).
As an example, take a look at the number of comparisons we can run into as our number of records increase 10 fold or 100 fold. This table assumes that we are comparing on a single attribute. Hence it is evident that scalability is definitely a big challenge and needs to be really planned through.
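To make that scale concrete, here is a quick sketch of mine (not from the article) showing how the number of unordered pairwise comparisons grows with the record count:

```python
def pairwise_comparisons(n: int) -> int:
    # n records compared pairwise on a single attribute: n * (n - 1) / 2 pairs
    return n * (n - 1) // 2

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9,} records -> {pairwise_comparisons(n):>17,} comparisons")
```

Going from a thousand records to a million multiplies the work by a factor of a million, which is why naive pairwise matching does not scale.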
Fortunately, open-source has a solution(when does it not?). Looks like there is a tool called Zingg, built specifically to address this problem of entity resolution. (I need to put a disclaimer here, I am the author :))
Let us see how we can use Zingg to resolve our customers and identify duplicates.
Installation is straightforward, we need binaries of Java, Apache Spark and Zingg. Do NOT be intimidated if you are not a Java programmer or a distributed programming geek writing Spark programs on petabyte size clusters. Zingg uses these technologies under the hood so for most practical purposes, we can work off a single laptop or machine. Zingg is a learning-based tool, it trains on our data and does not transmit anything to an external party, so security and privacy are automatically taken care of when we run Zingg within our environment.
We need to tell Zingg where our Snowflake data is. For this, the Zingg config is set with our Snowflake instance and table details. Here is the excerpt of the configuration for our input CUSTOMERS table from Snowflake.
We also configure Zingg to write the output to the UNIFIED_CUSTOMERS table. This table does not exist in Snowflake yet, but Zingg will create it while writing its output, so we do not need to build it.
Let us now specify which attributes to use for matching, and what kind of matching we want. As an example, the first name attribute is set for FUZZY match type.
We do not wish to use the SSN for our matching, so that we can see how well the matching performs, so we mark that field as DO_NOT_USE. The other parts of the configuration are fairly boilerplate, you can check the entire configuration here.
Zingg learns what to match(scale) and how to match(similarity) based on training samples. It ships with an interactive learner which picks out representative sample pairs which the user can mark as acceptable matches or non-matches. Let us now build the training samples from which Zingg will learn. We pass the configuration to Zingg and run it in the findTrainingData phase. This is a simple command line execution.
zingg.sh --phase findTrainingData --conf examples/febrl/configSnow.json
Under the hood, Zingg does a lot of work during findTrainingData to spot the right representative pairs to build the training data for matching. The uncertain pairs get written to zinggDir/modelId as configured through the input json. But we do not need to worry about that. Once the job is finished, we will go to the next phase and mark or label the pairs.
zingg.sh --phase label --conf examples/febrl/configSnow.json
The above phase will bring up the interactive learner, which reads the work done by the findTrainingData phase and shows us record pairs to mark as matches or non-matches. This helps Zingg build machine learning models tailored to our data. This is how it looks:
Zingg selects different kinds of pairs — absolute non-matches, sure matches as well as doubtful cases so that a robust training set can be built. These records are selected after a very rigorous scan of the input so that proper generalization can be made and every single variation across attributes does not have to be hand labelled by the user. As an example, the following is an excerpt of Zingg output for our data.
The phases of findTrainingData and label are repeated a few times till 30–50 positive pairs are marked. This should be good enough to train Zingg to run on millions of records with reasonable accuracy. Each and every case need not be fed to Zingg, the learner automatically selects the representatives and generalizes through that. When unsure, one can always halt the learner and check Zingg's output and come back and train a bit more.
In our simplistic case of only 65 examples, one round of findTrainingData and label is enough and so we pause here. Now that we have the training data with the labels, we build the machine learning models by invoking the train phase. Internally, Zingg does hyperparameter search, feature weighing, threshold selection and other work to build a balanced model — one that does not leave out matches(recall) AND one that does not predict wrong matches(precision).
zingg.sh --phase train --conf examples/febrl/configSnow.json
The above will save the models and we can apply them to this and any other new data to predict matches. Retraining is not necessary as long as the schema — the attributes to be matched and the input format remains the same.
Let us now apply the models on our data and predict which records are indeed matches — or duplicates to each other.
zingg.sh --phase match --conf examples/febrl/configSnow.json
Peeking into Snowflake once the above has run shows us that a new table with the following columns has been created.
Zingg copies over the raw data, but adds 3 columns to each row of the output.
The Z_CLUSTER column is the customer id Zingg gives — matching or duplicate records get the same cluster identifier. This helps to group the matching records together.
The Z_MINSCORE column is an indicator of the least that the record matched any other record in the cluster.
The Z_MAXSCORE column is an indicator of the most that the record matched another record in the cluster.
Let us look at the records for customer Thomas George in the output. Both the records get the same z_cluster. No other record gets the same id. The scores are pretty good too, which means we can be confident of this match.
What happened to customer Jackson Eglinton? Here is what the output looks like
Again, the 5 records get an identifier distinct from the other records in the table. On inspecting the scores, we see that the minimum score of two records is close to 0.69, which means that the confidence of these record belonging to the cluster is low. Rightly so, as in one case, the street and address attributes are swapped. In the other, the last name is different from other records in the cluster.
Based on our data, we can decide how to use the scores provided. We could choose a cutoff on either of the scores to be confident of the matches and pipe the rest to another workflow — likely human review. We could take an average of the scores and use that if our case warrants it.
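As a sketch of what that routing could look like in pandas (the scores below are made up; the z_cluster and z_minscore column names follow the output described above):

```python
import pandas as pd

# made-up Zingg-style output: cluster ids plus the minimum match score per record
out = pd.DataFrame({
    "z_cluster":  [0, 0, 1, 1, 1],
    "z_minscore": [0.95, 0.92, 0.69, 0.88, 0.90],
})

cutoff = 0.80
confident = out[out["z_minscore"] >= cutoff]  # accept these matches automatically
review    = out[out["z_minscore"] <  cutoff]  # route these to human review
print(len(confident), len(review))
```

The cutoff value itself is a judgment call that depends on how costly a wrong merge is for your business.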
In the most likely scenario, the output of Zingg is used further along in the data pipeline as the definitive source of entity data. Zingg output either gets picked up for transformations by DBT and used thereof, or Zingg output is streamed to the lakehouse and utilized for data science.
In either case, the dimensions are accurate and we have a unified view of our core entities which we can trust.
Simplify your Dataset Cleaning with Pandas | by Ulysse Petit | Towards Data Science | I’ve heard a lot of analysts/data scientists saying they spend most of their time cleaning data. You’ve probably seen a lot of tutorials to clean your dataset but you probably know that already: it will never be 100% clean and you have to understand that point before continuing to read this article. So I’ll be honest with you: I won’t give you the magic recipe to get rid of all the data issues you might have with your dataset. The cleaning rules depend on the domain you are working on and the context of your project. The examples of this article come from my own experience with data manipulation in the real world.
I’ve dealt with all the issues/processes that I’m detailing in this article. The problem can come from the data source itself at times and you have to clean it, sometimes it’s just your colleague or your manager who requests some specific fields in the final file. Feel free to share the main issues you’ve seen from your experience in the comments.
You’ll need the latest Python release: 3.7+. I recommend using the Anaconda distribution to get Python, Pandas, and Jupyter. If you already have Anaconda installed, ignore the two following commands.
pip install seaborn
pip install pandas
pip install jupyter
Let’s start with a basic dataset I’ve found on Kaggle. This dataset is a search results report for “Flights” and “Tickets” queries searched on Google. When you search for a query on Google, the “organic” results will be ranked in the search result page and only 10 are listed for each query in this dataset. As you can guess, we might expect duplicates in some fields.
We will cover the following topics:
Remove useless characters
Extract relevant content from a Series
Check NaN values
Change the type of your Series
Open a new Jupyter notebook and import the dataset:
import os
import pandas as pd

df = pd.read_csv('flights_tickets_serp2018-12-16.csv')
We can check quickly how the dataset looks like with the 3 magic functions:
.info(): Shows the row count and the column types
df.info()
.describe(): Shows the main statistics for every numerical column in our dataset
df.describe()
.head(): Shows the first 5 rows of the dataset (you can change this number by passing an integer as a parameter)
df.head()
We can even show the column names because we have too many and they don’t fit in the screen:
df.columns
Now we have a better view of what we are dealing with. Starting from this point, depending on the context of your company and your objectives, you won't be looking for the same thing. Moreover, if this dataset will be used to feed a Machine Learning algorithm for training, or if you need to run an analysis for your manager, your output DataFrame won't look the same. But it's not the subject here. Let's continue with the first topic.
Why should we remove characters in a dataset full of data? Well, a dataset is not always built for your personal use-case but for multiple uses in general. Again, depending on your project, your focus will be different.
In the case where you are working on an NLP (=Natural Language Processing) project, you will need to get your text very clean and get rid of the unusual characters that don’t change the meaning for instance. If we look closer at the “title” column,
df['title']
we can see a few special characters to remove like: , . ( ) [ ] + | -
If you want to be safe, you can use a complete list of special characters and remove them using a loop:
spec_chars = ["!", '"', "#", "%", "&", "'", "(", ")",
              "*", "+", ",", "-", ".", "/", ":", ";", "<",
              "=", ">", "?", "@", "[", "\\", "]", "^", "_",
              "`", "{", "|", "}", "~", "–"]

for char in spec_chars:
    df['title'] = df['title'].str.replace(char, ' ')
Now you shouldn't have any of those characters in your title column. Because we replaced the special characters with a whitespace, we might end up with double whitespaces in some titles.
Let’s remove them by splitting each title using whitespaces and re-joining the words again using join.
df['title'] = df['title'].str.split().str.join(" ")
We're done with this column, we removed the special characters. Note that I didn't include the currency characters and the dot "." in the special characters list above. The reason is that some result titles contain the price of the flight tickets they are selling (e.g. "$514.98") and this information might be interesting to extract in the next section.
What is important when you work with a dataset is to make the dataset cleaner, readable. Your dataset might end up being ingested in a Machine Learning pipeline, that being said, the idea is to extract a lot of information from this dataset. I’ll show an example but the granularity of the information you will extract depends on your objectives.
Let’s say you want to extract all the prices in dollars from the results titles (i.e. the title column). We use a regex function to do that. You can do some tests with your regex function here if you want.
df['dollar_prices'] = df['title'].str.extract(r'(\$\d+\.?\d*)')
We’ve now extracted all the prices in dollars in the title column.
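As a quick sanity check on a toy Series (my own example, not the actual dataset), the same extraction pattern behaves like this:

```python
import pandas as pd

titles = pd.Series([
    "Cheap Flights from $514.98 - Book Now",
    "Compare Flight Tickets Online",
])
# extract a dollar amount like $514.98; titles without one yield NaN
prices = titles.str.extract(r'(\$\d+\.?\d*)')
print(prices[0].tolist())
```

The second title has no price, so the extraction produces NaN for that row, which leads us straight into the next section.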
We now display the unique values of the resulting column:
df['dollar_prices'].unique()
As you can see and it was expected, we have some NaN (=Not a Number) values (4th position in the array above). Some titles don’t have a dollar price so the regex rule couldn’t find it, instead, we have “nan”. We’ll see in the next section how to deal with the NaN values.
Most of the time, a big dataset will contain NaN values. What to do with them? Should we leave them? Absolutely not! You can’t leave them because of the calculations you might perform on numerical columns or just because of future modeling a Data Science team or even you could do. For those who know about Machine Learning, more or less, you know that your features/variables/predictors should be numerical before training a model. For all these reasons, we want to fix the NaN values. I won’t spend too much time dealing with the different strategies, there are plenty of articles explaining different strategies. What you can do:
Remove the corresponding rows: This can be done only if removing the rows doesn’t impact the distributions in your dataset or if they are not significant. If there are just 3 rows with some NaN values in your 1M dataset, it should be safe to remove the rows. Otherwise, you might have to go with the next options.
Use statistics to replace them (in numerical columns): You can replace the NaN values by the mean of the column. Be careful, you don’t want to skew the data. It might be more relevant to look at another column like a categorical one and replace the NaN values based on the mean for each category. Let me explain. In the Titanic dataset, for instance, each row represents a passenger. Some values in the Fares column are missing (NaN). In order to replace these NaN with a more accurate value, closer to the reality: you can, for example, replace them by the mean of the Fares of the rows for the same ticket type. You assume by doing this that people who bought the same ticket type paid roughly the same price, which makes sense.
Else: Try to know well your domain, your context which will help you to know how to replace in a safe way the missing variables. If you won’t use this dataset in a Machine Learning pipeline, you can replace the NaN with another value like a string (in case of a string type column) “None” or “NoData”, anything relevant for you and the people who will use the dataset.
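The ticket-type strategy from the second bullet can be sketched on a toy, Titanic-like frame (my own example, with hypothetical column names):

```python
import pandas as pd

df_toy = pd.DataFrame({
    "ticket_class": ["first", "first", "third", "third", "third"],
    "fare": [80.0, None, 8.0, 10.0, None],
})

# Fill each missing fare with the mean fare of the same ticket class
group_mean = df_toy.groupby("ticket_class")["fare"].transform("mean")
df_toy["fare"] = df_toy["fare"].fillna(group_mean)
print(df_toy["fare"].tolist())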
For more examples about dealing with the NaN values, I recommend reading an article with a focus on that particular point.
About our dataset, let’s see what is the proportion of NaN values using a visualization library called seaborn. You can do it with Pandas functions directly but I think it’s good to go first with a visual way so you get to know your data:
import seaborn as sns

sns.heatmap(df.isnull(), cmap='viridis')
In this heatmap, you can see in yellow (depending on the cmap you are using, but with mine it’s yellow) the NaN values in each column. It doesn’t mean that a column with no yellow doesn’t have any NaN, it happens to have some at the top of the heatmap or at the bottom so they are confused with the graphical borders of the heatmap (it’s due to the resolution of the image printed in the notebook). Anyway, don’t hesitate to show the raw data using the .isnull() Pandas function.
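For instance, on a small made-up frame, the per-column NaN counts come straight from .isnull().sum():

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": ["x", "y", None]})
# count missing values per column
nan_counts = toy.isnull().sum()
print(nan_counts.to_dict())
```

This gives exact counts, which complements the visual overview from the heatmap.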
In our case, we will just keep the rows where we have a dollar_price in the title so we will just keep the rows with a value in the dollar_price column:
df = df.dropna(subset=['dollar_prices'])
The resulting DataFrame contains only the rows where dollar_price has a value, the others have been dropped.
In this dataset, we might not have a lot of type changes to do. It depends on your context and the quality of the dataset. If you’re happy with the column types as we saw with df.info(), that’s fine.
The input file might come from a colleague of yours who saved the data into an Excel file (.xlsx). In that case, and that’s because Excel is terrible for this, the format of your columns could be changed especially for your Ids columns. By “Ids” I mean a column that represents an Id for one of your listed entities. For instance, if you’re manipulating a dataset containing an inventory of products, you’ll probably have a ProductId column in it. Anyway, numerical Ids in Excel can be interpreted as a numerical column or text column and to be honest with you I don’t know when Excel chooses one or the other.
For your data manipulation, it depends on how you want to treat Ids in your script, do you want to manipulate text (string) or numerical values (integer)? That’s up to you, but be consistent in your script, if you need to join two DataFrames, based on an Id column, for instance, convert them first in the same format (string or integer but you have to choose one).
Let’s go back to our dataset. In the second section, we created a new column containing the prices in dollars. At this point, we know there are in dollars, right? Because this information about the currency is in the column name. That being said, we can get rid of the dollar characters:
# regex=False so '$' is treated as a literal character, not as a regex anchor
df['dollar_prices'] = df['dollar_prices'].str.replace('$', '', regex=False)
The last step is to check the type of this column. We should have a string as we extracted the data with the regex rule as strings:
df.dtypes
Here our dollar_prices is an object that means a string actually. Do we really want to consider our prices as strings? We will prefer to use floats for the prices, we just need to convert the column type.
df['dollar_prices'] = pd.to_numeric(df['dollar_prices'], errors='coerce')
If you show the dtypes one more time, you’ll notice that the dollar_prices column is no longer an object but it’s now a float64 type.
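The errors='coerce' argument is what keeps unparseable strings from raising an error: they simply become NaN. A small standalone check (toy data of my own):

```python
import pandas as pd

s = pd.Series(["514.98", "not-a-price", "23"])
# coerce: any value that cannot be parsed as a number becomes NaN
parsed = pd.to_numeric(s, errors="coerce")
print(parsed.tolist())
```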
I hope you found some answers to the questions you might have asked yourself. As I said in the beginning, there is no universal way to clean a dataset, some checks have to be done like the NaN values for instance. The rest really depends on what level of quality do you need for the project? What data are relevant to extract?
Also, keep in mind that everything you can extract from your dataset (section 2) might be really helpful for visualization specialists who will build dashboards using your dataset as a source.
The Jupyter notebook is available in my GitHub just here.
Don’t hesitate to write comments and questions.
How to generate Prime Numbers in JavaScript? | To generate prime numbers in JavaScript, you can try to run the following code
Live Demo
<html>
<head>
<title>JavaScript Prime</title>
</head>
<body>
<script>
         for (var limit = 2; limit <= 20; limit++) {
var a = false;
for (var i = 2; i <= limit; i++) {
if (limit%i===0 && i!==limit) {
a = true;
}
}
if (a === false) {
document.write("<br>"+limit);
}
}
</script>
</body>
</html>
How to use the ValidateCount attribute in PowerShell Function? | The validateCount attribute in PowerShell function is to validate the length of an array, which means
that you can pass the specific number of arguments into the parameter. In the below example we need array should contain a minimum 1 and maximum 4 values when we pass the values. For that, we will
write the below script,
Function ValidateArray {
Param (
[ValidateCount(1,3)]
[String[]]$Animals
)
return $PSBoundParameters
}
PS C:\> ValidateArray -Animals Cow, Dog, Cat
Key Value
--- -----
Animals {Cow, Dog, Cat}
The above output is valid, but when we pass an empty array or 4 values, it becomes invalid because we
declared that the array should have a length between 1 and 3.
PS C:\> ValidateArray -Animals @()
ValidateArray: Cannot validate argument on parameter 'Animals'. The parameter req
uires at least 1 value(s) and no more than 3 value(s) - 0 value(s) were provided.
PS C:\> ValidateArray -Animals Cow, Dog, Cat, Tiger
ValidateArray: Cannot validate argument on parameter 'Animals'. The parameter req
uires at least 1 value(s) and no more than 3 value(s) - 4 value(s) were provided.
Explain unformatted input and output functions in C language | Unformatted input and output functions read a single input sent by the user and permits to display the value as the output at the console.
The unformatted input functions in C programming language are explained below −
It reads a character from the keyboard.
The syntax of getchar() function is as follows −
Variablename=getchar();
For example,
char a;
a = getchar();
Following is the C program for getchar() function −
Live Demo
#include<stdio.h>
int main(){
char ch;
FILE *fp;
fp=fopen("file.txt","w"); //open the file in write mode
printf("enter the text then press cntrl Z:\n");
while((ch = getchar())!=EOF){
putc(ch,fp);
}
fclose(fp);
fp=fopen("file.txt","r");
printf("text on the file:\n");
while ((ch=getc(fp))!=EOF){
if(fp){
char word[100];
while(fscanf(fp,"%s",word)!=EOF) // read words from file{
printf("%s\n", word); // print each word on separate lines.
}
fclose(fp); // close file.
}else{
printf("file doesnot exist");
// then tells the user that the file does not exist.
}
}
return 0;
}
When the above program is executed, it produces the following result −
enter the text then press cntrl Z:
This is an example program on getchar()
^Z
text on the file:
This
is
an
example
program
on
getchar()
It reads a string from the keyboard
The syntax for gets() function is as follows −
gets(variablename);
Live Demo
#include<stdio.h>
#include<string.h>
int main(){
char str[10];
printf("Enter your name: \n");
gets(str);
printf("Hello %s welcome to Tutorialspoint", str);
}
Enter your name:
Madhu
Hello Madhu welcome to Tutorialspoint
The unformatted output functions in C programming language are as follows −
It displays a character on the monitor.
The syntax for putchar() function is as follows −
putchar(variablename);
For example,
putchar('a');
It displays a string on the monitor.
The syntax for puts() function is as follows −
puts(variablename);
For example,
puts("tutorial");
Following is the C program for putc and getc functions −
Live Demo
#include <stdio.h>
int main(){
char ch;
FILE *fp;
fp=fopen("std1.txt","w");
printf("enter the text.press cntrl Z:\n");
while((ch = getchar())!=EOF){
putc(ch,fp);
}
fclose(fp);
fp=fopen("std1.txt","r");
printf("text on the file:\n");
while ((ch=getc(fp))!=EOF){
putchar(ch);
}
fclose(fp);
return 0;
}
When the above program is executed, it produces the following result −
enter the text.press cntrl Z:
This is an example program on putchar()
^Z
text on the file:
This is an example program on putchar() | [
{
"code": null,
"e": 1201,
"s": 1062,
"text": "Unformatted input and output functions read a single input sent by the user and permits to display the value as the output at the console."
},
{
"code": null,
"e": 1281,
"s": 1201,
"text": "The unformatted input functions in C progra... |
Java Concurrency - Executor Interface | A java.util.concurrent.Executor interface is a simple interface to support launching new tasks.
void execute(Runnable command)
Executes the given command at some time in the future.
The following TestThread program shows usage of Executor interface in thread based environment.
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class TestThread {
public static void main(final String[] arguments) throws InterruptedException {
Executor executor = Executors.newCachedThreadPool();
executor.execute(new Task());
ThreadPoolExecutor pool = (ThreadPoolExecutor)executor;
pool.shutdown();
}
static class Task implements Runnable {
public void run() {
try {
Long duration = (long) (Math.random() * 5);
System.out.println("Running Task!");
TimeUnit.SECONDS.sleep(duration);
System.out.println("Task Completed");
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
This will produce the following result.
Running Task!
Task Completed
Python for finance: automated analysis of the financial markets | by Riccardo Poli | Towards Data Science | The Python script presented in this article has been used to analyse the impact of COVID-19 on the various business sectors of the S&P 500 index, but can also be easily adapted to any other analysis of financial markets.
The code has been developed in the Python programming language and can be found in my GitHub repository (link below) as a Jupyter notebook.
github.com
The script makes use of standard Python packages (i.e. pandas, bokeh, math) as well as the yfinance API (Application Programming Interface), that is used to download the S&P 500 stock prices. The API is free to use and it is public, meaning that the user does not need an individual API key.
# Import libraries
import yfinance as yf
import pandas as pd
from bokeh.plotting import figure
import bokeh.models as bmo
from bokeh.palettes import Paired11
from bokeh.io import show
from bokeh.models import ColumnDataSource, HoverTool
import math
In the first section of the script, the user needs to define the variable called “depth”, which defines the level of detail of the analysis. This can either be set to “sector” or “sub_sector”. Choosing “sector” will produce a plot such as the one in Figure 1, whereas “sub_sector” will produce plots similar to the ones in Figure 2 and Figure 3. In this last case, the user also has to specify the “filter”, that is, the sector of interest. The available values of filter are: Communication Services, Consumer Discretionary, Consumer Staples, Energy, Financials, Health Care, Industrials, Information Technology, Materials, Real Estate and Utilities.
# Example of input definition
depth = 'sub_sector'
filter = 'Information Technology'
All the other inputs such as the list of S&P 500 stocks, and the date to compare current market performance against (beginning of 2020) are automatically set.
index_name = 'SP_500'
companies = pd.read_html('https://en.wikipedia.org/wiki/List_of_S%26P_500_companies', flavor='bs4')[0]
At this point, the data are downloaded and all the calculations are performed. As anticipated, the yfinance API is used to gather the financial data.
df_all = pd.DataFrame()
color_df = pd.DataFrame({0})
for stock in companies_codes:
    stock_data = yf.Ticker(stock.replace(".",""))
    stock_name = companies[company_label].loc[companies[code_label] == stock].values[0]
    df_all[stock_name] = stock_data.history(start="2020-01-01")['Close']
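Once the close prices are in df_all, the year-to-date performance plotted later boils down to a percentage change between the first and the latest close of 2020. The sketch below shows that calculation on a small synthetic DataFrame; the column names and prices are made up for illustration, while the real script works on the downloaded S&P 500 prices.

```python
# Minimal sketch of a year-to-date performance calculation on
# synthetic close prices (illustrative data, not real quotes).
import pandas as pd

df_all = pd.DataFrame({
    "Stock A": [100.0, 110.0, 120.0],
    "Stock B": [50.0, 45.0, 40.0],
}, index=pd.to_datetime(["2020-01-02", "2020-03-02", "2020-06-01"]))

# Percentage change between the first close of 2020 and the latest close
ytd_performance = (df_all.iloc[-1] / df_all.iloc[0] - 1) * 100
print(ytd_performance)
```

A positive value means the stock gained since the beginning of the year, a negative value a loss; averaging these per sector gives bars like the ones in Figure 1.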
The last section is where all plots are created.
if depth == 'sector':
    df_sector = df_summary.groupby(['Sector']).mean()
    df_sector = df_sector.sort_values(by=['YTD_performance'], ascending=False)
    source2 = ColumnDataSource(data=dict(cum_sum_ytd=df_sector.YTD_performance.to_list(),
                                         sector=df_sector.index.to_list()))
    p = figure(plot_height=700, plot_width=1200, x_range=df_sector.index.to_list(),
               toolbar_location='right', y_range=[-50, 50],
               title='Year-to-date performance of individual sectors (S&P 500 stocks)',
               tools="save")
    color_map = bmo.CategoricalColorMapper(factors=df_summary['Sector'].unique(),
                                           palette=Paired11)
    p.vbar(x='sector', top='cum_sum_ytd', width=0.9, source=source2,
           fill_color={'field': 'sector', 'transform': color_map},
           legend='sector', line_width=0)
    p.xaxis.major_label_orientation = math.pi/3
    p.yaxis.axis_label = 'Year to date average performance (%)'
    p.title.text_font_size = '12pt'
    p.yaxis.axis_label_text_font_size = '12pt'
    show(p)
You can find further comments to the code and additional details in the notebook on the GitHub page.
In this section, the outputs of the code presented above are summarised in Figures 1, 2 and 3.
COVID-19 has had an unprecedented impact on our lives, our habits, on the real economy and on the financial markets. The financial shock caused by the pandemic has not been uniform, and the various business sectors have responded differently to the crisis.
It can be seen from Figure 1 that all sectors, as an average, have been negatively impacted by the pandemic, with the highest losses in the energy sector. In fact, due to a large portion of the world population being in lockdown, the demand for oil and gas has significantly dropped. The industrial sector has experienced significant losses as well, especially in the “aerospace and defence” and “airlines” sub-sectors. The main reason is the reduced demand for these services, due to the restrictions in place and the fear of COVID-19 spread.
Health Care
Health care is the business sector that has suffered the least, with some companies in the biotechnology sub-sector gaining an average +16% in stock price, Figure 2. This is because many companies are investing resources on the COVID-19 vaccine. The company to win the race could have stellar profits.
Communication Services
Smart working and the desire of people to speak and to see their loved ones have increased demand for video conferencing apps and software in these unprecedented times. As people are quarantined in many countries, they tend to spend more time on streaming services. As a consequence, home entertainment and telecom services have performed well during the pandemic (+12% and +23%, respectively), Figure 3.
If you want to read more on the topic, please refer to this article I have written a few weeks ago (May 2020) to explain the effects of COVID-19 on each business sector and sub-sector.
Stay safe, and happy trading!
Riccardo Poli: https://www.linkedin.com/in/riccardopoli/
Disclaimer: investing in the stock market involves risk and can lead to monetary loss. The content of this article is not to be taken as financial advice.
Count number of occurrences for each char in a string with JavaScript? | Take an array to store the frequency of each character. If similar character is found, then
increment by one otherwise put 1 into that array.
Let’s say the following is our string −
var sentence = "My name is John Smith";
Following is the JavaScript code to count occurrences −
var sentence = "My name is John Smith";
sentence=sentence.toLowerCase();
var noOfCountsOfEachCharacter = {};
var getCharacter, counter, actualLength, noOfCount;
for (counter = 0, actualLength = sentence.length; counter <
actualLength; ++counter) {
getCharacter = sentence.charAt(counter);
noOfCount = noOfCountsOfEachCharacter[getCharacter];
noOfCountsOfEachCharacter[getCharacter] = noOfCount ? noOfCount + 1: 1;
}
for (getCharacter in noOfCountsOfEachCharacter) {
if(getCharacter!=' ')
console.log("Character="+getCharacter + " Occurrences=" +
noOfCountsOfEachCharacter[getCharacter]);
}
To run the above program, you need to use the following command −
node fileName.js.
Here, my file name is demo40.js.
This will produce the following output −
PS C:\Users\Amit\JavaScript-code> node demo40.js
Character=m Occurrences=3
Character=y Occurrences=1
Character=n Occurrences=2
Character=a Occurrences=1
Character=e Occurrences=1
Character=i Occurrences=2
Character=s Occurrences=2
Character=j Occurrences=1
Character=o Occurrences=1
Character=h Occurrences=2
Character=t Occurrences=1
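For comparison only (the article itself is about JavaScript), the same tally can be written in a few lines of Python with collections.Counter:

```python
# Count character occurrences, ignoring spaces, with a Counter.
from collections import Counter

sentence = "My name is John Smith".lower()
counts = Counter(ch for ch in sentence if ch != ' ')
for character, occurrences in counts.items():
    print(f"Character={character} Occurrences={occurrences}")
```

Counter does the "increment or start at 1" bookkeeping that the JavaScript version spells out by hand.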
MySQL difference between two timestamps in Seconds? | You can use in-built function UNIX_TIMESTAMP() from MySQL to get the timestamps and the difference between two timestamps. The syntax is as follows −
SELECT UNIX_TIMESTAMP(yourColumnName1) - UNIX_TIMESTAMP(yourColumnName2) as anyVariableName from yourTableName;
To understand the above concept, let us create a table. The following is the query to create a table −
mysql> create table DifferenceInSeconds
   -> (
   -> FirstTimestamp TIMESTAMP,
   -> SecondTimestamp TIMESTAMP
   -> );
Query OK, 0 rows affected (0.93 sec)
Insert some records in the table using insert command. The query is as follows −
mysql> insert into DifferenceInSeconds values('2012-12-12 13:16:55','2012-12-12 13:13:55');
Query OK, 1 row affected (0.31 sec)
mysql> insert into DifferenceInSeconds values('2014-10-11 12:15:50','2014-10-11 12:13:50');
Query OK, 1 row affected (0.19 sec)
mysql> insert into DifferenceInSeconds values('2018-12-14 13:30:53','2018-12-14 13:27:53');
Query OK, 1 row affected (0.21 sec)
Now display all records from the table using select statement. The query is as follows −
mysql> select *from DifferenceInSeconds;
The following is the output −
+---------------------+---------------------+
| FirstTimestamp | SecondTimestamp |
+---------------------+---------------------+
| 2012-12-12 13:16:55 | 2012-12-12 13:13:55 |
| 2014-10-11 12:15:50 | 2014-10-11 12:13:50 |
| 2018-12-14 13:30:53 | 2018-12-14 13:27:53 |
+---------------------+---------------------+
3 rows in set (0.00 sec)
Here is the query to find the difference between two timestamps in seconds. The query is as follows −
mysql> SELECT UNIX_TIMESTAMP(FirstTimestamp) - UNIX_TIMESTAMP(SecondTimestamp) as Seconds from DifferenceInSeconds;
The following is the output −
+---------+
| Seconds |
+---------+
| 180 |
| 120 |
| 180 |
+---------+
3 rows in set (0.00 sec)
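The subtraction performed with UNIX_TIMESTAMP() can be cross-checked outside MySQL; a small Python sketch using the standard datetime module reproduces the first row of the result above:

```python
# Difference of two datetimes is a timedelta; total_seconds() mirrors
# UNIX_TIMESTAMP(FirstTimestamp) - UNIX_TIMESTAMP(SecondTimestamp).
from datetime import datetime

first = datetime(2012, 12, 12, 13, 16, 55)
second = datetime(2012, 12, 12, 13, 13, 55)
seconds = (first - second).total_seconds()
print(seconds)  # 180.0
```

As with the SQL version, the sign of the result depends on which timestamp is larger, which is why the ABS() note below matters.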
Note − If you do not know which timestamp is greater, then use ABS().
The syntax is as follows −
SELECT ABS(UNIX_TIMESTAMP(yourColumnName1) - UNIX_TIMESTAMP(yourColumnName2)) as Seconds from DifferenceInSeconds;
To check the above syntax, let us insert record in which first timestamp has lower value.
mysql> insert into DifferenceInSeconds values('2018-12-14 13:26:53','2018-12-14 13:31:53');
Query OK, 1 row affected (0.21 sec)
The query to display all records from the table.
mysql> select *from DifferenceInSeconds;
The following is the output −
+---------------------+---------------------+
| FirstTimestamp | SecondTimestamp |
+---------------------+---------------------+
| 2012-12-12 13:16:55 | 2012-12-12 13:13:55 |
| 2014-10-11 12:15:50 | 2014-10-11 12:13:50 |
| 2018-12-14 13:30:53 | 2018-12-14 13:27:53 |
| 2018-12-14 13:26:53 | 2018-12-14 13:31:53 |
+---------------------+---------------------+
4 rows in set (0.00 sec)
The following is the use of ABS() function. The query is as follows −
mysql> SELECT ABS(UNIX_TIMESTAMP(FirstTimestamp) - UNIX_TIMESTAMP(SecondTimestamp)) as Seconds from DifferenceInSeconds;
The following is the output −
+---------+
| Seconds |
+---------+
| 180 |
| 120 |
| 180 |
| 300 |
+---------+
4 rows in set (0.00 sec)
Note − If you do not use ABS(), then -300 seconds will be the above output.
How to switch between different Activities in Android using Kotlin? | This example demonstrates how to switch between different Activities in Android using Kotlin.
Step 1 − Create a new project in Android Studio, go to File → New Project and fill in all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_marginTop="40dp"
android:textColor="@android:color/holo_purple"
android:text="MainActivity"
android:textSize="24sp"
android:textStyle="bold" />
<Button
android:id="@+id/btnOpenAct2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:text="Open Second Activity" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.kt
import android.content.Intent
import android.os.Bundle
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = "KotlinApp"
val button:Button = findViewById(R.id.btnOpenAct2)
button.setOnClickListener {
val intent = Intent(this@MainActivity, NewActivity::class.java)
startActivity(intent)
}
}
}
Step 4 − Create a new empty Activity and add the following code −
NewActivity.kt
import android.content.Intent
import android.os.Bundle
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity
class NewActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_new)
title = "KotlinApp" val button: Button = findViewById(R.id.btnOpenMain)
button.setOnClickListener {
val i = Intent(this@NewActivity, MainActivity::class.java)
startActivity(i)
}
}
}
activity_new.xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_marginTop="40dp"
android:text="New Activity"
android:textColor="@android:color/holo_green_dark"
android:textSize="24sp"
android:textStyle="bold" />
<Button
android:id="@+id/btnOpenMain"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:text="Go back to MainActivity" />
</RelativeLayout>
Step 5 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.q11">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
Boolean Parenthesization Problem | DP-37 - GeeksforGeeks | 22 Sep, 2021
Given a boolean expression with the following symbols.
Symbols
'T' ---> true
'F' ---> false
And the following operators filled between the symbols
Operators
& ---> boolean AND
| ---> boolean OR
^ ---> boolean XOR
Count the number of ways we can parenthesize the expression so that the value of the expression evaluates to true. Let the input be in the form of two arrays: one contains the symbols (T and F) in order, and the other contains the operators (&, | and ^).
Examples:
Input: symbol[] = {T, F, T}
operator[] = {^, &}
Output: 2
The given expression is "T ^ F & T", it evaluates true
in two ways "((T ^ F) & T)" and "(T ^ (F & T))"
Input: symbol[] = {T, F, F}
operator[] = {^, |}
Output: 2
The given expression is "T ^ F | F", it evaluates true
in two ways "( (T ^ F) | F )" and "( T ^ (F | F) )".
Input: symbol[] = {T, T, F, T}
operator[] = {|, &, ^}
Output: 4
The given expression is "T | T & F ^ T", it evaluates true
in 4 ways ((T|T)&(F^T)), (T|(T&(F^T))), (((T|T)&F)^T)
and (T|((T&F)^T)).
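These example counts can be confirmed with a short brute-force that tries every operator as the top-level split and recursively counts how many groupings of each side evaluate to true or false. This is an illustrative Python sketch added for verification, not part of the original solution.

```python
# Brute force: for symbols[i..j], return (#parenthesizations evaluating
# to True, #evaluating to False) by splitting at every operator.
OPS = {'&': lambda a, b: a and b,
       '|': lambda a, b: a or b,
       '^': lambda a, b: a != b}

def count_true(symbols, operators):
    def ways(i, j):
        if i == j:
            return (1, 0) if symbols[i] == 'T' else (0, 1)
        t = f = 0
        for k in range(i, j):           # operators[k] sits between k and k+1
            lt, lf = ways(i, k)
            rt, rf = ways(k + 1, j)
            for lv, lc in ((True, lt), (False, lf)):
                for rv, rc in ((True, rt), (False, rf)):
                    if OPS[operators[k]](lv, rv):
                        t += lc * rc
                    else:
                        f += lc * rc
        return t, f
    return ways(0, len(symbols) - 1)[0]

print(count_true("TFT", "^&"))    # 2
print(count_true("TTFT", "|&^"))  # 4
```

This exponential check is only practical for tiny inputs; the dynamic programming solution below computes the same counts efficiently.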
Solution: Let T(i, j) represent the number of ways to parenthesize the symbols between i and j (both inclusive) such that the subexpression between i and j evaluates to true.
Let F(i, j) represent the number of ways to parenthesize the symbols between i and j (both inclusive) such that the subexpression between i and j evaluates to false.
Base Cases:
T(i, i) = 1 if symbol[i] = 'T'
T(i, i) = 0 if symbol[i] = 'F'
F(i, i) = 1 if symbol[i] = 'F'
F(i, i) = 0 if symbol[i] = 'T'
Recursive formulas (sum over every split position k with i <= k < j, writing Total(i, j) = T(i, j) + F(i, j)):
If operator[k] is '&':
T(i, j) = Sum of T(i, k) * T(k+1, j)
F(i, j) = Sum of Total(i, k) * Total(k+1, j) - T(i, k) * T(k+1, j)
If operator[k] is '|':
T(i, j) = Sum of Total(i, k) * Total(k+1, j) - F(i, k) * F(k+1, j)
F(i, j) = Sum of F(i, k) * F(k+1, j)
If operator[k] is '^':
T(i, j) = Sum of T(i, k) * F(k+1, j) + F(i, k) * T(k+1, j)
F(i, j) = Sum of T(i, k) * T(k+1, j) + F(i, k) * F(k+1, j)
If we draw the recursion tree of the above recursive solution, we can observe that it has many overlapping subproblems. Like other dynamic programming problems, it can be solved by filling a table in a bottom-up manner. Following is the implementation of a dynamic programming solution.
C++
Java
Python
C#
Javascript
#include <cstring>#include <iostream>using namespace std; // Returns count of all possible// parenthesizations that lead// to result true for a boolean// expression with symbols like// true and false and operators// like &, | and ^ filled// between symbolsint countParenth(char symb[], char oper[], int n){ int F[n][n], T[n][n]; // Fill diagonal entries first // All diagonal entries in // T[i][i] are 1 if symbol[i] // is T (true). Similarly, // all F[i][i] entries are 1 if // symbol[i] is F (False) for (int i = 0; i < n; i++) { F[i][i] = (symb[i] == 'F') ? 1 : 0; T[i][i] = (symb[i] == 'T') ? 1 : 0; } // Now fill T[i][i+1], // T[i][i+2], T[i][i+3]... in order // And F[i][i+1], F[i][i+2], // F[i][i+3]... in order for (int gap = 1; gap < n; ++gap) { for (int i = 0, j = gap; j < n; ++i, ++j) { T[i][j] = F[i][j] = 0; for (int g = 0; g < gap; g++) { // Find place of parenthesization using // current value of gap int k = i + g; // Store Total[i][k] // and Total[k+1][j] int tik = T[i][k] + F[i][k]; int tkj = T[k + 1][j] + F[k + 1][j]; // Follow the recursive formulas // according // to the current operator if (oper[k] == '&') { T[i][j] += T[i][k] * T[k + 1][j]; F[i][j] += (tik * tkj - T[i][k] * T[k + 1][j]); } if (oper[k] == '|') { F[i][j] += F[i][k] * F[k + 1][j]; T[i][j] += (tik * tkj - F[i][k] * F[k + 1][j]); } if (oper[k] == '^') { T[i][j] += F[i][k] * T[k + 1][j] + T[i][k] * F[k + 1][j]; F[i][j] += T[i][k] * T[k + 1][j] + F[i][k] * F[k + 1][j]; } } } } return T[0][n - 1];} // Driver codeint main(){ char symbols[] = "TTFT"; char operators[] = "|&^"; int n = strlen(symbols); // There are 4 ways // ((T|T)&(F^T)), (T|(T&(F^T))), (((T|T)&F)^T) and // (T|((T&F)^T)) cout << countParenth(symbols, operators, n); return 0;}
class GFG { // Returns count of all possible // parenthesizations that lead to // result true for a boolean // expression with symbols like true // and false and operators like &, | // and ^ filled between symbols static int countParenth(char symb[], char oper[], int n) { int F[][] = new int[n][n]; int T[][] = new int[n][n]; // Fill diagonal entries first // All diagonal entries in T[i][i] // are 1 if symbol[i] is T (true). // Similarly, all F[i][i] entries // are 1 if symbol[i] is F (False) for (int i = 0; i < n; i++) { F[i][i] = (symb[i] == 'F') ? 1 : 0; T[i][i] = (symb[i] == 'T') ? 1 : 0; } // Now fill T[i][i+1], T[i][i+2], // T[i][i+3]... in order And F[i][i+1], // F[i][i+2], F[i][i+3]... in order for (int gap = 1; gap < n; ++gap) { for (int i = 0, j = gap; j < n; ++i, ++j) { T[i][j] = F[i][j] = 0; for (int g = 0; g < gap; g++) { // Find place of parenthesization // using current value of gap int k = i + g; // Store Total[i][k] // and Total[k+1][j] int tik = T[i][k] + F[i][k]; int tkj = T[k + 1][j] + F[k + 1][j]; // Follow the recursive formulas // according to the current operator if (oper[k] == '&') { T[i][j] += T[i][k] * T[k + 1][j]; F[i][j] += (tik * tkj - T[i][k] * T[k + 1][j]); } if (oper[k] == '|') { F[i][j] += F[i][k] * F[k + 1][j]; T[i][j] += (tik * tkj - F[i][k] * F[k + 1][j]); } if (oper[k] == '^') { T[i][j] += F[i][k] * T[k + 1][j] + T[i][k] * F[k + 1][j]; F[i][j] += T[i][k] * T[k + 1][j] + F[i][k] * F[k + 1][j]; } } } } return T[0][n - 1]; } // Driver code public static void main(String[] args) { char symbols[] = "TTFT".toCharArray(); char operators[] = "|&^".toCharArray(); int n = symbols.length; // There are 4 ways // ((T|T)&(F^T)), (T|(T&(F^T))), // (((T|T)&F)^T) and (T|((T&F)^T)) System.out.println( countParenth(symbols, operators, n)); }} // This code has been contributed// by 29AjayKumar
# Returns count of all possible
# parenthesizations that lead to
# result true for a boolean
# expression with symbols like
# true and false and operators
# like &, | and ^ filled between symbols

def countParenth(symb, oper, n):
    F = [[0 for i in range(n + 1)] for i in range(n + 1)]
    T = [[0 for i in range(n + 1)] for i in range(n + 1)]

    # Fill diagonal entries first
    # All diagonal entries in
    # T[i][i] are 1 if symbol[i]
    # is T (true). Similarly, all
    # F[i][i] entries are 1 if
    # symbol[i] is F (False)
    for i in range(n):
        if symb[i] == 'F':
            F[i][i] = 1
        else:
            F[i][i] = 0
        if symb[i] == 'T':
            T[i][i] = 1
        else:
            T[i][i] = 0

    # Now fill T[i][i+1], T[i][i+2],
    # T[i][i+3]... in order And
    # F[i][i+1], F[i][i+2],
    # F[i][i+3]... in order
    for gap in range(1, n):
        i = 0
        for j in range(gap, n):
            T[i][j] = F[i][j] = 0
            for g in range(gap):

                # Find place of parenthesization
                # using current value of gap
                k = i + g

                # Store Total[i][k] and Total[k+1][j]
                tik = T[i][k] + F[i][k]
                tkj = T[k + 1][j] + F[k + 1][j]

                # Follow the recursive formulas
                # according to the current operator
                if oper[k] == '&':
                    T[i][j] += T[i][k] * T[k + 1][j]
                    F[i][j] += (tik * tkj - T[i][k] * T[k + 1][j])
                if oper[k] == '|':
                    F[i][j] += F[i][k] * F[k + 1][j]
                    T[i][j] += (tik * tkj - F[i][k] * F[k + 1][j])
                if oper[k] == '^':
                    T[i][j] += (F[i][k] * T[k + 1][j] +
                                T[i][k] * F[k + 1][j])
                    F[i][j] += (T[i][k] * T[k + 1][j] +
                                F[i][k] * F[k + 1][j])
            i += 1
    return T[0][n - 1]

# Driver Code
symbols = "TTFT"
operators = "|&^"
n = len(symbols)

# There are 4 ways
# ((T|T)&(F^T)), (T|(T&(F^T))),
# (((T|T)&F)^T) and (T|((T&F)^T))
print(countParenth(symbols, operators, n))

# This code is contributed by
# sahil shelangia
// C# program of above approachusing System; class GFG{ // Returns count of all possible // parenthesizations that lead to // result true for a boolean // expression with symbols like true // and false and operators like &, | // and ^ filled between symbols static int countParenth(char []symb, char []oper, int n) { int [,]F = new int[n, n]; int [,]T = new int[n, n]; // Fill diagonal entries first // All diagonal entries in T[i,i] // are 1 if symbol[i] is T (true). // Similarly, all F[i,i] entries // are 1 if symbol[i] is F (False) for (int i = 0; i < n; i++) { F[i,i] = (symb[i] == 'F') ? 1 : 0; T[i,i] = (symb[i] == 'T') ? 1 : 0; } // Now fill T[i,i+1], T[i,i+2], // T[i,i+3]... in order And F[i,i+1], // F[i,i+2], F[i,i+3]... in order for (int gap = 1; gap < n; ++gap) { for (int i = 0, j = gap; j < n; ++i, ++j) { T[i, j] = F[i, j] = 0; for (int g = 0; g < gap; g++) { // Find place of parenthesization // using current value of gap int k = i + g; // Store Total[i,k] and Total[k+1,j] int tik = T[i, k] + F[i, k]; int tkj = T[k + 1, j] + F[k + 1, j]; // Follow the recursive formulas // according to the current operator if (oper[k] == '&') { T[i, j] += T[i, k] * T[k + 1, j]; F[i, j] += (tik * tkj - T[i, k] * T[k + 1, j]); } if (oper[k] == '|') { F[i,j] += F[i, k] * F[k + 1, j]; T[i,j] += (tik * tkj - F[i, k] * F[k + 1, j]); } if (oper[k] == '^') { T[i, j] += F[i, k] * T[k + 1, j] + T[i, k] * F[k + 1, j]; F[i, j] += T[i, k] * T[k + 1, j] + F[i, k] * F[k + 1, j]; } } } } return T[0,n - 1]; } // Driver code public static void Main() { char []symbols = "TTFT".ToCharArray(); char []operators = "|&^".ToCharArray(); int n = symbols.Length; // There are 4 ways // ((T|T)&(F^T)), (T|(T&(F^T))), // (((T|T)&F)^T) and (T|((T&F)^T)) Console.WriteLine(countParenth(symbols, operators, n)); }} /* This code contributed by PrinciRaj1992 */
<script> // Returns count of all possible // parenthesizations that lead to // result true for a boolean // expression with symbols like true // and false and operators like &, | // and ^ filled between symbols function countParenth(symb, oper, n) { let F = new Array(n); let T = new Array(n); for (let i = 0; i < n; i++) { F[i] = new Array(n); T[i] = new Array(n); for(let j = 0; j < n; j++) { F[i][j] = 0; T[i][j] = 0; } } // Fill diagonal entries first // All diagonal entries in T[i][i] // are 1 if symbol[i] is T (true). // Similarly, all F[i][i] entries // are 1 if symbol[i] is F (False) for (let i = 0; i < n; i++) { F[i][i] = (symb[i] == 'F') ? 1 : 0; T[i][i] = (symb[i] == 'T') ? 1 : 0; } // Now fill T[i][i+1], T[i][i+2], // T[i][i+3]... in order And F[i][i+1], // F[i][i+2], F[i][i+3]... in order for (let gap = 1; gap < n; ++gap) { for (let i = 0, j = gap; j < n; ++i, ++j) { T[i][j] = F[i][j] = 0; for (let g = 0; g < gap; g++) { // Find place of parenthesization // using current value of gap let k = i + g; // Store Total[i][k] // and Total[k+1][j] let tik = T[i][k] + F[i][k]; let tkj = T[k + 1][j] + F[k + 1][j]; // Follow the recursive formulas // according to the current operator if (oper[k] == '&') { T[i][j] += T[i][k] * T[k + 1][j]; F[i][j] += (tik * tkj - T[i][k] * T[k + 1][j]); } if (oper[k] == '|') { F[i][j] += F[i][k] * F[k + 1][j]; T[i][j] += (tik * tkj - F[i][k] * F[k + 1][j]); } if (oper[k] == '^') { T[i][j] += F[i][k] * T[k + 1][j] + T[i][k] * F[k + 1][j]; F[i][j] += T[i][k] * T[k + 1][j] + F[i][k] * F[k + 1][j]; } } } } return T[0][n - 1]; } let symbols = "TTFT".split(''); let operators = "|&^".split(''); let n = symbols.length; // There are 4 ways // ((T|T)&(F^T)), (T|(T&(F^T))), // (((T|T)&F)^T) and (T|((T&F)^T)) document.write(countParenth(symbols, operators, n)); // This code is contributed by mukesh07.</script>
Output:
4
Time Complexity: O(n^3) Auxiliary Space: O(n^2)
Approach 2:
We can also use a recursive approach (top-down DP); this approach uses memoization.
C++
Java
Python3
C#
Javascript
#include <bits/stdc++.h>using namespace std; int dp[101][101][2];int parenthesis_count(string s, int i, int j, int isTrue){ // Base Condition if (i > j) return false; if (i == j) { if (isTrue == 1) return s[i] == 'T'; else return s[i] == 'F'; } if (dp[i][j][isTrue] != -1) return dp[i][j][isTrue]; int ans = 0; for (int k = i + 1 ; k <= j - 1; k = k + 2) { int leftF, leftT, rightT, rightF; if (dp[i][k - 1][1] == -1) { leftT = parenthesis_count(s, i, k - 1, 1); } // Count no. of T in left partition else { leftT = dp[i][k - 1][1]; } if (dp[k + 1][j][1] == -1) { rightT = parenthesis_count(s, k + 1, j, 1); } // Count no. of T in right partition else { rightT = dp[k + 1][j][1]; } if (dp[i][k - 1][0] == -1) { // Count no. of F in left partition leftF = parenthesis_count(s, i, k - 1, 0); } else { leftF = dp[i][k - 1][0]; } if (dp[k + 1][j][0] == -1) { // Count no. of F in right partition rightF = parenthesis_count(s, k + 1, j, 0); } else { rightF = dp[k + 1][j][0]; } if (s[k] == '&') { if (isTrue == 1) ans += leftT * rightT; else ans += leftF * rightF + leftT * rightF + leftF * rightT; } else if (s[k] == '|') { if (isTrue == 1) ans += leftT * rightT + leftT * rightF + leftF * rightT; else ans = ans + leftF * rightF; } else if (s[k] == '^') { if (isTrue == 1) ans = ans + leftF * rightT + leftT * rightF; else ans = ans + leftT * rightT + leftF * rightF; } dp[i][j][isTrue] = ans; } return ans;} // Driver Codeint main(){ string symbols = "TTFT"; string operators = "|&^"; string s; int j = 0; for (int i = 0; i < symbols.length(); i++) { s.push_back(symbols[i]); if (j < operators.length()) s.push_back(operators[j++]); } // We obtain the string T|T&F^T int n = s.length(); // There are 4 ways // ((T|T)&(F^T)), (T|(T&(F^T))), (((T|T)&F)^T) and // (T|((T&F)^T)) memset(dp, -1, sizeof(dp)); cout << parenthesis_count(s, 0, n - 1, 1); return 0;}
import java.io.*;import java.util.*; class GFG { public static int countWays(int N, String S) { int dp[][][] = new int[N + 1][N + 1][2]; for (int row[][] : dp) for (int col[] : row) Arrays.fill(col, -1); return parenthesis_count(S, 0, N - 1, 1, dp); } public static int parenthesis_count(String str, int i, int j, int isTrue, int[][][] dp) { if (i > j) return 0; if (i == j) { if (isTrue == 1) { return (str.charAt(i) == 'T') ? 1 : 0; } else { return (str.charAt(i) == 'F') ? 1 : 0; } } if (dp[i][j][isTrue] != -1) return dp[i][j][isTrue]; int temp_ans = 0; int leftTrue, rightTrue, leftFalse, rightFalse; for (int k = i + 1; k <= j - 1; k = k + 2) { if (dp[i][k - 1][1] != -1) leftTrue = dp[i][k - 1][1]; else { // Count number of True in left Partition leftTrue = parenthesis_count(str, i, k - 1, 1, dp); } if (dp[i][k - 1][0] != -1) leftFalse = dp[i][k - 1][0]; else { // Count number of False in left Partition leftFalse = parenthesis_count(str, i, k - 1, 0, dp); } if (dp[k + 1][j][1] != -1) rightTrue = dp[k + 1][j][1]; else { // Count number of True in right Partition rightTrue = parenthesis_count(str, k + 1, j, 1, dp); } if (dp[k + 1][j][0] != -1) rightFalse = dp[k + 1][j][0]; else { // Count number of False in right Partition rightFalse = parenthesis_count(str, k + 1, j, 0, dp); } // Evaluate AND operation if (str.charAt(k) == '&') { if (isTrue == 1) { temp_ans = temp_ans + leftTrue * rightTrue; } else { temp_ans = temp_ans + leftTrue * rightFalse + leftFalse * rightTrue + leftFalse * rightFalse; } } // Evaluate OR operation else if (str.charAt(k) == '|') { if (isTrue == 1) { temp_ans = temp_ans + leftTrue * rightTrue + leftTrue * rightFalse + leftFalse * rightTrue; } else { temp_ans = temp_ans + leftFalse * rightFalse; } } // Evaluate XOR operation else if (str.charAt(k) == '^') { if (isTrue == 1) { temp_ans = temp_ans + leftTrue * rightFalse + leftFalse * rightTrue; } else { temp_ans = temp_ans + leftTrue * rightTrue + leftFalse * rightFalse; } } dp[i][j][isTrue] = 
temp_ans; } return temp_ans; } // Driver code public static void main(String[] args) { String symbols = "TTFT"; String operators = "|&^"; StringBuilder S = new StringBuilder(); int j = 0; for (int i = 0; i < symbols.length(); i++) { S.append(symbols.charAt(i)); if (j < operators.length()) S.append(operators.charAt(j++)); } // We obtain the string T|T&F^T int N = S.length(); // There are 4 ways // ((T|T)&(F^T)), (T|(T&(F^T))), (((T|T)&F)^T) and // (T|((T&F)^T)) System.out.println(countWays(N, S.toString())); }} // This code is contributed by farheenbano.
def parenthesis_count(Str, i, j, isTrue, dp):
    if (i > j):
        return 0
    if (i == j):
        if (isTrue == 1):
            return 1 if Str[i] == 'T' else 0
        else:
            return 1 if Str[i] == 'F' else 0
    if (dp[i][j][isTrue] != -1):
        return dp[i][j][isTrue]
    temp_ans = 0
    for k in range(i + 1, j, 2):
        if (dp[i][k - 1][1] != -1):
            leftTrue = dp[i][k - 1][1]
        else:
            # Count number of True in left Partition
            leftTrue = parenthesis_count(Str, i, k - 1, 1, dp)
        if (dp[i][k - 1][0] != -1):
            leftFalse = dp[i][k - 1][0]
        else:
            # Count number of False in left Partition
            leftFalse = parenthesis_count(Str, i, k - 1, 0, dp)
        if (dp[k + 1][j][1] != -1):
            rightTrue = dp[k + 1][j][1]
        else:
            # Count number of True in right Partition
            rightTrue = parenthesis_count(Str, k + 1, j, 1, dp)
        if (dp[k + 1][j][0] != -1):
            rightFalse = dp[k + 1][j][0]
        else:
            # Count number of False in right Partition
            rightFalse = parenthesis_count(Str, k + 1, j, 0, dp)

        # Evaluate AND operation
        if (Str[k] == '&'):
            if (isTrue == 1):
                temp_ans = temp_ans + leftTrue * rightTrue
            else:
                temp_ans = (temp_ans + leftTrue * rightFalse +
                            leftFalse * rightTrue + leftFalse * rightFalse)

        # Evaluate OR operation
        elif (Str[k] == '|'):
            if (isTrue == 1):
                temp_ans = (temp_ans + leftTrue * rightTrue +
                            leftTrue * rightFalse + leftFalse * rightTrue)
            else:
                temp_ans = temp_ans + leftFalse * rightFalse

        # Evaluate XOR operation
        elif (Str[k] == '^'):
            if (isTrue == 1):
                temp_ans = temp_ans + leftTrue * rightFalse + leftFalse * rightTrue
            else:
                temp_ans = temp_ans + leftTrue * rightTrue + leftFalse * rightFalse
        dp[i][j][isTrue] = temp_ans
    return temp_ans

def countWays(N, S):
    dp = [[[-1 for k in range(2)] for i in range(N + 1)] for j in range(N + 1)]
    return parenthesis_count(S, 0, N - 1, 1, dp)

symbols = "TTFT"
operators = "|&^"
S = ""
j = 0
for i in range(len(symbols)):
    S = S + symbols[i]
    if (j < len(operators)):
        S = S + operators[j]
        j += 1

# We obtain the string T|T&F^T
N = len(S)

# There are 4 ways
# ((T|T)&(F^T)), (T|(T&(F^T))), (((T|T)&F)^T) and
# (T|((T&F)^T))
print(countWays(N, S))

# This code is contributed by divyesh072019
using System;

class GFG {

    static int parenthesis_count(string str, int i, int j,
                                 int isTrue, int[,,] dp)
    {
        if (i > j)
            return 0;
        if (i == j) {
            if (isTrue == 1) {
                return (str[i] == 'T') ? 1 : 0;
            }
            else {
                return (str[i] == 'F') ? 1 : 0;
            }
        }
        if (dp[i, j, isTrue] != -1)
            return dp[i, j, isTrue];

        int temp_ans = 0;
        int leftTrue, rightTrue, leftFalse, rightFalse;

        for (int k = i + 1; k <= j - 1; k = k + 2) {
            if (dp[i, k - 1, 1] != -1)
                leftTrue = dp[i, k - 1, 1];
            else {
                // Count number of True in left Partition
                leftTrue = parenthesis_count(str, i, k - 1, 1, dp);
            }
            if (dp[i, k - 1, 0] != -1)
                leftFalse = dp[i, k - 1, 0];
            else {
                // Count number of False in left Partition
                leftFalse = parenthesis_count(str, i, k - 1, 0, dp);
            }
            if (dp[k + 1, j, 1] != -1)
                rightTrue = dp[k + 1, j, 1];
            else {
                // Count number of True in right Partition
                rightTrue = parenthesis_count(str, k + 1, j, 1, dp);
            }
            if (dp[k + 1, j, 0] != -1)
                rightFalse = dp[k + 1, j, 0];
            else {
                // Count number of False in right Partition
                rightFalse = parenthesis_count(str, k + 1, j, 0, dp);
            }

            // Evaluate AND operation
            if (str[k] == '&') {
                if (isTrue == 1) {
                    temp_ans = temp_ans + leftTrue * rightTrue;
                }
                else {
                    temp_ans = temp_ans + leftTrue * rightFalse
                               + leftFalse * rightTrue
                               + leftFalse * rightFalse;
                }
            }
            // Evaluate OR operation
            else if (str[k] == '|') {
                if (isTrue == 1) {
                    temp_ans = temp_ans + leftTrue * rightTrue
                               + leftTrue * rightFalse
                               + leftFalse * rightTrue;
                }
                else {
                    temp_ans = temp_ans + leftFalse * rightFalse;
                }
            }
            // Evaluate XOR operation
            else if (str[k] == '^') {
                if (isTrue == 1) {
                    temp_ans = temp_ans + leftTrue * rightFalse
                               + leftFalse * rightTrue;
                }
                else {
                    temp_ans = temp_ans + leftTrue * rightTrue
                               + leftFalse * rightFalse;
                }
            }
            dp[i, j, isTrue] = temp_ans;
        }
        return temp_ans;
    }

    static int countWays(int N, string S)
    {
        int[,,] dp = new int[N + 1, N + 1, 2];
        for (int i = 0; i < (N + 1); i++) {
            for (int j = 0; j < (N + 1); j++) {
                for (int k = 0; k < 2; k++) {
                    dp[i, j, k] = -1;
                }
            }
        }
        return parenthesis_count(S, 0, N - 1, 1, dp);
    }

    // Driver code
    static void Main()
    {
        string symbols = "TTFT";
        string operators = "|&^";
        string S = "";
        int j = 0;
        for (int i = 0; i < symbols.Length; i++) {
            S = S + symbols[i];
            if (j < operators.Length)
                S = S + operators[j++];
        }

        // We obtain the string T|T&F^T
        int N = S.Length;

        // There are 4 ways
        // ((T|T)&(F^T)), (T|(T&(F^T))), (((T|T)&F)^T) and
        // (T|((T&F)^T))
        Console.WriteLine(countWays(N, S));
    }
}

// This code is contributed by divyeshrabadiya07.
<script>
function countWays(N, S)
{
    let dp = new Array(N + 1);
    for (let i = 0; i < N + 1; i++) {
        dp[i] = new Array(N + 1);
        for (let j = 0; j < N + 1; j++) {
            dp[i][j] = new Array(2);
            for (let k = 0; k < 2; k++)
                dp[i][j][k] = -1;
        }
    }
    return parenthesis_count(S, 0, N - 1, 1, dp);
}

function parenthesis_count(str, i, j, isTrue, dp)
{
    if (i > j)
        return 0;
    if (i == j) {
        if (isTrue == 1) {
            return (str[i] == 'T') ? 1 : 0;
        }
        else {
            return (str[i] == 'F') ? 1 : 0;
        }
    }
    if (dp[i][j][isTrue] != -1)
        return dp[i][j][isTrue];

    let temp_ans = 0;
    let leftTrue, rightTrue, leftFalse, rightFalse;

    for (let k = i + 1; k <= j - 1; k = k + 2) {
        if (dp[i][k - 1][1] != -1)
            leftTrue = dp[i][k - 1][1];
        else {
            // Count number of True in left Partition
            leftTrue = parenthesis_count(str, i, k - 1, 1, dp);
        }
        if (dp[i][k - 1][0] != -1)
            leftFalse = dp[i][k - 1][0];
        else {
            // Count number of False in left Partition
            leftFalse = parenthesis_count(str, i, k - 1, 0, dp);
        }
        if (dp[k + 1][j][1] != -1)
            rightTrue = dp[k + 1][j][1];
        else {
            // Count number of True in right Partition
            rightTrue = parenthesis_count(str, k + 1, j, 1, dp);
        }
        if (dp[k + 1][j][0] != -1)
            rightFalse = dp[k + 1][j][0];
        else {
            // Count number of False in right Partition
            rightFalse = parenthesis_count(str, k + 1, j, 0, dp);
        }

        // Evaluate AND operation
        if (str[k] == '&') {
            if (isTrue == 1) {
                temp_ans = temp_ans + leftTrue * rightTrue;
            }
            else {
                temp_ans = temp_ans + leftTrue * rightFalse
                           + leftFalse * rightTrue
                           + leftFalse * rightFalse;
            }
        }
        // Evaluate OR operation
        else if (str[k] == '|') {
            if (isTrue == 1) {
                temp_ans = temp_ans + leftTrue * rightTrue
                           + leftTrue * rightFalse
                           + leftFalse * rightTrue;
            }
            else {
                temp_ans = temp_ans + leftFalse * rightFalse;
            }
        }
        // Evaluate XOR operation
        else if (str[k] == '^') {
            if (isTrue == 1) {
                temp_ans = temp_ans + leftTrue * rightFalse
                           + leftFalse * rightTrue;
            }
            else {
                temp_ans = temp_ans + leftTrue * rightTrue
                           + leftFalse * rightFalse;
            }
        }
        dp[i][j][isTrue] = temp_ans;
    }
    return temp_ans;
}

// Driver code
let symbols = "TTFT";
let operators = "|&^";
let S = [];
let j = 0;
for (let i = 0; i < symbols.length; i++) {
    S.push(symbols[i]);
    if (j < operators.length)
        S.push(operators[j++]);
}

// We obtain the string T|T&F^T
let N = S.length;

// There are 4 ways
// ((T|T)&(F^T)), (T|(T&(F^T))), (((T|T)&F)^T) and
// (T|((T&F)^T))
document.write(countWays(N, S.join("")));

// This code is contributed by avanitrachhadiya2155
</script>
4
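The reported count can be cross-checked with a short memoized counter in Python. This is an independent sketch, not one of the article's listings; the function name count_true is my own. Here ways(i, j) returns the pair (true_count, false_count) for the sub-expression between positions i and j, mirroring the leftTrue/leftFalse bookkeeping in the listings above.

```python
from functools import lru_cache

OPS = {
    "&": lambda a, b: a and b,
    "|": lambda a, b: a or b,
    "^": lambda a, b: a != b,
}

def count_true(expr):
    # Symbols sit at even indices of expr, operators at odd indices.
    @lru_cache(maxsize=None)
    def ways(i, j):
        # Returns (#parenthesizations evaluating True, #evaluating False)
        if i == j:
            return (1, 0) if expr[i] == "T" else (0, 1)
        t = f = 0
        for k in range(i + 1, j, 2):  # split at each operator
            lt, lf = ways(i, k - 1)
            rt, rf = ways(k + 1, j)
            for a, ac in ((True, lt), (False, lf)):
                for b, bc in ((True, rt), (False, rf)):
                    if OPS[expr[k]](a, b):
                        t += ac * bc
                    else:
                        f += ac * bc
        return t, f

    return ways(0, len(expr) - 1)[0]

print(count_true("T|T&F^T"))  # 4
```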
References: http://people.cs.clemson.edu/~bcdean/dp_practice/dp_9.swf
sahilshelangia
29AjayKumar
princiraj1992
bhavneet2000
farheenbano
divyeshrabadiya07
divyesh072019
mukesh07
avanitrachhadiya2155
arorakashish0911
Amazon
Linkedin
Microsoft
Dynamic Programming
How to create a column with a binary variable based on a condition on another variable in an R data frame?

Sometimes we need to create an extra variable because it adds information, and hence value, to the present data. This is especially common while doing feature engineering. If we know about something that may affect our response, we prefer to include it as a variable, so we construct it from the data we already have. For example, we can create a new variable by applying a condition to another variable, such as a binary variable for goodness that indicates whether the frequency meets a certain criterion.
Consider the below data frame −
Live Demo
set.seed(100)
Group<-rep(c("A","B","C","D","E"),times=4)
Frequency<-sample(20:30,20,replace=TRUE)
df1<-data.frame(Group,Frequency)
df1
Group Frequency
1 A 29
2 B 26
3 C 25
4 D 22
5 E 28
6 A 29
7 B 26
8 C 25
9 D 25
10 E 23
11 A 26
12 B 25
13 C 21
14 D 26
15 E 26
16 A 26
17 B 30
18 C 27
19 D 21
20 E 22
Creating a column Category having two levels, Good and Bad, where Good is assigned to the rows whose Frequency is greater than 25 −
df1$Category<-ifelse(df1$Frequency>25,"Good","Bad")
df1
Group Frequency Category
1 A 29 Good
2 B 26 Good
3 C 25 Bad
4 D 22 Bad
5 E 28 Good
6 A 29 Good
7 B 26 Good
8 C 25 Bad
9 D 25 Bad
10 E 23 Bad
11 A 26 Good
12 B 25 Bad
13 C 21 Bad
14 D 26 Good
15 E 26 Good
16 A 26 Good
17 B 30 Good
18 C 27 Good
19 D 21 Bad
20 E 22 Bad
Let’s have a look at another example −
Live Demo
Class<-rep(c("Lower","Middle","Upper Middle","Higher"),times=5)
Ratings<-sample(1:10,20,replace=TRUE)
df2<-data.frame(Class,Ratings)
df2
Class Ratings
1 Lower 3
2 Middle 8
3 Upper Middle 2
4 Higher 9
5 Lower 2
6 Middle 3
7 Upper Middle 4
8 Higher 4
9 Lower 4
10 Middle 5
11 Upper Middle 7
12 Higher 9
13 Lower 4
14 Middle 2
15 Upper Middle 6
16 Higher 7
17 Lower 1
18 Middle 6
19 Upper Middle 9
20 Higher 9
df2$Group<-ifelse(df2$Ratings>5,"Royal","Standard")
df2
Class Ratings Group
1 Lower 3 Standard
2 Middle 8 Royal
3 Upper Middle 2 Standard
4 Higher 9 Royal
5 Lower 2 Standard
6 Middle 3 Standard
7 Upper Middle 4 Standard
8 Higher 4 Standard
9 Lower 4 Standard
10 Middle 5 Standard
11 Upper Middle 7 Royal
12 Higher 9 Royal
13 Lower 4 Standard
14 Middle 2 Standard
15 Upper Middle 6 Royal
16 Higher 7 Royal
17 Lower 1 Standard
18 Middle 6 Royal
19 Upper Middle 9 Royal
20       Higher       9    Royal
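For comparison, the same conditional-column idea can be sketched in plain Python, where a list comprehension plays the role of R's vectorized ifelse(). The Frequency values below copy the first example above; everything else is illustration only.

```python
group = ["A", "B", "C", "D", "E"] * 4
frequency = [29, 26, 25, 22, 28, 29, 26, 25, 25, 23,
             26, 25, 21, 26, 26, 26, 30, 27, 21, 22]

# Same rule as ifelse(df1$Frequency > 25, "Good", "Bad")
category = ["Good" if f > 25 else "Bad" for f in frequency]

for g, f, c in zip(group, frequency, category):
    print(g, f, c)
```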
How to detect scroll up and scroll down in android listView? | This example demonstrates how do I detect scroll up and down in android listView.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:gravity="center"
android:padding="8dp"
tools:context=".MainActivity">
<ListView
android:id="@+id/listView"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
</LinearLayout>
Step 3 − Add the following code to src/MainActivity.java
import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import android.widget.AbsListView;
import android.widget.ArrayAdapter;
import android.widget.ListView;
import android.widget.Toast;
public class MainActivity extends AppCompatActivity {
   ListView listView;
   String[] numbers = {"1","2","3","4", "5","6","7","8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20" , "21", "22", "23", "24", "25"};
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
ArrayAdapter adapter = new ArrayAdapter<>(this,
R.layout.support_simple_spinner_dropdown_item, numbers);
listView = findViewById(R.id.listView);
listView.setAdapter(adapter);
listView.setOnScrollListener(new AbsListView.OnScrollListener(){
private int lastFirstVisibleItem;
@Override
public void onScrollStateChanged(AbsListView view, int scrollState) {
}
@Override
public void onScroll(AbsListView view, int firstVisibleItem, int visibleItemCount, int totalItemCount) {
if(lastFirstVisibleItem<firstVisibleItem){
Toast.makeText(getApplicationContext(), "Scrolling down the listView",
Toast.LENGTH_SHORT).show();
}
if(lastFirstVisibleItem>firstVisibleItem){
Toast.makeText(getApplicationContext(), "Scrolling up the listView",
Toast.LENGTH_SHORT).show();
}
lastFirstVisibleItem=firstVisibleItem;
}
});
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="app.com.sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen −
Alternatively merging two arrays - JavaScript

We are required to write a JavaScript function that takes in two arrays and merges them, taking elements alternately from each array.
For example −
If the two arrays are −
const arr1 = [4, 3, 2, 5, 6, 8, 9];
const arr2 = [2, 1, 6, 8, 9, 4, 3];
Then the output should be −
const output = [4, 2, 3, 1, 2, 6, 5, 8, 6, 9, 8, 4, 9, 3];
Following is the code −
const arr1 = [4, 3, 2, 5, 6, 8, 9];
const arr2 = [2, 1, 6, 8, 9, 4, 3];
const mergeAlternately = (arr1, arr2) => {
   const res = [];
   for(let i = 0; i < arr1.length + arr2.length; i++){
      // Even positions take from arr1, odd positions from arr2
      if(i % 2 === 0){
         res.push(arr1[i/2]);
      }else{
         res.push(arr2[(i-1)/2]);
      };
   };
   return res;
};
console.log(mergeAlternately(arr1, arr2));
This will produce the following output in console −
[
4, 2, 3, 1, 2, 6,
5, 8, 6, 9, 8, 4,
9, 3
]
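The same alternating merge can be sketched in Python. This variant goes slightly beyond the article (an assumption of mine): it also handles arrays of unequal length by appending the leftover tail of the longer array.

```python
def merge_alternately(a, b):
    res = []
    # Interleave elements while both lists still have some left
    for x, y in zip(a, b):
        res.extend([x, y])
    # Append whatever remains of the longer list
    longer = a if len(a) > len(b) else b
    res.extend(longer[min(len(a), len(b)):])
    return res

print(merge_alternately([4, 3, 2, 5, 6, 8, 9], [2, 1, 6, 8, 9, 4, 3]))
# [4, 2, 3, 1, 2, 6, 5, 8, 6, 9, 8, 4, 9, 3]
```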
Subquery in SQL

A subquery is a query within a query, i.e., a nested query. It is placed inside another query, and its result is used to further evaluate the outer query.
There are some rules that a subquery must follow in SQL. Some of these are −
The subquery should be placed within parenthesis.
The subquery can be used with comparison operators like <, >, <=, >= as well as IN, BETWEEN, etc. Subqueries can also appear inside SELECT, INSERT, UPDATE and DELETE statements.
The ORDER BY clause cannot be used in a subquery, although it can be used in the main query.
A subquery cannot be directly enclosed in a BETWEEN operator, but the subquery itself can contain a BETWEEN operator.
A subquery that returns more than one row cannot be used with single-value comparison operators. It can only be used with operators that accept multiple values, like IN.
An example of subqueries in SQL is −
<Student>
Select *
from student
where student_marks IN( select student_marks from student where student_marks>50)
This query will return details about all the students who have more than 50 marks, i.e., Andrew, Sara and Megan.
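The IN-subquery pattern above can be exercised end to end with Python's built-in sqlite3 module. The table and column names mirror the example, while the sample rows and marks are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (student_name TEXT, student_marks INTEGER)")
con.executemany(
    "INSERT INTO student VALUES (?, ?)",
    [("Andrew", 71), ("Sara", 62), ("Megan", 55), ("Rob", 45)],
)

# The outer query keeps only rows whose marks appear in the subquery's result
rows = con.execute(
    "SELECT student_name FROM student"
    " WHERE student_marks IN (SELECT student_marks FROM student"
    "                         WHERE student_marks > 50)"
).fetchall()
print(rows)  # the three students with more than 50 marks
```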
Dimensionality Reduction — Can PCA improve the performance of a classification model? | by Satyam Kumar | Towards Data Science

Principal Component Analysis (PCA) is a common feature extraction technique in data science that employs matrix factorization to reduce the dimensionality of data into lower space.
In real-world datasets, there are often too many features. The higher the number of features, the harder it is to visualize the data and work with it. Moreover, many of the features are often correlated, and hence redundant. This is where feature extraction comes into play.
The dataset used in this article is the Ionosphere dataset from the UCI Machine Learning Repository. It is a binary-class classification problem with 351 observations and 34 features.
Importing necessary libraries and reading the dataset
Preprocessing of dataset
Standardization
The training data has 34 features.
After preprocessing, the training data is fit with a Logistic Regression model for binary classification
Finetuning Logistic Regression model to find the best parameters
Compute training and test accuracy and f1 score.
Training an LR model using 34 features for c=10**0
Compute training and test accuracy and f1 score
Results obtained by training the entire “X_train” data having 34 features,
the test f1-score is 0.90, as 14 values are misclassified as observed in the confusion matrix.
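The baseline steps above (standardize, fine-tune C, fit, compute the F1-score) can be sketched with scikit-learn. Since the Ionosphere file is not bundled here, this sketch substitutes a synthetic binary-classification dataset of the same shape; names such as X_train are assumptions, not the author's code.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler

# Stand-in for the Ionosphere data: 351 observations, 34 features
X, y = make_classification(n_samples=351, n_features=34, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardization: fit on the training split only, then apply to both
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Fine-tune the inverse regularization strength C over powers of 10
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [10.0**k for k in range(-3, 4)]},
                    scoring="f1", cv=5)
grid.fit(X_train, y_train)

print("best C:", grid.best_params_["C"])
print("test f1:", round(f1_score(y_test, grid.predict(X_test)), 3))
```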
To extract features from the dataset using the PCA technique, firstly we need to find the percentage of variance explained as dimensionality decreases.
Notations: λ is an eigenvalue, d is the number of dimensions of the original dataset, and k is the number of dimensions of the new feature space.
From the above plot, it is observed that for 15 dimensions the percentage of variance explained is 90%. This means we are preserving 90% of variance by projecting higher dimensionality (34) into lower space (15).
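Choosing k from the explained-variance curve can be sketched with scikit-learn's PCA. The random matrix below is only a stand-in for the standardized training data, so the printed k will differ from the article's 15.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(263, 34))  # stand-in for the standardized train split

pca = PCA().fit(X_train)
cum_var = np.cumsum(pca.explained_variance_ratio_)

# Smallest k whose cumulative explained variance reaches 90%
k = int(np.searchsorted(cum_var, 0.90) + 1)
print("k for 90% variance:", k)
```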
Now the training data after PCA dimensionality reduction has 15 features.
After preprocessing, the training data is fit with a Logistic Regression model for binary classification
Finetuning Logistic Regression model to find the best parameters
Compute training and test accuracy and f1 score.
Training an LR model using 15 features for c=10**0
Compute training and test accuracy and f1 score
Results obtained by training the PCA data with 15 features,
the test f1-score is 0.896, as 12 values are misclassified as observed in the confusion matrix.
After concatenating original data with 34 features and PCA data with 15 features, we form a dataset of 49 features.
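The concatenation step can be sketched with numpy.hstack. The array shapes follow the article (34 original features plus 15 PCA components), but the data itself is a random stand-in.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_train = rng.normal(size=(263, 34))  # stand-in for the 34 original features

X_train_pca = PCA(n_components=15).fit_transform(X_train)
X_train_combined = np.hstack([X_train, X_train_pca])

print(X_train_combined.shape)  # (263, 49): 34 original + 15 extracted
```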
After preprocessing, the training data is fit with a Logistic Regression model for binary classification
Finetuning Logistic Regression model to find the best parameters
Compute training and test accuracy and f1 score.
Training an LR model using the 49 combined features for c=10**0
Compute training and test accuracy and f1 score
From the above pretty table, we can observe that,
An LR model trained using the raw preprocessed dataset with 34 features, we get an F1-score of 90%.
An LR model trained using only the 15 features extracted by PCA, we get an F1-score of 89%.
An LR model trained with a combination of the above two data, we get an F1-score of 92%.
Let’s observe the change in Confusion Matrix results for the above-mentioned 3 models.
Hence we conclude that using only the PCA-extracted features, which amount to fewer than half the number of features in the original data, we get an F1-score about 1% lower. But if we combine both sets of features, we improve the metric by about 2%, for a final F1-score of 92%.
Click below to get code:
colab.research.google.com
Thank You for Reading