A Guide to Mining and Analysing Tweets with R | by Céline Van den Rul | Towards Data Science
Twitter provides us with vast amounts of user-generated language data — a dream for anyone wanting to conduct textual analysis. More than that, tweets allow us to gain insights into online public behaviour. As such, analysing Twitter has become a crucial source of information for brands and agencies. Several factors give Twitter considerable advantages over other social media platforms for analysis. First, the limited character size of tweets provides us with a relatively homogeneous corpus. Second, the millions of tweets published every day allow access to large data samples. Third, tweets are publicly available, easily accessible, and retrievable via APIs. Nonetheless, extracting these insights still requires a bit of coding and programming knowledge. This is why, most often, brands and agencies rely on easy-to-use analytics tools such as SproutSocial and Talkwalker, which provide these insights at a cost in just one click. In this article, I help you break down these barriers and provide you with a simple guide on how to extract and analyse tweets with the programming language R. Here are 3 reasons why you might choose to do so:
R is free, i.e. you will be able to produce a Twitter Analytics Report for free and learn how to code at the same time!
R allows you infinite opportunities for analysis. Using it to analyse Twitter therefore allows you to conduct tailor-made analysis depending on what you wish to analyse, instead of relying on a one-size-fits-all report.
R allows you to analyse any Twitter account you want, even if you don't have the log-in details. This is a huge advantage compared to many analytics tools that require the log-in details in order to access the information in the first place.
Convinced? Let's get started, then! In order to get started, you first need access to the Twitter API. This will allow you to retrieve the tweets — without it, you cannot do anything. Getting Twitter API access is easy.
First make sure you have a Twitter account, otherwise create one. Then, apply for a developer account via the following website: https://developer.twitter.com/en/apply-for-access.html. You'll need to fill in an application form, which includes explaining a little more about what you wish to analyse. Once your application has been accepted by Twitter (which doesn't take too long), you'll receive the following credentials that you need to keep safe: Consumer Key, Consumer Secret, Access Token and Access Secret. Once you have the information above, start R and install the package "rtweet", which I will use to extract the tweets.
install.packages("rtweet")
library(rtweet)
Then, set up the authentication to connect to Twitter. You do this by entering the name of your app, consumer key and consumer secret — all information you received when applying for Twitter API access. You will be re-directed to a Twitter page and asked to accept the authentication. Once this is done, you can return to R and start the analysis of your tweets!
twitter_token <- create_token(
  app = ****,
  consumer_key = ****,
  consumer_secret = ****,
  set_renv = TRUE)
Depending on the analysis you wish to perform, you may want to search for tweets that contain a specific word or hashtag. Note that you can only extract tweets from the past 6 to 9 days, so keep this in mind for your analysis. To do this, simply use the search_tweets function followed by a few specifications: the number of tweets to extract (n), whether or not to include retweets, and the language of the tweets. As an example, see the line of code below.
climate <- search_tweets("climate", n = 1000, include_rts = FALSE, lang = "en")
Alternatively, you may want to analyse a specific user account. In this case, use the get_timeline function followed by the Twitter handle and the number of tweets you wish to extract. Note that here you can only extract the last 3,200 tweets. In this example, I chose to extract the tweets of Bill Gates.
The advantage here is that Bill Gates' account counts 3,169 tweets overall, which is under the 3,200 threshold.
Gates_tweets <- get_timeline("@BillGates", n = 3200)
In this part, I show you 8 key insights you should include in every Twitter Analytics Report. To do this, let's delve into the Twitter account of Bill Gates a bit more! The first part of any report should deliver clear information as to what worked best and what didn't. Finding out the best- and least-performing tweets gives a quick and clear overall picture of the account. In order to do this, you first need to distinguish between organic tweets, retweets and replies. The following lines of code show you how to remove the retweets and replies from your sample to keep only the organic tweets — content-wise, these are the ones you want to analyse!
# Remove retweets
Gates_tweets_organic <- Gates_tweets[Gates_tweets$is_retweet == FALSE, ]
# Remove replies
Gates_tweets_organic <- subset(Gates_tweets_organic, is.na(Gates_tweets_organic$reply_to_status_id))
Then, you'll want to analyse engagement by looking at two variables: favorite_count (i.e. the number of likes) and retweet_count (i.e. the number of retweets). Simply arrange them in descending order (with a minus "-" before the variable) to find the tweet with the highest number of likes or retweets, or in ascending order (without the minus) to find the one with the lowest engagement.
Gates_tweets_organic <- Gates_tweets_organic %>% arrange(-favorite_count)
Gates_tweets_organic[1,5]
Gates_tweets_organic <- Gates_tweets_organic %>% arrange(-retweet_count)
Gates_tweets_organic[1,5]
Analysing the ratio of replies, retweets and organic tweets can tell you a great deal about the type of account you're analysing. No one likes a Twitter account that exclusively retweets, for instance, without any individual content. A good ratio of replies, retweets and organic tweets is therefore a key metric to monitor if you wish to improve the performance of your account.
As a first step, make sure to create three different datasets. As you've already created a dataset containing only the organic tweets in the previous steps, now simply create one dataset containing only the retweets and one containing only the replies.
# Keeping only the retweets
Gates_retweets <- Gates_tweets[Gates_tweets$is_retweet == TRUE, ]
# Keeping only the replies
Gates_replies <- subset(Gates_tweets, !is.na(Gates_tweets$reply_to_status_id))
Then, create a separate data frame containing the number of organic tweets, retweets and replies. These numbers are easy to find: they are the numbers of observations in your three respective datasets.
# Creating a data frame
data <- data.frame(
  category = c("Organic", "Retweets", "Replies"),
  count = c(2856, 192, 120))
Once you've done that, you can start preparing your data frame for a donut chart as shown below. This includes adding columns that calculate the ratios and percentages, plus some visualisation tweaks such as specifying the legend and rounding your data.
# Adding columns
data$fraction = data$count / sum(data$count)
data$percentage = data$count / sum(data$count) * 100
data$ymax = cumsum(data$fraction)
data$ymin = c(0, head(data$ymax, n = -1))
# Rounding the data to two decimal points
data <- round_df(data, 2)
# Specify what the legend should say
Type_of_Tweet <- paste(data$category, data$percentage, "%")
ggplot(data, aes(ymax = ymax, ymin = ymin, xmax = 4, xmin = 3, fill = Type_of_Tweet)) +
  geom_rect() +
  coord_polar(theta = "y") +
  xlim(c(2, 4)) +
  theme_void() +
  theme(legend.position = "right")
Thanks to the date and hour extracted with each tweet, it is very easy to understand when Bill Gates tweets most. This gives an overall overview of the account's activity and can be a useful metric to analyse against the most and least performing tweets. In this example, I analyse the frequency of tweets by year. Note that you can also do so by month, by simply changing "year" to "month" in the following lines of code.
Alternatively, you can also analyse the publishing behaviour by hour with the R packages hms and scales.
colnames(Gates_tweets)[colnames(Gates_tweets) == "screen_name"] <- "Twitter_Account"
ts_plot(dplyr::group_by(Gates_tweets, Twitter_Account), "year") +
  ggplot2::theme_minimal() +
  ggplot2::theme(plot.title = ggplot2::element_text(face = "bold")) +
  ggplot2::labs(
    x = NULL, y = NULL,
    title = "Frequency of Tweets from Bill Gates",
    subtitle = "Tweet counts aggregated by year",
    caption = "\nSource: Data collected from Twitter's REST API via rtweet"
  )
Analysing the platform from which tweets are published is another cool insight to have. One of the reasons is that we can, to a certain extent, deduce whether or not Bill Gates is the one tweeting. As a result, this helps us define the personality of the tweets. In this step, you're interested in the source variable collected by the rtweet package. The following lines of code show you how to aggregate this data by type of source and count the frequency of tweets for each type. Note that I have only kept the sources from which more than 11 tweets were published, to simplify the visualisation.
Gates_app <- Gates_tweets %>%
  select(source) %>%
  group_by(source) %>%
  summarize(count = n())
Gates_app <- subset(Gates_app, count > 11)
Once this is done, the process is similar to the donut chart already created previously!
data <- data.frame(
  category = Gates_app$source,
  count = Gates_app$count)
data$fraction = data$count / sum(data$count)
data$percentage = data$count / sum(data$count) * 100
data$ymax = cumsum(data$fraction)
data$ymin = c(0, head(data$ymax, n = -1))
data <- round_df(data, 2)
Source <- paste(data$category, data$percentage, "%")
ggplot(data, aes(ymax = ymax, ymin = ymin, xmax = 4, xmin = 3, fill = Source)) +
  geom_rect() +
  coord_polar(theta = "y") + # Try removing this line to understand how the chart is built initially
  xlim(c(2, 4)) +
  theme_void() +
  theme(legend.position = "right")
Note that most of the tweets from Bill Gates originate from Twitter Web Client, Sprinklr and Hootsuite — an indication that Bill Gates is most likely not the one tweeting himself! A Twitter Analytics Report should of course include an analysis of the content of the tweets, and this includes finding out which words are used most. Because you're analysing textual data, make sure to clean it first and remove any characters that you don't want to show in your analysis, such as hyperlinks, @mentions or punctuation. The lines of code below provide basic cleaning steps for tweets.
Gates_tweets_organic$text <- gsub("https\\S*", "", Gates_tweets_organic$text)
Gates_tweets_organic$text <- gsub("@\\S*", "", Gates_tweets_organic$text)
Gates_tweets_organic$text <- gsub("amp", "", Gates_tweets_organic$text)
Gates_tweets_organic$text <- gsub("[\r\n]", "", Gates_tweets_organic$text)
Gates_tweets_organic$text <- gsub("[[:punct:]]", "", Gates_tweets_organic$text)
As a second step, make sure to remove stop words from the text. This is important for your analysis of the most frequent words, as you don't want commonly used words such as "to" or "and" to appear — these don't carry much meaning for your analysis.
tweets <- Gates_tweets_organic %>%
  select(text) %>%
  unnest_tokens(word, text)
tweets <- tweets %>% anti_join(stop_words)
You can then plot the most frequent words found in the tweets by following the simple steps below.
# Bar chart of the most frequent words found in the tweets
tweets %>%
  count(word, sort = TRUE) %>%
  top_n(15) %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(x = word, y = n)) +
  geom_col() +
  xlab(NULL) +
  coord_flip() +
  labs(y = "Count", x = "Unique words",
       title = "Most frequent words found in the tweets of Bill Gates",
       subtitle = "Stop words removed from the list")
You can do the same analysis with the hashtags. In this case, you'll want to use the hashtags variable from the rtweet package. A nice way to visualise these is a word cloud, as shown below.
Gates_tweets_organic$hashtags <- as.character(Gates_tweets_organic$hashtags)
Gates_tweets_organic$hashtags <- gsub("c\\(", "", Gates_tweets_organic$hashtags)
set.seed(1234)
wordcloud(Gates_tweets_organic$hashtags, min.freq = 5, scale = c(3.5, .5),
          random.order = FALSE, rot.per = 0.35, colors = brewer.pal(8, "Dark2"))
Retweeting extensively from one account is usually not what someone looks for in a Twitter account. A helpful insight is therefore to monitor and understand from which accounts most retweets originate. The variable you'll want to analyse here is retweet_screen_name, and the process to visualise it is similar to the word clouds described previously.
set.seed(1234)
wordcloud(Gates_retweets$retweet_screen_name, min.freq = 3, scale = c(2, .5),
          random.order = FALSE, rot.per = 0.25, colors = brewer.pal(8, "Dark2"))
Finally, you may want to add a sentiment analysis at the end of your Twitter Analytics Report. This is easy to do with the package "syuzhet" and allows you to deepen your analysis further by grasping the tone of the tweets. No one likes a Twitter account that only spreads angry or sad tweets.
Capturing the tone of your tweets and how they balance out is a good indication of your account's performance.
library(syuzhet)
# Converting tweets to ASCII to tackle strange characters
tweets <- iconv(tweets, from = "UTF-8", to = "ASCII", sub = "")
# Removing retweets, in case needed
tweets <- gsub("(RT|via)((?:\\b\\w*@\\w+)+)", "", tweets)
# Removing mentions, in case needed
tweets <- gsub("@\\w+", "", tweets)
ew_sentiment <- get_nrc_sentiment((tweets))
sentimentscores <- data.frame(colSums(ew_sentiment[,]))
names(sentimentscores) <- "Score"
sentimentscores <- cbind("sentiment" = rownames(sentimentscores), sentimentscores)
rownames(sentimentscores) <- NULL
ggplot(data = sentimentscores, aes(x = sentiment, y = Score)) +
  geom_bar(aes(fill = sentiment), stat = "identity") +
  theme(legend.position = "none") +
  xlab("Sentiments") + ylab("Scores") +
  ggtitle("Total sentiment based on scores") +
  theme_minimal()
In this article, I aimed to show how to extract and analyse tweets using the free-to-use programming language R. I hope you found this guide helpful for building your own Twitter Analytics Report, which includes:
Showing which tweets worked best and which didn't.
The ratio of organic tweets/replies/retweets, the time of tweet publication and the platforms from which tweets are published. These are all insights regarding tweeting behaviour.
The most frequent words used in the tweets, the hashtags, the accounts from which most retweets originate, and a sentiment analysis capturing the tone of the tweets. These are all insights on the content of the tweets.
I regularly write articles about Data Science and Natural Language Processing. Follow me on Twitter or Medium to check out more articles like these or simply to keep updated about the next ones!
Node.js Exercises With Solutions
It is often necessary for a network application to make external HTTP calls. HTTP servers are also often called upon to perform HTTP services for clients making requests. Node.js provides an easy interface for making external HTTP calls. For example, the following code will fetch the front page of 'google.com'.
var http = require('http');
http.request({
  host: 'www.google.com',
  method: 'GET',
  path: "/"
}, function(response) {
  response.setEncoding("utf8");
  response.on("readable", function() {
    console.log(response.read());
  });
}).end();
Whenever a request is made to an HTTP server, the request object will contain a url property identifying the targeted resource. This is accessible via request.url. Node's URL module is used to decompose a typical URL string into its constituent parts:
console.log(url.parse("http://www.etutorialspoint.com/index.php/nodejs/node-js-filesystem"));
The output of the above code lists the URL's constituent parts, such as the protocol, host and path. Cookies are pieces of content that are sent to a user's web browser. Cookies are small data that are stored on the client side and sent back to the server with subsequent requests. The HTTP header Set-Cookie is a response header used to send cookies from the server to the user agent, so the user agent can send them back to the server later and the server can recognise the user. The given program checks the request header for cookies.
var http = require('http');
var url = require('url');
var server = http.createServer(function(request, response) {
  var cookies = request.headers.cookie;
  if (!cookies) {
    var cookieName = "session";
    var cookieValue = "123456";
    var expiryDate = new Date();
    expiryDate.setDate(expiryDate.getDate() + 1);
    var cookieText = cookieName + '=' + cookieValue + ';expires=' + expiryDate.toUTCString() + ';';
    response.setHeader('Set-Cookie', cookieText);
    response.writeHead(302, { 'Location': '/' });
    return response.end();
  }
  // Parse the cookie header into an object of name/value pairs
  var parsed = {};
  cookies.split(';').forEach(function(cookie) {
    var m = cookie.match(/(.*?)=(.*)$/);
    parsed[m[1].trim()] = (m[2] || '').trim();
  });
  response.end("Cookies set: " + JSON.stringify(parsed));
}).listen(8080);
JavaScript has powerful regular expression support. A number of string functions can take regular expressions as arguments to perform their work. These regular expressions can either be entered in literal format or as a call to the constructor of a RegExp object. The RegExp object is used for matching text with a pattern.
console.log("aaewewedsdewddsxac".replace(new RegExp("[Aa]{2,}"), "b"));
bewewedsdewddsxac
Objects are one of the core workhorses of the JavaScript language, and something you will use all the time. They are an extremely dynamic and flexible data type, and you can add and remove things from them with ease.
var user = { first_name: "John", last_name: "Smith", age: "38", department: "Software" };
console.log(user);
console.log(Object.keys(user).length);
delete user.last_name;
console.log(user);
console.log(Object.keys(user).length);
{ first_name: 'John', last_name: 'Smith', age: '38', department: 'Software' }
4
{ first_name: 'John', age: '38', department: 'Software' }
3
JSON (JavaScript Object Notation) is a lightweight, open-standard data-interchange format. It is easy for humans to read and write.
It is used primarily to transmit data between a web application and a server. The given example asynchronously converts a directory tree structure into a JavaScript object. Install it with NPM:
npm install dir-to-json --save
var dirToJson = require('dir-to-json');
dirToJson("./album", function(err, dirTree) {
  if (err) {
    throw err;
  } else {
    console.log(dirTree);
  }
});
To connect to a MySQL server, we will create a connection with the mysql module. First include the mysql module, then call the createConnection method. Here is the sample code to connect to a database.
var mysql = require("mysql");
var con = mysql.createConnection({
  host: "hostname",
  user: "username",
  password: "password",
  database: "database"
});
con.connect(function(err) {
  if (err) {
    console.log('Error connecting to Db');
    return;
  }
  console.log('Connection Established');
});
con.end(function(err) { });
JavaScript arrays provide a forEach() function that is used to iterate over the items in a given array.
const arr = ['fish', 'crab', 'dolphin', 'whale', 'starfish'];
arr.forEach(element => {
  console.log(element);
});
fish
crab
dolphin
whale
starfish
The Readline module in Node.js allows reading an input stream line by line. The given Node.js code opens the file 'demo.html' and returns the content line by line.
var readline = require('readline');
var fs = require('fs');
var file = readline.createInterface({
  input: fs.createReadStream('demo.html')
});
var lineno = 0;
file.on('line', function(line) {
  lineno++;
  console.log('Line number ' + lineno + ': ' + line);
});
MongoDB is one of the most popular databases used along with Node.js. We need a driver to access Mongo from within a Node application. There are a number of Mongo drivers available; the official mongodb driver is among the most popular. To install the MongoDB module, run the command below:
npm install mongodb
Once installed, the given code snippet shows how to create and close a connection to a MongoDB database.
var MongoClient = require('mongodb').MongoClient;
var url = "mongodb://localhost:27017/mydb";
MongoClient.connect(url, function(err, db) {
  if (err) throw err;
  console.log("Database created!");
  db.close();
});
Node.js provides the Zlib module to zip a file. The given example demonstrates this:
var zlib = require('zlib');
var fs = require('fs');
var gzip = zlib.createGzip();
var r = fs.createReadStream('./demofile.txt');
var w = fs.createWriteStream('./demogzipfile.txt.gz');
r.pipe(gzip).pipe(w);
In the given Node.js program, we use a try/catch block around the piece of code that tries to read a file synchronously.
var fs = require('fs');
try {
  // file not present
  var data = fs.readFileSync('demo.html');
} catch (err) {
  console.log(err);
}
Baseball and Machine Learning: A Data Science Approach to 2021 Hitting Projections | by John Pette | Towards Data Science
Sports world personalities love to bash analytics these days. It’s hard to go five minutes listening to baseball talk on sports radio without hearing someone make a derogatory comment about “the nerds taking over.” Ironically, they then immediately launch into sports betting ads, and guess what, folks...any time you place a bet because “the Giants always lose in Washington” or whatever, that’s a rudimentary form of analytics, minus the actual modeling. I’ve also noticed a prevalent opinion that advanced statistics and modeling will produce a single (read: boring) way of playing the game. If that happens, it is due to a lack of imagination. I firmly believe that the many facets of analytics represent one type of baseball knowledge. Traditional baseball instincts and experience are another, and you will get the best results when both fields are working in unison. I do think that there is room for analytics to expand in baseball and get significantly deeper into the data science realm. This is to say that I finally got it together enough to construct machine learning models to build my baseball projections this year. Just a heads up: this article is going to be dense. I’m going to walk you through my methodology, thought process, and missteps as I worked through these models. It will be a little heavy on the data science for baseball-centric readers and a little heavy on the baseball for the data scientists. But it’s all pretty cool, if you care about either of those topics. This will be long, so I’m going to break it into two articles: one for hitting and one for pitching. For context, I built these models in March, prior to the season, so I have included no data from the first few weeks of this season. It has just taken some time to put the article together. This is the easy part. FanGraphs. Always start at FanGraphs. 
Going into this, I wanted to consider every piece of data that might be available to me, and minimize preconceived notions about what would and would not be predictive...so I started with everything. Everything? <paraphrase><garyoldman>EEEEVVVERYTHIIIIING! </garyoldman></paraphrase> Seriously, though...you can do this. FanGraphs allows you to pull every statistic they keep and export it into a tidy CSV file. And that's precisely what I did, for each year from 2015–2020. Doing this will get you a good chunk of repetitive data, including many columns that are perfectly correlated — we'll sort that all out as we go. The point of this was to predict specific stats for 2021, which meant this had to be a regression approach rather than a classification approach (or so I thought...more to come on that). Regression problems restrict options substantially more than classification problems: this is an oversimplification, but I was mainly choosing from among linear regression, random forest, and XGBoost. Linear regression would not have been a great choice here, as it assumes independence among its input variables, and that was very much not the case here. It is also limited to finding linear relationships. Random forest regression would have been a perfectly reasonable choice, but I selected XGBoost to build the models, as it is an improvement on random forest. XGBoost is a popular ensemble decision tree algorithm that combines many successive decision trees — each tree learns from its predecessors and improves upon the residual errors of previous trees. It also tends to perform very well in these types of problems. My planned steps were:
1. Clean and prep data.
2. Identify target variables.
3. Fit models to the 2017 and 2018 data, trying to predict the subsequent year's statistics.
4. Tune the models' hyperparameters to their optimal settings.
5. Combine 2017 and 2018 data, retrain models, re-tune hyperparameters, and assess for differences.
6.
Use the resulting models to predict 2021 statistics using a blended input data set from 2019 and 2020. Let’s unpack this last piece. 2020 posed a problem in that, you may recall, we had a bit of a pandemic issue, so we only had 60 games. My initial plan for this was to make a blended data set from 2019 and 2020. I did this by preparing weighted averages of stats for the two years and then scaling them to 162 games. If a player did not play in 2020, I used 2019 totals. The benefit to this approach was that I had a data set that was not overly reliant on the abbreviated season surrounded by the most abnormal set of circumstances. The major drawback was that I lost a year of data for model training, so I had to use 2017 and 2018. Ultimately, I decided it was worse to lose the year of data. I ended up incorporating 2019 data and scaling 2020 data to 162 games (this was far from a perfect solution, but it worked better than I would have thought...we’ll get to that). The data created some challenges, but not of the kind one normally sees with real-world data sets. It was nice to know up front that these numbers were largely clean. There were some null values to deal with, but that was minimally painful. The vast majority of the gaps were in the Statcast fields, and it was fairly obvious that those were largely because the events in question did not occur. It’s a little tough to peg down a pitcher’s average slider velocity if they don’t throw sliders. My general approach to missing data was: if less than 60% of a column was populated, I cut the column, as it would not have been useful to impute that many missing values. Otherwise, I filled in gaps in two ways: for percentages, I filled nulls with zeroes. This seemed logical, as the percentages related to other fields containing events that did not occur. Otherwise, I filled in nulls with the median values in each column (for each year — I kept the years separate throughout data prep). 
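The missing-data rules above are mechanical enough to express in a few lines of pandas. The sketch below is a minimal illustration, not the author's actual code: `prep_missing` is a hypothetical helper, and `pct_cols` stands in for whichever columns hold percentage-type stats.

```python
import pandas as pd

def prep_missing(df, pct_cols, min_coverage=0.6):
    """Apply the missing-data rules described above: drop columns under
    60% populated, zero-fill percentage columns (the event never
    occurred), and median-fill the remaining numeric gaps."""
    # Keep only columns with at least `min_coverage` non-null values
    keep = [c for c in df.columns if df[c].notna().mean() >= min_coverage]
    df = df[keep].copy()
    # Percentage-type columns: a missing value means the event did not occur
    for c in pct_cols:
        if c in df.columns:
            df[c] = df[c].fillna(0)
    # Everything else numeric: fill with the column median
    num_cols = df.select_dtypes("number").columns
    df[num_cols] = df[num_cols].fillna(df[num_cols].median())
    return df
```

Run per year, since the article keeps the years separate throughout data prep.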
There were a few fields I added manually. These will likely play more of a role in subsequent studies, but I wanted to see how they did in these models. I added variables for league (0 for AL, 1 for NL), whether the player changed teams and/or leagues in the prior off-season, and whether they changed teams/leagues during the season, accounting for trades/cuts/signings. There was some grey area with this last piece, as there were several examples of players moving among more than two teams during the year. I made judgment calls on these. If someone bounced around to four teams, I looked at where they played the most games. If a guy played for six years for an AL team, was signed by an NL team in the off-season, and played seven games for them before being traded back to the AL for the rest of the year...I did not count that as changing leagues. At any rate, these were all numerically encoded. I one-hot-encoded players’ teams to see if there were any effects seen from being on a particular team. I also added lag variables for about 20 statistics (i.e. the values of those statistics from the prior year). I said I wanted to eliminate preconceived notions about what was predictive, but I also needed to make sure not to overlook simple concepts like, “RBI for each of the last two years were the most predictive measure for RBI in the following season.” The down side to including lag variables is there were quite a few players who did not accumulate statistics in the preceding seasons. I opted to remove those cases from the data set. This was a tough decision, but my assumption here was that there was more to be gained from using the lag variables than I would lose by excluding those without prior-year data. I also thought it would be incorrect to use median values in this case. 
If I had done that, for example, every single rookie season represented in the data would have assumed that the player had effectively league-average performance in the prior year, and I think that would be a very incorrect assumption that would have affected the models more than cutting those rows. What are we predicting, anyway? Initially, I focused on ten statistics: plate appearances (PA), runs (R), home runs (HR), runs batted in (RBI), stolen bases (SB), caught stealing (CS), batting average (AVG), on-base percentage (OBP), and on-base percentage + slugging percentage (OPS). I added target columns for each of those statistics to each dataset from 2016–2019. This just involved mapping the respective fields from the following year's data sets. For 2019, I used a version of the 2020 data scaled to 162 games for the targets. This was initially for exploratory purposes. Stay tuned. Any player that did not have data in the next year was cut from that year's data set. The targets for this algorithm cannot contain null values. For fun, I also included the "dollar value" (Dol) figures calculated by FanGraphs. I did not expect these to perform well, but as they are based on overall production, I thought it would be good to test them. If the results had come out to similar quality levels as the individual stats, it would have been an interesting result. Now, we can only predict one of these target variables at a time, so that means eleven different sets of models. No problem. The title says it all here. I ran a set of models including basically everything in the set of input variables and took a look at which of them carried the greatest influence. I tuned the hyperparameters to some degree at this stage, mainly to see if and by how much the influence of the input variables changed. I won't go into too much more detail here — the upshot is I did some feature engineering and trimmed down my input dimensions.
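The lag features and the target columns are two sides of the same mechanic: lining each player-season up with the same player's prior and following seasons. A minimal pandas sketch, assuming hypothetical `Name`/`Season` columns and one row per player per consecutive season (not the author's actual code):

```python
import pandas as pd

def add_lag_and_target(df, stat_cols, player_col="Name", year_col="Season"):
    """For each stat, attach last season's value (lag feature) and
    next season's value (prediction target) by shifting within player."""
    df = df.sort_values([player_col, year_col]).copy()
    for stat in stat_cols:
        df[f"{stat}_lag1"] = df.groupby(player_col)[stat].shift(1)    # prior year
        df[f"{stat}_target"] = df.groupby(player_col)[stat].shift(-1) # next year
    # Mirror the cuts described above: drop rows with no prior-year data
    # (no lag) and rows with no following season (no valid target)
    lag_cols = [f"{s}_lag1" for s in stat_cols]
    tgt_cols = [f"{s}_target" for s in stat_cols]
    return df.dropna(subset=lag_cols + tgt_cols)
```

With this shape, a rookie season never receives an imputed "league-average" prior year; it simply falls out of the training set, as the article chose to do.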
Essentially, I cut all of the Statcast fields, as they did not seem to move the needle in predicting any of the target variables. I also dropped all of the team variables for the same reason. Using this pruned dataset, I ran new XGBoost regression models and tuned the hyperparameters with a little more scrutiny. I also tested their consistency from year to year: I made models for the 2017, 2018, and 2019 datasets separately, predicting the target variables for the following year. I did my hyperparameter tuning in several steps. I started with some common mid-range values:
param_dict = {'n_estimators': 500,
              'learning_rate': 0.01,
              'max_depth': 5,
              'subsample': 0.1,
              'colsample_bytree': 0.3}
Then, I performed grid searches in a few successive rounds. First, I tuned maximum tree depth, a key hyperparameter for preventing model overfitting. Without a maximum depth, the algorithm can build many-level decision trees that can fit your training data perfectly, but will never generalize to other data (XGBoost does assume a default max_depth of 6, but it is still good practice to optimize it). Next, I tuned colsample_bytree and subsample together. These two are related. Subsampling selects a specified portion of your training data as the algorithm grows its decision trees, and that further helps prevent overfitting. After that, I tuned learning rate, which is as it sounds: you can specify how quickly the algorithm learns from each decision tree iteration. The final iteration was tuning the number of estimators, which is the quantity of trees in the model. At each stage, I rewrote the parameter dictionary above to match the best parameters. I packaged all of the above in a single function and then ran it for each of the years of training data. To my surprise, the tuned hyperparameters produced exactly the same best parameters for each year (different for each statistic, but consistent in each year). That was both suspicious and encouraging.
The precision and accuracy were quite similar for each year’s data. Interestingly, while the 2019 data was worse than the other years, it was not appreciably worse, despite being based on scaled data from a 60-game season played during the COVID-19 pandemic. It performed well enough that I was able to include it in my master data set, which combined 2016–2019. Most of the models performed well. I would say the output was perfectly acceptable and would be comfortable putting them into production if we were going for general accuracy. Here’s the problem: machine learning, in general, does very well predicting values that are not outliers. Unfortunately, in baseball, we care most about predicting the outliers. We want to know who stands out above the rest and who falls flat. To this end, the XGBoost regression models did not cut it. Here’s an example. This is the tuned model’s performance for RBI: As you can see, the predictions are in the middle. And it largely predicts well...except that it misses basically every 90+ RBI season as well as every one below 30. Not ideal. So what do we do about this? My approach: I basically turned this into a hybrid regression/classification exercise. For each statistic, we know where the regression models do well and where they don’t, and we know generally what those outlier ranges are. Using those ranges, I was able to bucket the stats into tiers quite easily, and the tiers enabled me to build XGBoost classifier models to try to predict them. The plan was to combine the results from the regression exercise and the classification exercise in a way that produced a full range of projections. Here’s an example: the regression model for home runs predicted anything from 6–29 reliably. It faltered with 0–5 and with 30+. I encoded my training data such that 0–5 was 0, 6–29 was 1, and 30+ was 2. The classifier algorithm would just try to predict 0, 1, and 2 for each player, based on all of the input stats used above. 
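The HR bucketing described above can be sketched with pandas. The 0–5 / 6–29 / 30+ cut points come from the text; the sample values are made up for illustration:

```python
import pandas as pd

hr = pd.Series([0, 3, 8, 15, 29, 31, 45], name='HR')

# Tier 0: 0-5 HR, tier 1: 6-29 (where the regressor is reliable), tier 2: 30+
tiers = pd.cut(hr, bins=[-1, 5, 29, 1000], labels=[0, 1, 2]).astype(int)
print(tiers.tolist())  # -> [0, 0, 1, 1, 1, 2, 2]
```

Anything the classifier places in tier 1 keeps its regression prediction; tiers 0 and 2 get values mapped into the assumed outlier ranges instead.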
For anything predicted as a 1, I used the regression predictions. For 0 and 2, I used the 0–5 and 30+ ranges, respectively, and then mapped the output from the regression models to my assumptions of what those ranges would be.

One additional complicating factor: even bucketed into tiers, these values are still outliers, meaning we are dealing with imbalanced data. This is a bit of an issue, as the algorithm can pick the majority class for everything and be largely accurate. I employed a method of oversampling the data to address this. Oversampling is a method that replicates the underrepresented data to balance out the data set for modeling. I tried a few different approaches, and ultimately settled on SMOTE (Synthetic Minority Oversampling Technique), which builds synthetic examples of underrepresented data rather than adding straight duplicates.

I ran into a little trouble at first with this approach, as the cross-validation step in my models seemed to counteract the oversampling. I learned that I was oversampling incorrectly, applying it before splitting into cross-validation folds. This article gives a good/more in-depth explanation of how to implement this properly (and where you can go wrong). I ultimately implemented it via a pipeline with the imbalanced-learn package, which is specifically designed for this purpose.

import xgboost as xgb
from sklearn.model_selection import cross_val_score
from imblearn.pipeline import Pipeline, make_pipeline
from imblearn.over_sampling import SMOTE

RANDOMSTATE = 120

# xtrain, ytrain, param_dict and kfold are defined earlier in the workflow
imb_pipeline = make_pipeline(SMOTE(random_state=RANDOMSTATE),
                             xgb.XGBClassifier(eval_metric='merror', **param_dict,
                                               verbosity=0, use_label_encoder=False))

scores = cross_val_score(imb_pipeline, xtrain, ytrain, scoring='f1_micro', cv=5)
print("Mean cross-validation score: %.2f" % scores.mean())

kf_cv_scores = cross_val_score(imb_pipeline, xtrain, ytrain, scoring='f1_micro', cv=kfold)
print("K-fold CV average score: %.2f" % kf_cv_scores.mean())

I defined success with these classifier algorithms as their ability to predict outliers correctly.
I looked at precision (the fraction of predicted outliers that were actually outliers) and recall (the fraction of all true outliers that were identified), and how those measures changed after applying the SMOTE technique. All showed some improvement. Overall, precision was very good: of the stats flagged as outliers, the algorithms did a very good job of picking them correctly. Recall was not great, though, meaning there were a lot of outliers the algorithms did not find. Here’s the summary:

But good lord, man...where are the results?!? Tell us what happened!

Okay, okay. Let’s take it stat by stat. I’ll show the best hyperparameters from tuning the algorithm, and then show you which input variables had the greatest impact on the model using the fantastic summary plots available in the SHAP package. I find these to be amazing explanatory tools: the input features are listed in order of significance to the models. For each feature, you see a red-to-blue spectrum: red is higher value, blue is lower value. The various points are laid out left to right showing the degree of impact on the model. So if all the red points for RBI Lag_1 (i.e. RBI from the year before last) are way on the right, it means higher RBI values from 2019 had stronger positive impact on the model than lower ones.

Some of these results will be intuitive. Take stolen bases, for example: the most predictive variables were prior year stolen bases, speed (Spd), stolen bases from two years prior, prior year caught stealing, base running (BsR), and caught stealing from two years prior. That all seems obvious. Some of the other stats were not. What I found most striking was how much age figured into these. I knew that it would be significant, but for half of these stats, it was the #1 predictor, and it was in the top five for most.
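The outlier-focused precision and recall described above can be computed with scikit-learn by restricting the labels argument to just the outlier tiers. The tier labels and predictions below are made up for illustration (1 is the "normal" majority tier; 0 and 2 are the outlier tiers):

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 2, 2, 1, 1]
y_pred = [1, 1, 1, 1, 0, 1, 2, 1, 1, 1]

# Precision/recall averaged over the outlier tiers only (labels 0 and 2)
prec = precision_score(y_true, y_pred, labels=[0, 2], average='macro', zero_division=0)
rec = recall_score(y_true, y_pred, labels=[0, 2], average='macro', zero_division=0)
print(prec, rec)  # -> 1.0 0.5
```

This toy case mirrors the pattern in the text: every flagged outlier is correct (high precision), but half of the true outliers are missed (low recall).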
Best hyperparameters:{'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3, 'subsample': 0.5, 'colsample_bytree': 0.25} Best hyperparameters:{'n_estimators': 400, 'learning_rate': 0.01, 'max_depth': 4, 'subsample': 0.5, 'colsample_bytree': 0.15} Best hyperparameters:{'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 4, 'subsample': 0.6, 'colsample_bytree': 0.3} Best hyperparameters:{'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 6, 'subsample': 0.4, 'colsample_bytree': 0.3} Best hyperparameters:{'n_estimators': 400, 'learning_rate': 0.01, 'max_depth': 3, 'subsample': 0.4, 'colsample_bytree': 0.35} Best hyperparameters:{'n_estimators': 300, 'learning_rate': 0.01, 'max_depth': 3, 'subsample': 0.7, 'colsample_bytree': 0.35} Best hyperparameters:{'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3, 'subsample': 0.4, 'colsample_bytree': 0.25} Best hyperparameters:{'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3, 'subsample': 0.5, 'colsample_bytree': 0.15} Best hyperparameters:{'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3, 'subsample': 0.7, 'colsample_bytree': 0.15} Best hyperparameters:{'n_estimators': 500, 'learning_rate': 0.01, 'max_depth': 3, 'subsample': 1.0, 'colsample_bytree': 0.35} Once I had my completed projections, I calculated scores based on my own scoring formulas. I wanted to see how they compared to published projections, so I used Derek Carty’s fabulous THEBAT projections (I consider these to be the most technologically advanced of the projection systems). I then scored them the same way I scored my own so that I could examine the differences. I only looked at those players with average draft position of 300 or lower, because, well, those are more interesting. First, we have the players who my projections said would do better than THEBAT: The first big takeaway here is that my projections do not take into account whether players have starting jobs. 
That’s why you see guys like Pillar and Villar on here. The algorithms don’t know that those players were signed as backups. We’ll see how these play out over the course of the 2021 season. Now, let’s look at where the models have predicted worse production than THEBAT: The biggest takeaways that I see with this set are:

1) My models have punished poor 2020 performance more than THEBAT. As I discussed earlier, 2020 data has a lot of problems baked into it. There’s just so much that we don’t know from 60 games being played while the world was falling apart.
2) My models are much more pessimistic about players who have missed a lot of time to injuries.

It will be really interesting to revisit these at the end of the year. If you’ve made it this far, congratulations. I hope you found this interesting. The pitching version of this will follow in the next week or so. What did you think of my approach? What should I do differently in future iterations? I am making my code and data files available on GitHub.
Matplotlib.axis.Axis.get_major_locator() function in Python - GeeksforGeeks
03 Jun, 2020

Matplotlib is a library in Python and it is a numerical-mathematical extension for the NumPy library. It is an amazing visualization library in Python for 2D plots of arrays and is used for working with the broader SciPy stack. The Axis.get_major_locator() function in the axis module of the matplotlib library is used to get the locator of the major ticker.

Syntax: Axis.get_major_locator(self)
Parameters: This method does not accept any parameters.
Return value: This method returns the locator of the major ticker.

Below examples illustrate the matplotlib.axis.Axis.get_major_locator() function in matplotlib.axis:

Example 1:

Python3

# Implementation of matplotlib function
import numpy as np
from matplotlib.axis import Axis
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 1000)
y = np.sin(2 * x)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y)
ax.grid()

print(Axis.get_major_locator(ax.yaxis))

plt.title("Matplotlib.axis.Axis.get_major_locator()\nFunction Example",
          fontsize=12, fontweight='bold')
plt.show()

Output:

<matplotlib.ticker.AutoLocator object at 0x07D67ED0>

Example 2:

Python3

# Implementation of matplotlib function
import numpy as np
from matplotlib.axis import Axis
import matplotlib.pyplot as plt

np.random.seed(19680801)

fig, ax = plt.subplots()
x, y, s, c = np.random.rand(4, 100)
s *= 100

ax.scatter(x, y, s, c)
ax.grid()

print(Axis.get_major_locator(ax.xaxis))

plt.title("Matplotlib.axis.Axis.get_major_locator()\nFunction Example",
          fontsize=12, fontweight='bold')
plt.show()

Output:

<matplotlib.ticker.AutoLocator object at 0x07CA9FD0>
How to Use MultiIndex in Pandas to Level Up Your Analysis | by Byron Dolon | Towards Data Science
What if you could have more than one column acting as your DataFrame’s index? The multi-level index feature in Pandas allows you to do just that. A regular Pandas DataFrame has a single column that acts as a unique row identifier, or in other words, an “index”. These index values can be numbers, from 0 to infinity. They can also be more detailed, like having “Dish Name” as the index value for a table of all the food at a McDonald’s franchise. But what if you owned two McDonald’s franchises, and wanted to compare the sales of one dish across both franchises? While the groupby() function in Pandas would work, this case is also an example of where a MultiIndex could come in handy. A MultiIndex, also known as a multi-level index or hierarchical index, allows you to have multiple columns acting as a row identifier, while having each index column related to another through a parent/child relationship. At the end of this piece, we’ll have answered the following questions by creating and selecting from a DataFrame with a hierarchical index:

Which characters speak in the first chapter of “The Fellowship of the Ring”? (answered with .loc)
Who are the first three elves to speak in “The Fellowship of the Ring”? (answered with .loc)
How much do Gandalf and Saruman talk in “The Two Towers”? (answered with .loc)
How much does Isildur speak in all of the films? (answered with .xs)
Which hobbits speak the most in each film and across all three films? (answered with a pivot table and .loc)

You can find the data used in this article here. We will be using data from “The Lord of the Rings” films, specifically the “WordsByCharacter.csv” file in the data set. This file will have each character’s number of words spoken in each scene of every movie. As always, don’t forget to import pandas before trying any of the code.

import pandas as pd

# load data
df = pd.read_csv('WordsByCharacter.csv')

Let’s dive into how multi-level indexes can be used for data analysis.
A hierarchical index means that your DataFrame will have two or more dimensions that can be used to identify every row. To get the original DataFrame’s index label, we can use this code:

df.index.names

This outputs a “FrozenList”, which is just a Pandas-specific construct used to show the index label(s) of a DataFrame. Here, we see the value is “None”, as this is the default value of a DataFrame’s index. To create a MultiIndex with our original DataFrame, all we need to do is pass a list of columns into the .set_index() Pandas function like this:

multi = df.set_index(['Film', 'Chapter', 'Race', 'Character'])

Here, we can already see that the new DataFrame called “multi” has been organized so that there are now four columns that make up the index. We can check this by looking at the index names once more:

multi.index.names

We now see that the value “None” earlier has been replaced by the names of the four columns we assigned to be our new index. Each index value in the regular, unaltered DataFrame would just be a number from 0 to 730 (because the DataFrame has 731 rows). To show you what each index value is in our newly created MultiIndex, we can use this line of code:

multi.index.values

Now, we see that each row value in the “Words” column can be identified by which film it came from, what chapter it refers to, what race the character who spoke the word is, and that character’s name. One thing to note before we dive into some analysis is the .sort_index() Pandas function. When creating a DataFrame with a MultiIndex, make sure to append that to the end of the line of code like this:

multi = df.set_index(['Film', 'Chapter', 'Race', 'Character']).sort_index()

The Pandas documentation has this note on it:

Indexing will work even if the data are not sorted, but will be rather inefficient (and show a PerformanceWarning). It will also return a copy of the data rather than a view.
It’s also worth noting that you can remove the hierarchical index simply by passing .reset_index() to your DataFrame, like this:

multi.reset_index()

This will just return the original DataFrame without the parent/child relationships on multiple index columns. Now we’re ready to perform some analysis! Here, we’re going to use the trusty .loc Pandas function. If you haven’t used it before, feel free to check out this quick guide on vectorized Pandas functions. Here, we are interested in two components of our index, namely “Film” and “Chapter”. In our .loc selector, we write the following:

multi.loc[('The Fellowship Of The Ring', '01: Prologue'), :]

You can select your desired value of each part of the MultiIndex of your DataFrame by passing it into a tuple. In the first part of the selector we just wrote, we passed the tuple ('The Fellowship Of The Ring', '01: Prologue'). We didn’t need to pass values for “Race” or “Character”, because we don’t know who spoke in the first chapter yet. Notice that the “Film” and “Chapter” index columns are no longer present in the view. This is because we already input “The Fellowship Of The Ring” for “Film” and “01: Prologue” as values in the tuple, so the output does not need to duplicate these values in the view. Also, the : is inserted as the final argument in the .loc selector to indicate that we want all columns to be displayed. In our case, the DataFrame only has one column called “Words”, as every other column in the DataFrame has been converted to an index column. However, if you had multiple columns in your DataFrame and only wanted to see a subset, you would list the column names instead of :. Now, we know that Elrond, Galadriel, Gollum, and Bilbo spoke in the first scene of the first movie. Let’s move on to a slightly more complicated example. Here, we’ll still be using .loc, but now we’re going to use some syntax that will give us some flexibility when selecting from our MultiIndex values.
To answer this question, we’re interested in the “Film” and “Race” index columns. This time, we’re “skipping” an index column, as “Chapter” lies between “Film” and “Race”. We don’t yet know which “Chapters” elves first speak in, so we need to leave that blank. To do so, we’re going to use this line of code:

multi.loc[('The Fellowship Of The Ring', slice(None), 'Elf'), :].head(3)

Again, we pass a tuple in with our desired index values, but instead of adding values for “Chapter”, we pass slice(None). This is the default slice command in Pandas to select all the contents of the MultiIndex level. So here, we are selecting all possible “Chapter” values. Now, we know that Elrond, Galadriel, and Arwen are the first three elves to speak in the first film. But so far, all we’ve been doing is passing one value per index column into the .loc selectors. What if we wanted to get more than one value? In this question, we’re interested in all “Chapters” of the second “Film”, both of which we already know how to select. But this time, we’re interested in two values for the “Character” index. To get the answer, we write this line of code:

multi.loc[('The Two Towers', slice(None), slice(None), ['Gandalf', 'Saruman']), :]

All we need to do here is pass a list of our desired index values to the tuple we insert into .loc. Since we don’t know which “Chapter” or “Race” to get, we pass slice(None) for these two index values before getting to “Character”. So far, we’ve had multiple conditions in our selections, which is why using .loc was handy. But what if we only wanted to specify one level of the index? It would be tiresome to write slice(None) three times, so we’ll turn to another method. To answer this question, we only need to specify one index level value for “Character”. We will use the xs() (cross-section) method in Pandas, which allows you to specify which part of the MultiIndex you want to search across.
We use the following code:

multi.xs('Isildur', level='Character').sum()

At its most basic, xs() requires an input for the index value you want to look for and the level you want to search in. In this case, we want the “Character” level, so we pass that into level=, and we want the value for “Character” to be “Isildur”. Then, to aggregate all the values, we add .sum() to the end of the line. I cheated a bit so I could verify the accuracy of the data, since I knew the answer to this (the one word Isildur says in the whole trilogy is “No”). But you could also use this same line of code to find how much any character spoke, simply by substituting their name as a parameter in xs(). To answer this question, it would be great if we had one table with the “Words” values aggregated for every character across every film. This is a great place to create a pivot table! We’re going to use the .pivot_table() function from Pandas, but you’ll see that we pass a list into the index= parameter setting to create a MultiIndex again. Our code to create the pivot table will look like this:

pivoted = df.pivot_table(index=['Race', 'Character'],
                         columns='Film',
                         aggfunc='sum',
                         margins=True,            # total column
                         margins_name='All Films',
                         fill_value=0).sort_index()

order = [('Words', 'The Fellowship Of The Ring'),
         ('Words', 'The Two Towers'),
         ('Words', 'The Return Of The King'),
         ('Words', 'All Films')]

pivoted = pivoted.sort_values(by=('Words', 'All Films'), ascending=False)
pivoted = pivoted.reindex(order, axis=1)

Here, our multi-level index has two components instead of the four from the previous example. This is because we want the “Film” values to also be a column and not an index. We also don’t need to see the “Chapter” values, so that’s excluded. As we’re interested in the total number of words, we want our aggfunc (aggregate function) parameter to be “sum”. Pandas also has a built-in total column for the .pivot_table() function.
All you need to do is pass margins=True to enable it, and optionally set the name of the total column in the margins_name parameter. Because we want to see the total number of words for every “Character” that has a “Race” value of “Hobbit”, we can select this condition with a really easy application of .loc.

pivoted.loc['Hobbit']

Boom! Now we have a table ordered by which Hobbit spoke the most across all films. We can also see how much each character spoke in each film. Some useful insights include how Bilbo didn’t speak at all in “The Two Towers”, but still has more total dialogue than Pippin and Merry, who spoke in all three films. What’s great about having this table is we can do the same kind of analysis for “Elf”, “Men”, “Ainur”, and the rest of the races in The Lord of the Rings, all by substituting one argument in the above .loc function. I hope you found this introduction to hierarchical indexes in Pandas useful! I’ve found that having more than one level in a DataFrame index means that I can quickly group my data to specific levels, without having to write multiple groupby() functions. As always, check out the official documentation for a more in-depth look at how you can use the MultiIndex feature in your analysis. I like getting involved in data collection as well as analysis, which is why I follow a quick and easy method of gathering sample data discussed in that article. If you want to grab some of your own test data online to try this code, you can check out this piece on getting tables from a website with Pandas. Good luck with your multi-level indexing adventures!
Are Deeper Networks Better? —A Case Study | by Timothy Tan | Towards Data Science
Have you ever wondered about the effects of building deeper networks and what the changes entail? I know I have. I’m talking about the added complexity in terms of additional layers to your model and scrutinizing what downstream changes occur. Ceteris paribus:

What effects (if any) does an additional layer have on your training/validation loss curve?
To what extent does an additional layer affect your downstream prediction outcomes?
What is the relationship between the number of input features versus the complexity of a model?
Are there any trade-offs involved here?

In this post, I will be sharing a series of experiments and their respective outcomes to answer these questions. Let’s get down to it! Before we begin, I’ll need to explain the experimental setup involved. The setup comprises 3 parts:

The training data set used
The model architecture built for the experiment
The experiment methodology

The data set used for the experiments comes from a Kaggle competition: tweet sentiment extraction. The goal? To extract out phrases that support the given sentiment. Here’s a glimpse of the training data: I will not be going in-depth into the data preparation methodology as I will be writing about the Kaggle experience in another post. In essence, we cleaned the text up following the Twitter GloVe methodology and created our own labels for the downstream Named Entity Recognition (NER) task. Once we had a modelling-ready data set, I started to build a baseline seq2seq model.
The model architecture consists of 3 main segments:

Pre-attention Bi-LSTM
Multi-head Self-attention Encoder Stack(s)
Post-attention LSTM

The model starts with a pre-attention Bi-LSTM whose outputs are fed into encoder stack(s). The outputs from these encoder stacks are then fed into a post-attention LSTM to generate sequential predictions for the NER task at hand.

Why encoder stacks? The inspiration for using encoder stacks came from the paper “Attention Is All You Need” by Vaswani et al. The paper details what a transformer model looks like. In short, the transformer model consists of 6 encoder stacks and 6 decoder stacks. I won’t go into the details of the paper, but if you want to know the architecture that underlies the famed Bidirectional Encoder Representations from Transformers (BERT), I recommend the read. Anyway, here’s what an encoder stack looks like:

As you can see, each encoder stack consists of two sub-layers. The first is a multi-head self-attention mechanism and the second, a fully connected feed-forward network. Around each sub-layer, a residual connection followed by a layer normalization is employed. This makes sense because the residual connection sums up the pre-attention Bi-LSTM outputs with the context learned by the multi-head self-attention layer.

So what does our resulting model architecture look like? If I were to draw it out, it would look like this:

As a baseline, the architecture starts with only a single encoder stack.
Here were the layer parameters used across experiments, which were kept constant:

Pre-attention Bi-LSTM:

preatn_forward = {type="recurrent", n=64, act='tanh', init='xavier', rnnType='LSTM',
                  outputType='samelength', reversed=False, dropout=0.3}
preatn_backward = {type="recurrent", n=64, act='tanh', init='xavier', rnnType='LSTM',
                   outputType='samelength', reversed=True, dropout=0.3}
concatenation_layer = {type="concat"}

Encoder Stack:

attention_layer = {type="mhattention", n=128, act='gelu', init='xavier', nAttnHeads=16}
residual_layer1 = {type="residual"}
norm_layer1 = {type="layernorm"}
fully_connected_layer = {type="fc", n=128, act='relu', init='xavier', dropout=0.3}
residual_layer2 = {type="residual"}
norm_layer2 = {type="layernorm"}

Post-attention LSTM:

postatn_forward = {type="recurrent", n=128, act='tanh', init='xavier', rnnType='LSTM',
                   reversed=False, dropout=0.3}

The methodology used was as follows. Across experiment cases, I needed a way to:

Simulate the model complexity of deep networks
Simulate different input features
Use a standard evaluation metric to measure prediction outcomes

Simulate Model Complexity

To systematically add model complexity, I added an encoder stack to the model architecture while keeping all other hyper-parameters constant for every experiment. For example, a single experiment set for Twitter GloVe 25D would have 4 distinct architectures. Notice how the number of encoder stacks in each experiment increases systematically.
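The residual + layer-norm pattern inside each encoder stack can be sketched conceptually in plain NumPy. This is a single-head, weight-free illustration of the two sub-layers, not the author's actual (multi-head, trained) implementation; the feed-forward sub-layer is reduced to a simple ReLU for brevity:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each position's feature vector to zero mean, unit variance
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def self_attention(x):
    # Single-head scaled dot-product self-attention (projection weights omitted)
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ x

def encoder_stack(x):
    # Sub-layer 1: attention, then residual connection + layer norm
    x = layer_norm(x + self_attention(x))
    # Sub-layer 2: position-wise feed-forward (a bare ReLU here), residual + norm
    ff = np.maximum(x, 0.0)
    return layer_norm(x + ff)

seq = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8 features
out = encoder_stack(seq)
print(out.shape)  # -> (4, 8)
```

Because input and output shapes match, stacking more encoder blocks (the knob varied across the experiments below) is just repeated application of encoder_stack.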
The hyper-parameters used were also kept constant throughout:

Optimizer hyper-parameters:

miniBatchSize = 32
stagnation = 10
algorithm = {method='adam', gamma=0.9, beta1=0.9, beta2=0.999, learningRate=0.001,
             clipGradMax=100, clipGradMin=-100, lrPolicy='step', stepSize=10}
regL2 = 0.0007
maxEpochs = 10
dropout = 0.3

Simulate Input Features

To test if feature-rich input data has any effect on outcomes, I ran the model through Twitter GloVe 25D, 50D, 100D and 200D. This would mean that for each GloVe dimension, I ran 4 sets of experiments and logged their results respectively.

Baseline Model
Experiment 1: Baseline + 1 additional Encoder Stack
Experiment 2: Baseline + 2 additional Encoder Stacks
Experiment 3: Baseline + 3 additional Encoder Stacks

Therefore, there were 16 (4 architectures * 4 different GloVe dimension sets) different model configurations in total that were run.

Standardized Evaluation Metric

Lastly, I decided to use F1-Score as the standardized metric of comparison as it is the harmonic mean between Precision and Recall. If you need a refresher on Precision, Recall & F1-Score, I found a good video on it. For this section, I will first share the experiment results within each Twitter GloVe dimension. Then, I will share the results of each experiment across Twitter GloVe dimensions. From here on out, I will display a series of result outputs as images and will add comments as I deem fit. For every result output, take note of the Precision, Recall, F1 scores and what the loss curves look like. Remember, the only change across experiments is an additional encoder stack. Do take some time to interpret the results yourself as you go along. :)

For this result set, it is a little difficult to see the effects of a deeper network across the experiments. I do have a sense of what is occurring but will hold off my comments till the pattern is clearer.
Note: It gets clearer in the next result set :)

Having said that, there is an interesting pattern in the F1-Scores of the Positive Tag (P). As the model gets more complex, the F1-Scores for the Positive Tag (P) seem to be slowly increasing at every experiment. I suspect it has to do with the deeper network learning more complex relationships. For example, co-references, long-term dependencies and the like. Now let’s increase the number of features from 25D to 50D to get a better picture of what is happening as the model gets deeper.

This is definitely a better result set to interpret compared to the previous one. First off, notice how the validation loss starts to diverge from the training loss curve as the model gets deeper? Why do you think this is the case? Similar to the previous result set (albeit very subtle), this has something to do with the bias-variance trade-off in machine learning. In short, as model complexity increases, the model starts to learn better on the training data, causing bias to decrease. This is evident from the results by the training loss (blue line) decreasing ever so slightly across experiments.

Note: Training bigger networks is a standard mitigation effort to lower bias. These results clearly show how bigger networks reduce bias.

However... As the model begins to overfit the training data, it becomes less generalizable, causing the variance to increase. Note the behavior of the validation loss curves diverging across experiments. To get a more concrete example of the trade-off, just take a look at Experiments 2 and 3. Note how the training loss falls from 0.1618 to 0.1595 while the validation error starts to spike, i.e. bias decreasing, variance increasing.

Now on to the next finding! The next finding stems from the general F1-Scores across the experiments. Compared to the previous result set, all the F1-Scores are better. F1-Scores for the Positive Tag (P) average about ~32% while the Negative Tag (N) averages ~27%.
Contrasting this with the previous results, the maximum F1-Score for the Positive Tag (P) was ~28% and for the Negative Tag (N) only ~18%. Remember that the only change here was that a richer input set was used, i.e. Twitter GloVe 25D swapped to 50D. Everything else remained constant. This is telling of the relationship between the number of input features and prediction outcomes. In this case, using a richer set of features improved the model's overall performance. So what would we see by increasing features from 50D to 100D?

This result set actually validates our previous point about the relationship between input features and prediction outcomes. But... not in the way you'd expect. In this case, the model is slightly better at predicting the Positive Tag (P); F1-Scores across experiments are higher than the previous results. However, it looks like the model is not doing so well at predicting the Negative Tag (N); F1-Scores across experiments dropped compared to the previous results.

It is possible that the added features, 50D to 100D, might be allowing the model to learn Positive Tags (P) better but have added noise when learning Negative Tags (N) instead. This would be one of the reasons why I think we see a fall in F1-Scores across experiments compared to the previous result. The other reason (which might be more plausible) is that the F1-Scores are fluctuating due to the model losing its generalizability. Notice how erratic the validation loss curves are. To get a better result, we would probably need to tune other hyper-parameters before looking at the F1-Scores again. However, the purpose of this experiment was only to change the model architecture by adding more layers and note the downstream changes, so I will not be doing any hyper-parameter tuning here.

The next finding is pretty obvious. Hint: it's got to do with the validation loss curves. Notice how the validation loss is typically below the training loss across experiments?
This is an uncommon pattern, but it does happen. On this point, the next logical question is... why? Why is the validation loss below the training loss? A couple of reasons:

1. It could have something to do with how validation loss is calculated during the training process. For each epoch, validation loss is calculated after training loss, i.e. the validation loss is calculated after the model has updated the weights.

2. I used dropout layers during training. Dropout layers have a regularization effect, meaning they penalize the weights so as to make predictions more generalizable. During the validation calculation, dropout is disabled.

Now that we are done with 100D, let's see what happens with 200D!

Food for thought: can having too many features be a problem? Let's find out, shall we?

Apparently it can! It looks like having 200D caused the models to overfit tremendously across the experiments. Let's take a step back and think for a moment. What happened here? The only change was that a more feature-rich input set was used, i.e. 100D to 200D features. Shouldn't having more features give us better results?

Not exactly. Having an extended set of features might have inevitably introduced more noise to the model. This is evident from the over-fitting we are seeing. Here is what I think is happening. Every input feature can be mapped to other concepts like "man", "woman", "crown", "royal" etc.
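Before going further, this feature-mapping intuition can be made concrete with a toy sketch. The vectors and feature names below are invented for illustration; real GloVe dimensions are learned automatically and carry no human-readable labels:

```python
import math

# Hypothetical 4-feature embeddings: [man, woman, royalty, fruit]
embeddings = {
    "king":  [0.98, 0.01, 0.95, 0.00],
    "queen": [0.02, 0.97, 0.94, 0.01],
    "apple": [0.00, 0.01, 0.00, 0.99],
}

def cosine(u, v):
    """Cosine similarity: how closely two embedding directions align."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "king" and "queen" share the royalty feature, so they sit close together;
# "apple" scores high only on a feature the royals do not have.
print(round(cosine(embeddings["king"], embeddings["queen"]), 2))
print(round(cosine(embeddings["king"], embeddings["apple"]), 2))
```

Real 200D vectors behave the same way, just with two hundred of these axes, which is where the extra room for noisy, irrelevant axes comes from.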
The learned embeddings for each feature (those numbers) reflect whether the input term meets the criteria of those features or not. This is usually in a range of [-1, 1]. For example, for the input term "King": Is a king a man? Yes, so the learned embedding has a score of 0.98 for the feature "man". Is a king a woman? No, so the learned embedding has a score of 0.01 for the feature "woman". So on and so forth.

I think at 200D there were too many features, causing too much noise. For example, features like "red" or "green" might be just noise to the model. So what can we take away from this? Adding more input features does not guarantee better models or better results. Sometimes, having more features means your model starts to learn the noise instead.

I've also noticed that across all 16 experiments, the model always does poorly on the Negative Tags (N). Take a look at the F1-Scores across 25D, 50D, 100D and 200D. The F1-Score for the Negative Tag (N) is always below that of the Positive Tag (P). Why would that be so? I reckon the Positive Tag (P) training examples might be easier to learn compared to the Negative Tag (N) training examples. Perhaps learning negation or certain negative phrases is proving hard for the model. This would explain why (N) does not seem to be getting much of a performance boost.

Alright! We have come to the end of the results within each input dimension. For the final segment of the post, I want to give you a glimpse of how the results look across experiments. This will give you a different view of the results. Just take note of how the curves change as the input features increase. Remember that everything else is constant; the only change here is the input features. I will let you view the results on your own without any commentary.

Before I conclude this post, I thought I'd summarize some takeaways from the experiment.

The bias-variance trade-off very much still exists in deep learning.
It can be mitigated if you actively reduce the bias and variance by applying the mitigation efforts mentioned in the linked post.

Deeper networks reduce bias in general, but if everything else is held constant, variance increases. It would be best to iterate through and apply regularization or add more training data the deeper you build your network.

Sometimes the validation loss can appear below the training loss. Don't be taken aback by that result.

Adding more input features does not guarantee a better model or better results. 200D introduced more noise into the training set, resulting in a worse outcome compared to the 100D results.

So after reviewing the results of this experiment... are deeper networks better? It depends. There seems to be an optimal depth, which ties back to the amount of training data you have to begin with. In the case of this Twitter dataset with 27,481 rows, it looks like the model architecture for Experiment 1 with 100D input features is the optimum. For easy reference, Experiment 1's architecture looks like this with 2 encoder stacks.

Well, that's it then! It definitely took a while to run all the experiments to write this post. Hope you found this post insightful! If it was, let me know so I'd know what types of posts to continue writing.
:) Till the next post... Sayonara! LinkedIn Profile: Timothy Tan
Difference between structured and unstructured programming - GeeksforGeeks
13 Jul, 2021

Structured Programming

Structured Programming is a type of programming that generally converts large or complex programs into more manageable, small pieces of code. These small pieces of code are usually known as functions, modules or sub-programs of the large complex program. It is known as modular programming and minimizes the chance of one function affecting another. Below is a program to illustrate structured programming:

C

// C program to demonstrate
// structured programming
#include <stdio.h>

// Function for addition
int sum(int a, int b)
{
    return a + b;
}

// Function for subtraction
int sub(int a, int b)
{
    return a - b;
}

// Driver code
int main()
{
    // Variable initialisation
    int a = 10, b = 5;
    int add, minus;

    // Function calls
    add = sum(a, b);
    minus = sub(a, b);

    printf("Addition = %d\n", add);
    printf("Subtraction = %d\n", minus);

    return 0;
}

Addition = 15
Subtraction = 5

Unstructured Programming:

Unstructured Programming is a type of programming that generally executes in sequential order, i.e. the program does not jump between lines of code and each line gets executed sequentially. It is also known as non-structured programming and is capable of creating Turing-complete algorithms. Below is a program to illustrate unstructured programming:

C

// C program to demonstrate
// unstructured programming
#include <stdio.h>

// Driver code
int main()
{
    // Variable initialisation
    int a = 10, b = 5;
    int add, minus;

    // Operations performed
    add = a + b;
    minus = a - b;

    printf("Addition = %d\n", add);
    printf("Subtraction = %d\n", minus);

    return 0;
}

Addition = 15
Subtraction = 5

Tabular difference between structured and unstructured programming:

Structured Programming | Unstructured Programming
How to programmatically take a screenshot in android?
This example demonstrates how to programmatically take a screenshot in Android.

Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
   android:id="@+id/parent"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   tools:context=".MainActivity"
   android:background="#33FFFF00"
   android:orientation="vertical">
   <ImageView
      android:id="@+id/screenShot"
      android:layout_width="300dp"
      android:layout_height="300dp" />
   <TextView
      android:id="@+id/text"
      android:textSize="18sp"
      android:layout_gravity="center"
      android:text="Click"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content" />
</LinearLayout>

In the above code we have taken two views: an ImageView and a TextView. When the user clicks on the TextView, a screenshot is taken and set on the ImageView.
Step 3 − Add the following code to src/MainActivity.java

package com.example.andy.myapplication;

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.ImageView;
import android.widget.LinearLayout;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {
   int view = R.layout.activity_main;
   ImageView screenShot;
   TextView textView;

   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(view);
      final LinearLayout parent = findViewById(R.id.parent);
      screenShot = findViewById(R.id.screenShot);
      textView = findViewById(R.id.text);
      textView.setOnClickListener(new View.OnClickListener() {
         @Override
         public void onClick(View v) {
            textView.setText("It is screen shot text");
            screenShot(parent);
         }
      });
   }

   public void screenShot(View view) {
      Bitmap bitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(),
            Bitmap.Config.ARGB_8888);
      Canvas canvas = new Canvas(bitmap);
      view.draw(canvas);
      screenShot.setImageBitmap(bitmap);
      textView.setText("click");
   }
}

In the above code we have written screenShot(view); in this method we have passed the parent view to take a screenshot from top to bottom. To take the screenshot, use the following code −

Bitmap bitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
view.draw(canvas);
screenShot.setImageBitmap(bitmap);

Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar.
Select your mobile device as an option and then check your mobile device which will display your default screen − Now click on text view, it will take screen shot and append to image view as shown below - Click here to download the project code
Java Math abs() Method
Return the absolute (positive) value of different numbers:

System.out.println(Math.abs(-4.7));
System.out.println(Math.abs(4.7));
System.out.println(Math.abs(3));

The abs() method returns the absolute (positive) value of a number.

The method has one of the following signatures:

public static double abs(double number)
public static float abs(float number)
public static int abs(int number)
public static long abs(long number)
Difference between Applets and Servlets in Java.
In Java, both applets and servlets are programs or applications that run in a Java environment. The main difference between them is that their processing is done in different environments. The following are the important differences between applets and servlets.

AppletDemo.java

import java.applet.Applet;
import java.awt.Graphics;

public class AppletDemo extends Applet {
   // Overriding paint() method
   @Override
   public void paint(Graphics g) {
      g.drawString("AppletDemo", 20, 20);
   }
}

AppletDemo

ServletDemo.java

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class ServletDemo extends HttpServlet {
   private String message;

   public void init() throws ServletException {
      // Do required initialization
      message = "Servlet Demo";
   }

   public void doGet(HttpServletRequest request, HttpServletResponse response)
         throws ServletException, IOException {
      response.setContentType("text/html");
      PrintWriter out = response.getWriter();
      out.println(message);
   }
}

Servlet Demo
How to add unique Id to each record in your local/custom database in Node.js ? - GeeksforGeeks
18 Jul, 2020

The custom database signifies the local database in your file system. There are two types of databases: 'SQL' and 'NoSQL'. In a SQL database, data is stored in tables, while in a NoSQL database, data is stored independently with some particular way to identify each record. We can also create our own database or datastore locally in a NoSQL manner. To store information in a NoSQL manner, we have to add a unique Id to each record so that each record can be identified independently.

There are some steps involved in creating the local database and adding records with a unique Id to it. These steps are as follows:

1. Create a package.json file in the root of the project directory. Command to create the package.json file:

npm init -y

2. Install the express and body-parser packages. Command to install the packages:

npm install express body-parser

3. Create a GET route to show the form (HTML form to submit the information to the database).
4. Create the subsequent POST route to handle the form submission request.
5. Set the server to run on a specific port (developer's port - 3000).
6. Create a repository file and add all the logic related to creating the local database.
7. Create a method in the repository file to add a unique alphanumeric id to each record.
8. Create a method in the repository file to add each unique record into the database in JSON format.

This example illustrates how to create a local database and add records to it with a unique id.
Filename: index.js

const express = require('express')
const bodyParser = require('body-parser')
const repo = require('./repository')

const app = express()
const port = process.env.PORT || 3000

// The body-parser middleware
// to parse form data
app.use(bodyParser.urlencoded({ extended: true }))

// GET route to display the HTML form
app.get('/signup', (req, res) => {
  res.send(`
    <div>
      <form method='POST'>
        <div>
          <div>
            <label id='email'>Username</label>
          </div>
          <input type='text' name='email' placeholder='Email' for='email'>
        </div>
        <div>
          <div>
            <label id='password'>Password</label>
          </div>
          <input type='password' name='password' placeholder='Password' for='password'>
        </div>
        <div>
          <button>Sign Up</button>
        </div>
      </form>
    </div>
  `)
});

// POST route to handle form submission
// logic and add data to the database
app.post('/signup', async (req, res) => {
  const { email, password } = req.body
  const addedRecord = await repo.createNewRecord({ email, password })
  console.log(`Added Record : ${JSON.stringify(addedRecord, null, 4)}`)
  res.send("Information added to the database successfully.")
})

// Server setup
app.listen(port, () => {
  console.log(`Server start on port ${port}`)
})

Filename: repository.js

// Importing node.js file system, crypto module
const fs = require('fs')
const crypto = require('crypto')

class Repository {
  constructor(filename) {
    // Filename where data is going to be stored
    if (!filename) {
      throw new Error('Filename is required to create a datastore!')
    }
    this.filename = filename

    try {
      fs.accessSync(this.filename)
    } catch (err) {
      // If the file does not exist,
      // it is created with an empty array
      fs.writeFileSync(this.filename, '[]')
    }
  }

  // Logic to add data or record
  async createNewRecord(attributes) {
    // Assign unique Id to each record
    attributes.id = this.generateUniqueID()

    // Read file contents of the datastore
    const jsonRecords = await fs.promises.readFile(this.filename, {
      encoding: 'utf8'
    })

    // Parsing JSON records into JavaScript
    // object type records
    const objRecord = JSON.parse(jsonRecords)

    // Adding new record
    objRecord.push(attributes)

    // Writing all records back to the file
    await fs.promises.writeFile(
      this.filename,
      JSON.stringify(objRecord, null, 2)
    )

    return attributes;
  }

  // Method to generate unique ID
  generateUniqueID() {
    return crypto.randomBytes(8).toString('hex')
  }
}

// The 'datastore.json' file is created at runtime
// and all the information provided via the signup
// form is stored in this file in JSON format.
module.exports = new Repository('datastore.json')

Package.json file:

Step to run this program: run the index.js file using the following command:

node index.js

Form to submit the responses:

Note: Here three responses are submitted one after the other, and all the responses are stored in the datastore.json file.

Redirected page after submitting the request:

Output:

Database:

Note: On the first run of the program, the database (datastore.json) file does not exist in the project directory; it is created dynamically after running the program and stores the submitted response. After that, all the submitted responses are appended to the database one by one.
How can we return multiple values from a function in C#?
In C#, multiple values can be returned using the approaches below:

- Reference parameters
- Output parameters
- Returning an array
- Returning a tuple

Reference parameters:

class Program {
   static int ReturnMultipleValuesUsingRef(int firstNumber, ref int secondNumber) {
      secondNumber = 20;
      return firstNumber;
   }
   static void Main() {
      int a = 10;
      int refValue = 0;
      var res = ReturnMultipleValuesUsingRef(a, ref refValue);
      System.Console.WriteLine($" Ref Value {refValue}");
      System.Console.WriteLine($" Function Return Value {res}");
      Console.ReadLine();
   }
}

Ref Value 20
Function Return Value 10

Output parameters:

class Program {
   static int ReturnMultipleValuesUsingOut(int firstNumber, out int secondNumber) {
      secondNumber = 20;
      return firstNumber;
   }
   static void Main() {
      int a = 10;
      int outValue = 0;
      var res = ReturnMultipleValuesUsingOut(a, out outValue);
      System.Console.WriteLine($" Out Value {outValue}");
      System.Console.WriteLine($" Function Return Value {res}");
      Console.ReadLine();
   }
}

Out Value 20
Function Return Value 10

Returning an array:

class Program {
   static int[] ReturnArrays() {
      int[] arrays = new int[2] { 1, 2 };
      return arrays;
   }
   static void Main() {
      var res = ReturnArrays();
      System.Console.WriteLine($"{res[0]} {res[1]}");
      Console.ReadLine();
   }
}

1 2

Returning a tuple:

class Program {
   static Tuple<int, int> ReturnMultipleValuesUsingTuples() {
      return new Tuple<int, int>(10, 20);
   }
   static void Main() {
      var res = ReturnMultipleValuesUsingTuples();
      System.Console.WriteLine($"{res.Item1} {res.Item2}");
      Console.ReadLine();
   }
}

10 20
How to detect device is Android phone or Android tablet?
This example demonstrates how to detect whether the device is an Android phone or an Android tablet.

Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.

Step 2 − Add the following code to res/layout/activity_main.xml.

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   tools:context=".MainActivity">
   <TextView
      android:id="@+id/textView"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:layout_centerHorizontal="true"
      android:layout_marginTop="30dp"
      android:text="Detect device is Android phone or Android tablet"
      android:textAlignment="center"
      android:textSize="24sp"
      android:textStyle="bold" />
</RelativeLayout>

Step 3 − Add the following code to src/MainActivity.java

import androidx.appcompat.app.AppCompatActivity;
import android.content.Context;
import android.os.Bundle;
import android.telephony.TelephonyManager;
import android.widget.Toast;
import java.util.Objects;

public class MainActivity extends AppCompatActivity {
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      TelephonyManager manager = (TelephonyManager)
            getApplicationContext().getSystemService(Context.TELEPHONY_SERVICE);
      if (Objects.requireNonNull(manager).getPhoneType() == TelephonyManager.PHONE_TYPE_NONE) {
         Toast.makeText(MainActivity.this, "Detected... You're using a Tablet",
               Toast.LENGTH_SHORT).show();
      } else {
         Toast.makeText(MainActivity.this, "Detected... You're using a Mobile Phone",
               Toast.LENGTH_SHORT).show();
      }
   }
}

Step 4 − Add the following code to androidManifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="app.com.sample">
   <application
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
      android:roundIcon="@mipmap/ic_launcher_round"
      android:supportsRtl="true"
      android:theme="@style/AppTheme">
      <activity android:name=".MainActivity">
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>
   </application>
</manifest>

Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen −

Click here to download the project code.
Program to convert Centimeter to Feet and Inches - GeeksforGeeks
04 Feb, 2022

In this article, we will learn how to convert a height given in centimeters to a height in feet and inches.

Examples:

Input : centimeter = 10
Output : inches = 3.94
         feet = 0.33

Input : centimeter = 85
Output : inches = 33.46
         feet = 2.79

We know that 1 inch is equal to 2.54 centimeters, so 1 centimeter is equal to 0.3937 inches. Therefore, n centimeters are equal to (n * 0.3937) inches. We also know that 1 foot is equal to 30.48 centimeters, so 1 centimeter is equal to 0.0328 feet. Therefore, n centimeters are equal to (n * 0.0328) feet.

C++

// C++ program to convert centimeters to feet and inches
#include <iostream>
using namespace std;

// Function to perform conversion
void conversion(int centimeter)
{
    double inches = 0.3937 * centimeter;
    double feet = 0.0328 * centimeter;

    // Printing the output
    cout << "Inches is: " << inches << "\n";
    cout << "Feet is: " << feet << "\n";
}

// Driver Code
int main()
{
    int centimeter = 10;
    conversion(centimeter);
}

Java

// Java program to convert
// centimeters to feet and inches
import java.io.*;

class GFG {
    // Function to perform conversion
    static double Conversion(int centi)
    {
        double inch = 0.3937 * centi;
        double feet = 0.0328 * centi;
        System.out.printf("Inches is: %.2f \n", inch);
        System.out.printf("Feet is: %.2f", feet);
        return 0;
    }

    // Driver Code
    public static void main(String args[])
    {
        int centi = 10;
        Conversion(centi);
    }
}

Python3

# Python program to convert centimeters to feet and
# inches. Function to perform conversion
def Conversion(centi):
    inch = 0.3937 * centi
    feet = 0.0328 * centi
    print("Inches is:", round(inch, 2))
    print("Feet is:", round(feet, 2))

# Driver Code
centi = 10
Conversion(centi)

C#

// C# program to convert
// centimeters to feet and inches
using System;

class GFG {
    // Function to perform conversion
    static double Conversion(int centi)
    {
        double inch = 0.3937 * centi;
        double feet = 0.0328 * centi;
        Console.WriteLine("Inches is: " + inch);
        Console.WriteLine("Feet is: " + feet);
        return 0;
    }

    // Driver Code
    public static void Main()
    {
        int centi = 10;
        Conversion(centi);
    }
}

PHP

<?php
// PHP program to convert
// centimeters to feet and inches

// Function to perform conversion
function Conversion($centi)
{
    $inch = 0.3937 * $centi;
    $feet = 0.0328 * $centi;
    echo("Inches is: " . $inch . "\n");
    echo("Feet is: " . $feet);
}

// Driver Code
$centi = 10;
Conversion($centi);
?>

Javascript

<script>
// Program to convert centimeters to feet and inches

// Function to perform conversion
function Conversion(centi)
{
    var inch = 0.3937 * centi;
    var feet = 0.0328 * centi;
    inch = inch.toFixed(2);
    feet = feet.toFixed(2);
    document.write("Inches is: " + inch + "<br>");
    document.write("Feet is: " + feet);
    return 0;
}

// Driver Code
centi = 10;
Conversion(centi);
</script>

Output:

Inches is: 3.94
Feet is: 0.33
Julia - Tuples
Similar to an array, a tuple is also an ordered set of elements. Tuples work in almost the same way as arrays, but there are the following important differences between them −

An array is represented by square brackets, whereas a tuple is represented by parentheses and commas.

Tuples are immutable.

We can create tuples as we do arrays, and most of the array functions can be used on tuples as well. Some examples are given below −

julia> tupl = (5, 10, 15, 20, 25, 30)
(5, 10, 15, 20, 25, 30)

julia> tupl
(5, 10, 15, 20, 25, 30)

julia> tupl[3:end]
(15, 20, 25, 30)

julia> tupl = ((1,2),(3,4))
((1, 2), (3, 4))

julia> tupl[1]
(1, 2)

julia> tupl[1][2]
2

We cannot change a tuple:

julia> tupl[2]=0
ERROR: MethodError: no method matching setindex!(::Tuple{Tuple{Int64,Int64},Tuple{Int64,Int64}}, ::Int64, ::Int64)
Stacktrace:
 [1] top-level scope at REPL[7]:1

A named tuple is simply a combination of a tuple and a dictionary because −

A named tuple is ordered and immutable like a tuple, and

Like a dictionary, each element in a named tuple has a unique key which can be used to access it.

In the next section, let us see how we can create named tuples. You can create named tuples in Julia by −

Providing keys and values in separate tuples

Providing keys and values in a single tuple

Combining two existing named tuples

One way to create named tuples is by providing keys and values in separate tuples.
Example

julia> names_shape = (:corner1, :corner2)
(:corner1, :corner2)

julia> values_shape = ((100, 100), (200, 200))
((100, 100), (200, 200))

julia> shape_item2 = NamedTuple{names_shape}(values_shape)
(corner1 = (100, 100), corner2 = (200, 200))

We can access the elements by using the dot (.) syntax −

julia> shape_item2.corner1
(100, 100)

julia> shape_item2.corner2
(200, 200)

We can also create named tuples by providing keys and values in a single tuple.

Example

julia> shape_item = (corner1 = (1, 1), corner2 = (-1, -1), center = (0, 0))
(corner1 = (1, 1), corner2 = (-1, -1), center = (0, 0))

We can access the elements by using the dot (.) syntax −

julia> shape_item.corner1
(1, 1)

julia> shape_item.corner2
(-1, -1)

julia> shape_item.center
(0, 0)

julia> (shape_item.center, shape_item.corner2)
((0, 0), (-1, -1))

We can also access all the values as with ordinary tuples −

julia> c1, c2, center = shape_item
(corner1 = (1, 1), corner2 = (-1, -1), center = (0, 0))

julia> c1
(1, 1)

Julia provides us a way to make new named tuples by combining two named tuples together, as follows −

Example

julia> colors_shape = (top = "red", bottom = "green")
(top = "red", bottom = "green")

julia> shape_item = (corner1 = (1, 1), corner2 = (-1, -1), center = (0, 0))
(corner1 = (1, 1), corner2 = (-1, -1), center = (0, 0))

julia> merge(shape_item, colors_shape)
(corner1 = (1, 1), corner2 = (-1, -1), center = (0, 0), top = "red", bottom = "green")

If you want to pass a group of keyword arguments to a function, a named tuple is a convenient way to do so in Julia.
Following is an example of a function that accepts three keyword arguments −

julia> function ABC(x, y, z; a=10, b=20, c=30)
           println("x = $x, y = $y, z = $z; a = $a, b = $b, c = $c")
       end
ABC (generic function with 1 method)

It is also possible to define a named tuple which contains the names as well as the values for one or more keywords −

julia> options = (b = 200, c = 300)
(b = 200, c = 300)

In order to pass the named tuple to the function, we need to use ; while calling the function −

julia> ABC(1, 2, 3; options...)
x = 1, y = 2, z = 3; a = 10, b = 200, c = 300

The value for a keyword is taken from whichever assignment appears later in the call, so an earlier value can be overridden as follows −

julia> ABC(1, 2, 3; b = 1000_000, options...)
x = 1, y = 2, z = 3; a = 10, b = 200, c = 300

julia> ABC(1, 2, 3; options..., b = 1000_000)
x = 1, y = 2, z = 3; a = 10, b = 1000000, c = 300
Building A Custom Model in Scikit-Learn | by Tim Book | Towards Data Science
Scikit-Learn is incredible. It allows its users to fit almost any machine learning model you can think of, plus many you may never have even heard of! All in just two lines of code! However, it doesn’t have everything. For example, ordinal regression is nowhere to be found. And its deep learning capabilities are... lacking. But who cares? You can find that stuff elsewhere, right? True. But! Scikit-Learn isn’t just modeling. There are also some really great tools to help you streamline your modeling process, such as GridSearchCV and Pipeline. These tools are invaluable, but they only work with Scikit-Learn models. Turns out, if Scikit-Learn and our Google overlords don’t give it to us directly, we can make our own custom Scikit-Learn-compliant models! And it’s easier than you think!

In this post, I’m going to build something that is conspicuously missing from Scikit-Learn: the ability to use k-means clustering to do transfer learning in a Pipeline. That is, feeding the results of clustering into a supervised learning model in order to find the optimal value of k.

This post is going to be a little technical. Specifically, I’m going to assume you have a working knowledge of object-oriented programming (OOP). That is, you know how and why to use the class Python keyword.

One of the best things about Scikit-Learn is its incredible consistency. Fitting one type of model is nominally the same as fitting any other type of model. That is, modeling in Scikit-Learn is as easy as:

model = MyModel(parameters)
model.fit(X, y)

And that’s it! You can now analyze your model, probably with the help of the model’s .predict() and .score() methods. In fact, there are 5 methods every Scikit-Learn estimator is guaranteed to have:

.fit()
.predict()
.score()
.set_params()
.get_params()

To build our own model, we need only construct a class that has the above 5 methods and implements them in “the usual way.” That sounds like a lot of work. Luckily, Scikit-Learn does the hard work for us.
In order to build our own model, we can inherit from a base class built into Scikit-Learn. In OOP, if we specify that one class inherits from another, the “subclass” then gains all the methods of the “superclass.” The syntax of inheritance looks like this:

class Car(Vehicle):
    # stuff...

A car is a type of vehicle. The Car class will include every method of the Vehicle class, plus more that we can define in the Car class definition. In order to comply with Scikit-Learn, our model needs to inherit from some mixin. A mixin is just a class that was never intended to work on its own; instead, it contains a lot of methods that you can add to your current class via inheritance. Scikit-Learn gives us one for each general type of model: RegressorMixin, ClassifierMixin, ClusterMixin, TransformerMixin, and several others we don’t need to worry about. Everything we want to build ourselves, we make by simply overriding what we inherit!

This has been a mouthful with no examples to show for it. First, a simple example. After that, the motivating example using clustering.

The null model, sometimes called the “baseline” model, is the model where you have no information besides random guessing. For example, the null model for a regression problem would be just taking the mean y of your training data and using that as every prediction. For classification, it’s just taking the majority class for every prediction. For example, if you had to predict whether or not someone was going to win the lottery, the null model would dictate that you always lose, since that is the most likely outcome. The null model is useful for telling how well your current model is doing. After all, if your model is good, it should beat the baseline. The null model is not built into Scikit-Learn, but it’s easy for us to implement!
import numpy as np
from sklearn.base import RegressorMixin

class NullRegressor(RegressorMixin):
    def fit(self, X=None, y=None):
        # The prediction will always just be the mean of y
        self.y_bar_ = np.mean(y)

    def predict(self, X=None):
        # Give back the mean of y, in the same
        # length as the number of X observations
        return np.ones(X.shape[0]) * self.y_bar_

Easy enough! We’re now free to do the usual...

model = NullRegressor()
model.fit(X, y)
model.predict(X)

The important part is, our new NullRegressor is now compatible with all of Scikit-Learn’s built-in tools such as cross_val_score and GridSearchCV.

This example was borne out of curiosity, when a coworker asked me if I could “tune” a k-means model using GridSearchCV and Pipeline. I originally said no, since you would need to use the clusterer as a transformer to pass into your supervised model, which Scikit-Learn doesn’t allow for. But why let that stop us? We just learned how to hack Scikit-Learn to do whatever we want! To spell it out, essentially what I want is to create the following pipeline:

Pipeline([
    ("sc", StandardScaler()),
    ("km", KMeansSomehow()),
    ("lr", LogisticRegression())
])

Where KMeansSomehow() is a clusterer used as a Scikit-Learn transformer. That is, it appends the onehot-encoded cluster labels to the data matrix X to then pass into our model. In order for this to work, we’ll start by defining a class that inherits from TransformerMixin. We’ll then give it the appropriate .fit(), .transform(), and .fit_transform() methods. But to start, the initialization:

from sklearn.base import TransformerMixin
from sklearn.cluster import KMeans

class KMeansTransformer(TransformerMixin):
    def __init__(self, *args, **kwargs):
        self.model = KMeans(*args, **kwargs)

The purpose of self.model is to contain the underlying cluster model. But what are *args and **kwargs, you ask? They’re a shortcut for lazy programmers. They essentially capture all other arguments you pass into __init__() and pass them right along to KMeans().
This is essentially me saying “Whatever I pass into KMeansTransformer will also pass into KMeans, and I’m too lazy to figure out what those arguments might be in the future.” Next, we need to give it the appropriate fitting methods:

from sklearn.preprocessing import OneHotEncoder

class KMeansTransformer(TransformerMixin):
    def __init__(self, *args, **kwargs):
        self.model = KMeans(*args, **kwargs)

    def fit(self, X):
        self.model.fit(X)
        return self

    def transform(self, X):
        # Need to reshape into a column vector in order to use
        # the onehot encoder.
        cl = self.model.predict(X).reshape(-1, 1)
        self.oh = OneHotEncoder(
            categories="auto",
            sparse=False,
            drop="first"
        )
        cl_matrix = self.oh.fit_transform(cl)
        # Append the cluster matrix to the X we were given, so the
        # transformer also works on unseen observations
        return np.hstack([X, cl_matrix])

    def fit_transform(self, X, y=None):
        self.fit(X)
        return self.transform(X)

And that should be it! We can now use this KMeansTransformer just like a built-in Scikit-Learn transformer. To top it off, try this example out:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs

X, y = make_blobs(
    n_samples=100,
    n_features=2,
    centers=3
)

pipe = Pipeline([
    ("sc", StandardScaler()),
    ("km", KMeansTransformer()),
    ("lr", LogisticRegression(penalty="none", solver="lbfgs"))
])

pipe.fit(X, y)
pipe.score(X, y)
# ==> 1.0

In less artificial examples, you might also want to use GridSearchCV to find the optimal number of clusters to pass into your logistic regression (or whatever model you have).

Now you should understand how to build your own custom machine learning models within the framework of Scikit-Learn, which is currently the most popular and (in many cases) most powerful ML library out there. Is this blog post pedantic? Sure. Will you ever need this? No, you probably won’t ever need this, but that’s not the point. This technique is used to build something. It takes a builder to recognize the need to build a more creative solution.
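To sketch that GridSearchCV idea: tuning a step requires GridSearchCV to clone it, which in turn requires working get_params()/set_params() (inherited from BaseEstimator) and explicit constructor parameters, so the *args/**kwargs shortcut has to go. The variant below is my own, not from the original post; the class name and parameter grid are invented, and the one-hot encoding is done with np.eye to keep it version-independent:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_blobs

class TunableKMeansTransformer(BaseEstimator, TransformerMixin):
    # Explicit constructor parameters (no *args/**kwargs) so that the
    # inherited get_params/set_params work -- GridSearchCV needs them
    # to clone this step and set "km__n_clusters".
    def __init__(self, n_clusters=8):
        self.n_clusters = n_clusters

    def fit(self, X, y=None):
        self.model_ = KMeans(n_clusters=self.n_clusters, n_init=10)
        self.model_.fit(X)
        return self

    def transform(self, X):
        # One-hot encode the predicted cluster (dropping the first
        # column, as in the article) and append it to X.
        cl = self.model_.predict(X)
        onehot = np.eye(self.n_clusters)[cl][:, 1:]
        return np.hstack([X, onehot])

X, y = make_blobs(n_samples=200, n_features=2, centers=3, random_state=0)
pipe = Pipeline([
    ("sc", StandardScaler()),
    ("km", TunableKMeansTransformer()),
    ("lr", LogisticRegression(max_iter=1000)),
])
search = GridSearchCV(pipe, {"km__n_clusters": [2, 3, 4, 5]}, cv=3)
search.fit(X, y)
print(search.best_params_)
```

The same pattern extends to tuning the clusterer and the supervised model jointly, since both live in the same parameter grid.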
Plus, as a wise person once put it: You are never less for knowing more. I hope you appreciate adding this tool to your tool belt. Master it, and you may just achieve true Machine Learning Ascension.
JavaScript for...in loop
The for...in loop is used to loop through an object's properties. As we have not discussed Objects yet, you may not feel comfortable with this loop. But once you understand how objects behave in JavaScript, you will find this loop very useful.

for (variablename in object) {
   statement or block to execute
}

In each iteration, one property from object is assigned to variablename and this loop continues till all the properties of the object are exhausted.

Try the following example to implement a ‘for-in’ loop. It prints the web browser’s Navigator object.

<html>
   <body>
      <script type = "text/javascript">
         <!--
            var aProperty;
            document.write("Navigator Object Properties<br /> ");
            for (aProperty in navigator) {
               document.write(aProperty);
               document.write("<br />");
            }
            document.write ("Exiting from the loop!");
         //-->
      </script>
      <p>Set the variable to different object and then try...</p>
   </body>
</html>

Output:

Navigator Object Properties
serviceWorker
webkitPersistentStorage
webkitTemporaryStorage
geolocation
doNotTrack
onLine
languages
language
userAgent
product
platform
appVersion
appName
appCodeName
hardwareConcurrency
maxTouchPoints
vendorSub
vendor
productSub
cookieEnabled
mimeTypes
plugins
javaEnabled
getStorageUpdates
getGamepads
webkitGetUserMedia
vibrate
getBattery
sendBeacon
registerProtocolHandler
unregisterProtocolHandler
Exiting from the loop!
Set the variable to different object and then try...
No labels? No problem!. Machine learning without labels using... | by Josh Taylor | Towards Data Science
There is a certain irony that machine learning, a tool used for the automation of tasks and processes, often starts with the highly manual process of data labelling. The task of creating labels to teach computers new tasks is quickly becoming the blue collar job of the 21st century. It has created complex supply chains which often end in lower income countries such as Kenya, India and the Philippines. Whilst this new industry is creating thousands of jobs, workers can be underpaid and exploited.

The market for data labelling passed $500 million in 2018 and it will reach $1.2 billion by 2023. It accounts for 80 percent of the time spent building A.I. technology. (Source: “A.I. Is Learning From Humans. Many Humans.”, New York Times)

The ability, therefore, to automate the process for creating data labels is highly desirable from a cost, time and even ethical standpoint. In this post, we will explore how this can be achieved through a worked example in Python using the excellent Snorkel library. Snorkel is a really innovative concept: create a series of ‘messy’ label functions and combine these in an intelligent way to build labels for a data set. These labels can then be used to train a machine learning model in exactly the same way as in a standard machine learning workflow. Whilst it is outside the scope of this post, it is worth noting that the library also helps to facilitate the process of augmenting training sets and monitoring key areas of a dataset to ensure a model is trained to handle these effectively.

Snorkel itself has been around since 2016 but has continued to evolve. It is now used by many of the big names in the industry (Google, IBM, Intel). Version 0.9 in 2019 brought with it a more sophisticated way of building a label model, as well as a suite of well documented tutorials covering all of the key areas of the software. Even if you have come across it before, with these updates it is worth a second look.
Before we get started with a worked example, it is worth considering when you should use this library over a traditional (manual) approach to creating labels. If the answer to all of the below is ‘Yes’ then it would be worth considering Snorkel:

You have a data set with no labels or an incomplete set of labels.

It will take significant time & effort to label the data set manually.

You have domain knowledge of the data (or can work closely with someone who has).

You can think of one or more simplistic functions which could be used to split the data into different classes (for example, by using a key word search, or setting a particular threshold on a value).

The process for using Snorkel is a simple one:

🏅 [Optional] create a small subset of ‘golden’ labels for items within the dataset (this is helpful for reviewing performance of the final model but is not essential; the alternative is ‘eyeballing’ results to understand how the model performs).

⌨ Write a series of ‘Label Functions’ which define the different classes across the training data.

🏗 Build a label model and apply this to the dataset to create a set of labels.

📈 Use these labels in your normal machine learning pipeline (i.e. use the labels produced to train a model).

This process is iterative and you will find yourself evaluating the results and re-thinking and refining the label functions to improve the output.
Let's take a real-life problem to show how Snorkel can be used in a machine learning pipeline. We will be trying to split out ‘frameworks’ from ‘contracts’ in an open source commercial dataset (from Contracts Finder, a UK transparency system which logs all Government contracts above £10k).

What is a framework? A framework can be thought of as a ‘parent agreement’. It is a way of settling the ‘Ts and Cs’ with one or more suppliers which then allows for contracts to be agreed without having to go through the paperwork all over again. The issue is, because there is a parent-child relationship between frameworks and contracts, this can lead to double counting when analysing the data. It is therefore important to be able to split out frameworks from contracts.

The data

The data in this example consists of a contract title, description and value. The below shows an example of what we have to work with:

We have a placeholder column called ‘framework’ which we will be using to add our labels. The naming convention we will use is:

1 = Framework
0 = Not Framework
-1 = Abstain (i.e. not sure!)

We will start by creating a series of label functions. These can essentially be any standard Python function and can be as simple (or as complex) as you need them to be. We will start with a simple keyword search on the data set. This example searches for the phrase “use by uk public sector bodies” as this is only likely to occur in the descriptions of frameworks. Snorkel makes this really simple; all you have to do is wrap a standard Python function with the decorator @labeling_function():

from snorkel.labeling import labeling_function

@labeling_function()
def ccs(x):
    return 1 if "use by uk public sector bodies" in x.desc.lower() else -1

Great, we have just created our first label function! We now build up a number of other functions which will help separate frameworks from contracts.
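For illustration, here are a few more hypothetical label functions of the kind you might build up for this dataset. They are shown as plain Python functions; in practice each would be wrapped with @labeling_function() exactly like ccs above. The keyword heuristics and the value threshold are invented, not from the original analysis:

```python
import re
from collections import namedtuple

FRAMEWORK, NOT_FRAMEWORK, ABSTAIN = 1, 0, -1

def has_framework_keyword(x):
    # Titles of framework agreements often contain the word itself
    return FRAMEWORK if "framework" in x.title.lower() else ABSTAIN

def mentions_lots(x):
    # Frameworks are frequently divided into numbered "lots" (assumed heuristic)
    return FRAMEWORK if re.search(r"\blot\s+\d", x.desc.lower()) else ABSTAIN

def small_value(x):
    # Very low-value notices are unlikely to be frameworks (assumed threshold)
    return NOT_FRAMEWORK if x.value < 50_000 else ABSTAIN

# Quick check on a made-up row with the title/desc/value fields
Row = namedtuple("Row", ["title", "desc", "value"])
example = Row("Office Supplies Framework", "Divided into Lot 1 and Lot 2", 2_000_000)
print(has_framework_keyword(example), mentions_lots(example), small_value(example))  # -> 1 1 -1
```

Note that each function only votes on the cases it is confident about and abstains otherwise; the label model later resolves the votes.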
Tips on creating effective labelling functions

After working with the library, the following guidelines should be helpful in designing effective label functions:

🤔 Always have the end outcome in mind when designing label functions. In particular, think about precision and recall. This will help when deciding whether coverage or specificity is more important in the labels produced.

📑 Think through potential functions before coding them. Creating a list of these in plain English is helpful, as it allows you to prioritise the most effective label functions before coding them up.

🎯 Less is more. It is often tempting to take a scatter-gun approach to building functions; however, a smaller number of well-thought-out functions is always more effective than a larger number of less refined label functions.

⚗️ Always test any new function on the dataset by itself before adding it to your label functions. What results does it return? Are these what you were expecting?

Once you have built one or more labelling functions, you need to apply these to create a set of data points. This can be achieved using the PandasLFApplier, which allows you to build these data points directly from a Pandas dataframe.

from snorkel.labeling import PandasLFApplier

lfs = [ccs, Other_label_functions...]

applier = PandasLFApplier(lfs=lfs)
L_train = applier.apply(df=df_train)  # unlabelled dataset
L_dev = applier.apply(df=df_test)     # small labelled dev set

Note that in this example we have a train and a ‘dev’ set of data. The dev set is a small data set of manually labelled items (our ‘Gold’ labels). This makes it easier to quickly get a rough sense of how label functions are performing (the alternative is eyeballing the results produced).
Once you have applied your label functions, Snorkel provides easy access to label performance by using LFAnalysis:

LFAnalysis(L=L_dev, lfs=lfs).lf_summary(Y=Y_test)

As we have a dev set with labels, this will provide us with the following information (definitions straight from the Snorkel documentation):

Polarity: The set of unique labels this LF outputs (excluding abstains)

Coverage: The fraction of the dataset the LF labels

Overlaps: The fraction of the dataset where this LF and at least one other LF label

Conflicts: The fraction of the dataset where this LF and at least one other LF label and disagree

Correct: The number of data points this LF labels correctly (if gold labels are provided)

Incorrect: The number of data points this LF labels incorrectly (if gold labels are provided)

Empirical Accuracy: The empirical accuracy of this LF (if gold labels are provided)

Once we are happy with our label functions, we can bring these together in a probabilistic model. This is Snorkel’s ‘magic sauce’, which combines the outputs of each of the functions and either returns the probability of a data point having a particular label, or the labels themselves.

from snorkel.labeling import LabelModel

label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train=L_train, n_epochs=500, lr=0.001, log_freq=100, seed=123)

Filter the dataset

Depending on the coverage of our label functions, we will need to filter out some of our training data before using the labels to train a machine learning model. The reason is that some data points will not have been picked up by any of our label functions. We want to remove these items as they will add noise to the training data.
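The coverage and overlap statistics can be sanity-checked by hand from the label matrix (rows are data points, columns are label functions, -1 means abstain). A small NumPy sketch, independent of Snorkel and using an invented toy matrix:

```python
import numpy as np

# Toy label matrix: 4 data points x 2 label functions, -1 = abstain
# (made-up values, just to illustrate the definitions above)
L = np.array([
    [ 1, -1],
    [ 1,  1],
    [-1,  0],
    [-1, -1],
])

labeled = L != -1
coverage = labeled.mean(axis=0)          # fraction of points each LF labels
multi = labeled.sum(axis=1) > 1          # points labeled by more than one LF
overlaps = (labeled & multi[:, None]).mean(axis=0)
print(coverage)   # -> [0.5 0.5]
print(overlaps)   # -> [0.25 0.25]
```

Working these numbers out by hand on a few rows is a good way to build intuition for what lf_summary is reporting.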
Snorkel has an inbuilt function which makes this easy:

from snorkel.labeling import filter_unlabeled_dataframe

# For label probabilities (optional):
probs_train = label_model.predict_proba(L=L_train)

# For actual labels:
probs_train = label_model.predict(L=L_train, return_probs=False)

# Filtering the data:
df_train_filtered, probs_train_filtered = filter_unlabeled_dataframe(
    X=df_train, y=probs_train, L=L_train
)

You now have a training set of data with labels, without having performed any manual labelling on the dataset. You can use these as a starting point for a supervised machine learning task.

As highlighted earlier, the documentation on Snorkel is excellent and there are a number of in-depth tutorials which go into greater depth on the features available within the library: www.snorkel.org
A Beginners Introduction into MapReduce | by Dima Shulga | Towards Data Science
Many times, as Data Scientists, we have to deal with huge amounts of data. In those cases, many approaches won’t work or won’t be feasible. A massive amount of data is good, it’s very good, and we want to utilize as much as possible.

Here I want to introduce the MapReduce technique, which is a broad technique that is used to handle a huge amount of data. There are many implementations of MapReduce, including the famous Apache Hadoop. Here, I won’t talk about implementations. I’ll try to introduce the concept in the most intuitive way and present examples for both toy and real-life cases.

Let’s start with a straightforward task. You’re given a list of strings, and you need to return the longest string. It’s pretty easy to do in python:

def find_longest_string(list_of_strings):
    longest_string = None
    longest_string_len = 0
    for s in list_of_strings:
        if len(s) > longest_string_len:
            longest_string_len = len(s)
            longest_string = s
    return longest_string

We go over the strings one by one, compute the length and keep the longest string until we finish. For small lists it works pretty fast:

list_of_strings = ['abc', 'python', 'dima']

%time print(find_longest_string(list_of_strings))

OUTPUT:
python
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 75.8 μs

Even for lists with many more than 3 elements it works pretty well; here we try with 3000 elements:

large_list_of_strings = list_of_strings*1000

%time print(find_longest_string(large_list_of_strings))

OUTPUT:
python
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 307 μs

But what if we try 300 million elements?

large_list_of_strings = list_of_strings*100000000

%time print(find_longest_string(large_list_of_strings))

OUTPUT:
python
CPU times: user 21.8 s, sys: 0 ns, total: 21.8 s
Wall time: 21.8 s

This is a problem; in most applications, a 20-second response time is not acceptable. One way to improve the computation time is to buy a much better and faster CPU.
Scaling your system by introducing better and faster hardware is called “Vertical Scaling”. This, of course, won’t work forever. Not only is it not so trivial to find a CPU that works 10 times faster, but also, our data will probably get bigger, and we don’t want to upgrade our CPU every time the code gets slower. Our solution is not scalable. Instead, we can do “Horizontal Scaling”: we’ll design our code so it can run in parallel, and it will get much faster when we add more processors and/or CPUs.

To do that, we need to break our code into smaller components and see how we can execute computations in parallel. The intuition is as follows: 1) break our data into many chunks, 2) execute the find_longest_string function for every chunk in parallel and 3) find the longest string among the outputs of all chunks.

Our code is very specific and hard to break apart and modify, so instead of using the find_longest_string function, we’ll develop a more generic framework that will help us perform different computations in parallel on large data.

The two main things we do in our code are computing the len of the string and comparing it to the longest string so far. We’ll break our code into two steps: 1) compute the len of all strings and 2) select the max value.

%%time

# step 1:
list_of_string_lens = [len(s) for s in list_of_strings]
list_of_string_lens = zip(list_of_strings, list_of_string_lens)

# step 2:
max_len = max(list_of_string_lens, key=lambda t: t[1])
print(max_len)

OUTPUT:
('python', 6)
CPU times: user 51.6 s, sys: 804 ms, total: 52.4 s
Wall time: 52.4 s

(I’m calculating the length of the strings and then zipping them together because this is much faster than doing it in one line and duplicating the list of strings.)

In this state, the code actually runs slower than before because instead of performing a single pass over all of our strings, we do it 2 times: first to compute the len and then to find the max value. Why is this good for us?
Because now our “step 2” gets as input not the original list of strings, but some preprocessed data. This allows us to execute step two using the output of another “step two”! We’ll understand that better in a bit, but first, let’s give those steps a name. We’ll call “step one” a “mapper” because it maps some value into some other value, and we’ll call “step two” a reducer because it gets a list of values and produces a single (in most cases) value. Here are two helper functions for the mapper and reducer:

mapper = len

def reducer(p, c):
    if p[1] > c[1]:
        return p
    return c

The mapper is just the len function. It gets a string and returns its length. The reducer gets two tuples as input and returns the one with the biggest length. Let’s rewrite our code using map and reduce; there are even built-in functions for this in python (in python 3, we have to import reduce from functools).

%%time

# step 1:
mapped = map(mapper, list_of_strings)
mapped = zip(list_of_strings, mapped)

# step 2:
reduced = reduce(reducer, mapped)
print(reduced)

OUTPUT:
('python', 6)
CPU times: user 57.9 s, sys: 0 ns, total: 57.9 s
Wall time: 57.9 s

The code does exactly the same thing; it looks a bit fancier, but it is more generic and will help us parallelize it. Let’s look more closely at it:

Step 1 maps our list of strings into a list of tuples using the mapper function (here I use zip again to avoid duplicating the strings).

Step 2 uses the reducer function, goes over the tuples from step one and applies it one by one. The result is a tuple with the maximum length.
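To chunk the input we also need a chunkify helper that breaks a large list into chunks of roughly equal size. Its implementation isn't shown in the original, so here is one minimal sketch:

```python
def chunkify(lst, number_of_chunks):
    # Ceiling-divide to get the chunk size, then slice the list
    chunk_size = len(lst) // number_of_chunks + (len(lst) % number_of_chunks > 0)
    return [lst[i:i + chunk_size] for i in range(0, len(lst), chunk_size)]

print(chunkify(list(range(10)), 3))  # -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Any splitting scheme works, as long as every element lands in exactly one chunk.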
Now let’s break our input into chunks and understand how it works before we do any parallelization (we’ll use a chunkify function that breaks a large list into chunks of equal size):

data_chunks = chunkify(list_of_strings, number_of_chunks=30)

# step 1:
reduced_all = []
for chunk in data_chunks:
    mapped_chunk = map(mapper, chunk)
    mapped_chunk = zip(chunk, mapped_chunk)
    reduced_chunk = reduce(reducer, mapped_chunk)
    reduced_all.append(reduced_chunk)

# step 2:
reduced = reduce(reducer, reduced_all)
print(reduced)

OUTPUT:
('python', 6)

In step one, we go over our chunks and find the longest string in each chunk using a map and reduce. In step two, we take the output of step one, which is a list of reduced values, and perform a final reduce to get the longest string. We use number_of_chunks=30 because this is the number of CPUs I have on my machine.

We are almost ready to run our code in parallel. The only thing that we can do better is to fold the first reduce step into a single mapper. We do that because we want to break our code into two simple steps, and as the first reduce works on a single chunk, we want to parallelize it as well. This is how it looks:

def chunks_mapper(chunk):
    mapped_chunk = map(mapper, chunk)
    mapped_chunk = zip(chunk, mapped_chunk)
    return reduce(reducer, mapped_chunk)

%%time

data_chunks = chunkify(list_of_strings, number_of_chunks=30)

# step 1:
mapped = map(chunks_mapper, data_chunks)

# step 2:
reduced = reduce(reducer, mapped)
print(reduced)

OUTPUT:
('python', 6)
CPU times: user 58.5 s, sys: 968 ms, total: 59.5 s
Wall time: 59.5 s

Now we have a nice-looking two-step code.
If we execute it as is, we’ll get the same computation time, but now we can parallelize step 1 using the multiprocessing module, simply by using the pool.map function instead of the regular map function:

from multiprocessing import Pool

pool = Pool(8)
data_chunks = chunkify(list_of_strings, number_of_chunks=8)

# step 1:
mapped = pool.map(chunks_mapper, data_chunks)

# step 2:
reduced = reduce(reducer, mapped)
print(reduced)

OUTPUT:
('python', 6)
CPU times: user 7.74 s, sys: 1.46 s, total: 9.2 s
Wall time: 10.8 s

Note that we map each chunk with chunks_mapper (mapping with the plain mapper would just compute the length of each chunk). Comparing the wall times, it runs about 5 times faster! It’s not a perfect speedup, but the good news is that we can improve it by increasing the number of processes! We can even do it on more than one machine; if our data is very big, we can use tens or even thousands of machines to make our computation time as short as we want (almost). Our architecture is built using two functions: map and reduce. Each computation unit maps the input data and executes the initial reduce. Finally, some centralized unit executes the final reduce and returns the output. It looks like this:

This architecture has two important advantages:

It is scalable: if we have more data, the only thing we need to do is to add more processing units. No code change needed!

It is generic: this architecture supports a vast variety of tasks; we can replace our map and reduce functions with almost anything and this way compute many different things in a scalable way.

It is important to note that in most cases, our data will be very big and static. It means that breaking it into chunks every time is inefficient and actually redundant.
So in most applications in real life, we’ll store our data in chunks (or shards) from the very beginning. Then, we’ll be able to do different computations using the MapReduce technique. Now let's see a more interesting example: Word Count! Say we have a very big set of news articles and we want to find the top 10 used words not including stop words, how would we do that? First, let's get the data:

from sklearn.datasets import fetch_20newsgroups

news = fetch_20newsgroups(subset='all')
data = news.data * 10

For this post, I made the data x10 larger so we could see the difference. For each text in the dataset, we want to tokenize it, clean it, remove stop words and finally count the words:

# imports needed by the helpers below
import re
from collections import Counter
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

def clean_word(word):
    return re.sub(r'[^\w\s]', '', word).lower()

def word_not_in_stopwords(word):
    return word not in ENGLISH_STOP_WORDS and word and word.isalpha()

def find_top_words(data):
    cnt = Counter()
    for text in data:
        tokens_in_text = text.split()
        tokens_in_text = map(clean_word, tokens_in_text)
        tokens_in_text = filter(word_not_in_stopwords, tokens_in_text)
        cnt.update(tokens_in_text)
    return cnt.most_common(10)

Let’s see how much time it takes without MapReduce:

%time find_top_words(data)

OUTPUT:
[('subject', 122520), ('lines', 118240), ('organization', 111850), ('writes', 78360), ('article', 67540), ('people', 58320), ('dont', 58130), ('like', 57570), ('just', 55790), ('university', 55440)]
CPU times: user 51.7 s, sys: 0 ns, total: 51.7 s
Wall time: 51.7 s

Now, let’s write our mapper, reducer and chunk_mapper:

def mapper(text):
    tokens_in_text = text.split()
    tokens_in_text = map(clean_word, tokens_in_text)
    tokens_in_text = filter(word_not_in_stopwords, tokens_in_text)
    return Counter(tokens_in_text)

def reducer(cnt1, cnt2):
    cnt1.update(cnt2)
    return cnt1

def chunk_mapper(chunk):
    mapped = map(mapper, chunk)
    reduced = reduce(reducer, mapped)
    return reduced

The mapper gets a text, splits it into tokens, cleans them and filters stop words and non-words; finally, it counts the words within this single text document.
The reducer function gets 2 counters and merges them. The chunk_mapper gets a chunk and does a MapReduce on it. Now let’s run it using the framework we built and see:

%%time
data_chunks = chunkify(data, number_of_chunks=36)

# step 1:
mapped = pool.map(chunk_mapper, data_chunks)

# step 2:
reduced = reduce(reducer, mapped)
print(reduced.most_common(10))

OUTPUT:
[('subject', 122520), ('lines', 118240), ('organization', 111850), ('writes', 78360), ('article', 67540), ('people', 58320), ('dont', 58130), ('like', 57570), ('just', 55790), ('university', 55440)]
CPU times: user 1.52 s, sys: 256 ms, total: 1.77 s
Wall time: 4.67 s

This is 10 times faster! Here, we were able to really utilize our computational power because the task is much more complex and requires more computation. To sum up, MapReduce is an exciting and essential technique for large data processing. It can handle a tremendous number of tasks including Counts, Search, Supervised and Unsupervised learning and more. Today there are a lot of implementations and tools that can make our lives much more comfortable, but I think it is very important to understand the basics.
[ { "code": null, "e": 404, "s": 171, "text": "Many times, as Data Scientists, we have to deal with huge amount of data. In those cases, many approaches won’t work or won’t be feasible. A massive amount of data is good, it’s very good, and we want to utilize as much as possible." }, { "code": ...
Introduction to hierarchical time series forecasting — part II | by Eryk Lewinson | Towards Data Science
In the first part of this article, I provided an introduction to hierarchical time series forecasting, described different types of hierarchical structures, and went over the most popular approaches to forecasting such time series. In the second part, I present an example of how to approach such a task in Python using the scikit-hts library. As always, we start with the setup. First, we need to install scikit-hts using pip. Then, we import the following libraries. Note that scikit-hts is imported simply as hts. In this article, we use the Australian tourism data set, which was also used in Forecasting: Principles and Practice (you can read my opinion about the book here). The data set is natively available in the R package called tsibble, but you can also download it from Kaggle or my GitHub. The data set contains the quarterly number of trips to Australia between 1998 and 2016. Furthermore, it comes with a geographical breakdown (per state and region) and the purpose of the trip (business, holiday, visiting, other). So we could create either a hierarchical or grouped time series forecast. For this article, we will focus on the strictly hierarchical one (though scikit-hts can handle the grouped variant as well). The original data set contains 8 states, while the one we will be working on only contains 7. The difference is Tasmania, which was apparently dropped in the Kaggle data set. In the following snippet, we carry out some preprocessing steps. Two noteworthy steps are aggregating over the purpose (as we will be working with a strictly hierarchical structure) and renaming the states of Australia to their abbreviations. In the very end, we also create a concatenation of state and region, to create a unique identifier which we will then use for creating the hierarchical structure. After all the steps, the DataFrame looks as follows: Using this one-liner we can inspect which regions are within each of the states in our data set. 
# inspect all the regions per statedf.groupby("state")["region"].apply(set).to_frame() The next step is to create a DataFrame, which we can feed to the scikit-hts models. The DataFrame should contain each time series as a column and the rows indicate the observations in a given time period (quarter in this case). Using the snippet above, we create three DataFrames, one for each level of the hierarchy: bottom level — this is simply a pivot that transforms the initial DataFrame in the long format to the wide format, middle level — before pivoting the DataFrame, we first sum over the states, total level — the highest level, which is the sum of all states. After preparing the DataFrames, we join them using the index and print the number of unique series in each level of the hierarchy: Number of time series at the bottom level: 77Number of time series at the middle level: 7 Adding the grand total level, we arrive at 85 unique time series. The estimators we are going to use also require a clear definition of the hierarchical structure. The easiest way to prepare it is to build the hierarchical tree as a dictionary. Each node (identified as the key in the dict) has a list of children as the corresponding value in the dict. Then, each of the children can also be a key in the dict and have children of its own. The easiest way to understand it is by looking at an example. In the following snippet, we create the hierarchy of our time series. As you can see, we benefited from creating the unique state-region names, because now we can easily identify the state’s children by checking if the regions’ names start with the state’s abbreviation. Below we show a piece of the hierarchy dictionary. Alternatively, we can use the HierarchyTree class to represent the underlying structure of our series. We must provide the DataFrame and the dictionary as inputs and then we can directly pass the instantiated object to the estimator (without the individual components of the HierarchyTree). 
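The snippet that builds the hierarchy dictionary is not reproduced in this excerpt; a minimal sketch of the idea, using made-up state and region names (the real code iterates over the actual DataFrame columns), could look like this:

```python
# Hypothetical bottom-level column names in the "<state>_<region>" format
bottom_level_columns = ["NSW_Sydney", "NSW_Outback", "WA_Perth"]
states = ["NSW", "WA"]

# Each key maps a node to the list of its children
hierarchy = {"total": states}
for state in states:
    # A state's children are the regions whose name starts with its abbreviation
    hierarchy[state] = [c for c in bottom_level_columns if c.startswith(state)]

print(hierarchy)
# {'total': ['NSW', 'WA'], 'NSW': ['NSW_Sydney', 'NSW_Outback'], 'WA': ['WA_Perth']}
```

This is exactly why the unique state-region identifiers pay off: the parent of each bottom-level series can be read straight off its name.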
An additional benefit is a slightly nicer representation of the tree’s structure, which you can see below. Using the prepared hierarchy, we can quickly visualize the data. We start with the grand total level. In the image, we can observe the entire history of data available for this exercise. hierarchy_df["total"].plot(title="Trips - total level"); As the second step, let’s plot all the state-level series. To do so, we can simply fetch all columns which are the children of the total level in the already defined hierarchy. ax = hierarchy_df[hierarchy['total']].plot(title="Trips - state level")ax.legend(bbox_to_anchor=(1.0, 1.0)); Lastly, let’s plot the lowest level for the Western Australia state. I chose this one as it has the fewest regions, so the plot is still easy to read. ax = hierarchy_df[hierarchy['WA']].plot(title="Trips - regions of Western Australia")ax.legend(bbox_to_anchor=(1.0, 1.0)); Finally, we can focus on the modeling part. In this article, I just want to highlight the functionalities of scikit-hts. That is why I present simplified examples, in which I use the entire data set for training and then forecast 4 steps (a year) into the future. Naturally, in a real-life scenario we would employ an appropriate time-series cross-validation scheme and try to tune the hyperparameters of the model to obtain the best fit. The main class of the library is the HTSRegressor. The usage of the library will be familiar to anyone who has used scikit-learn before (initialize the estimator -> fit to data -> predict). The two arguments we should pay attention to are model and revision_method. model determines the underlying type of model that will be used to forecast the individual time series. Currently, the library supports: auto_arima — from the pmdarima library, SARIMAX — from statsmodels , Holt-Winters exponential smoothing — also from statsmodels , Facebook’s Prophet. 
The revision_method argument is responsible for the approach to hierarchical time series forecasting. We can choose from:

BU — the bottom-up approach,

AHP — average historical proportions (top-down approach),

PHA — proportions of historical averages (top-down approach),

FP — the forecasted proportions (top-down approach),

OLS — the optimal combination using OLS,

WLSS — the optimal combination using structurally weighted OLS,

WLSV — the optimal combination using variance-weighted OLS.

For a more detailed understanding of these methods, I refer you to Forecasting: Principles and Practice. I tried to start off with the bottom-up approach; however, I kept on encountering weird errors. The same syntax worked for other revision methods, so I do assume there is some kind of a bug in the library. But we can still look at the syntax. First, I instantiate the estimator — in this case, I am using auto-ARIMA and the bottom-up revision method. Then, I fit the model using the familiar fit method. As arguments, I pass the DataFrame with all the time series and the dictionary containing the hierarchy. Lastly, I obtain 4 out-of-sample predictions using the predict method. The resulting DataFrame contains n + 4 rows, where n is the number of rows in the original DataFrame. We get the fitted values (n) and the 4 predictions on top of those. After failing to run the previous example, I moved to the top-down approach using average historical proportions. This time, the code ran successfully. Below, you can see the fitted values and predictions for each of the three levels — the grand total level, and the state and region levels. For brevity, I only show a single example of the lower levels. In the Notebook, you can find a convenience function used for generating the plots for any level of the hierarchy. As we can see, the models are quite off at the very beginning of the time series, but they stabilize quite quickly.
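To make the revision methods more tangible, here is a from-scratch illustration (not scikit-hts internals) of how the average historical proportions (AHP) used by the top-down approach can be computed for a toy hierarchy with two bottom-level series; all numbers are invented:

```python
# Toy history: two bottom-level series whose sum is the total series
bottom = {"A": [30.0, 45.0, 60.0], "B": [70.0, 55.0, 40.0]}
total = [a + b for a, b in zip(bottom["A"], bottom["B"])]

# AHP: average the per-period proportions y_jt / y_t for each bottom series
proportions = {
    name: sum(y / t for y, t in zip(series, total)) / len(total)
    for name, series in bottom.items()
}
print(proportions)  # approximately {'A': 0.45, 'B': 0.55}

# A top-down forecast then disaggregates a total-level forecast
total_forecast = 120.0
bottom_forecasts = {name: p * total_forecast for name, p in proportions.items()}
```

Because the proportions sum to one, the disaggregated bottom-level forecasts add back up to the total-level forecast, which is exactly the coherence property that motivates reconciliation.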
Of course, this is quite an unscientific evaluation of the performance, but as mentioned before, we focus purely on obtaining the hierarchical forecasts, and tuning them is beyond the scope of this article. Another observation is that the lower levels are more erratic, that is, they contain more noise. While playing around with the library, I noticed a few quirks:

Unlike the scikit-learn API, we cannot use n_jobs=-1 to indicate using all the available cores. And when using numbers other than 0 (as in the examples provided by the author of the library), I was getting weird results.

In the available examples, the author suggests assigning an output while calling the fit method.

The results are way off when using Prophet as the underlying model. You can see that in the image below.

Lastly, we use the optimal reconciliation using OLS. The code only requires slight adjustments. We present the plots of the very same levels as for the top-down approach to enable easy comparison. The difference is mostly visible at the lowest level, where the fit using the OLS approach is much better, even though the series itself is quite noisy. That should not come as a surprise, as the optimal reconciliation approach is known to provide the most accurate forecasts (for more information about its advantages, please see the previous article). There is also one thing that we should be aware of — the OLS approach created a negative fitted value for the first observation. And as we know, all the values in the series are non-negative. However, given it’s only the fitted value and just for the first observation in the series (when the algorithm has very little information), this should not be an issue. In this article, I showed how to use scikit-hts for hierarchical time series forecasting in Python. The library offers an API similar to scikit-learn and is quite easy to start playing around with. However, the library is in the alpha version.
After spending quite some time trying to make the example work (and not succeeding for all the revision methods), I must say that there is still a long way to go for the Python library to catch up to R’s equivalent — fable. Unfortunately, the documentation is also quite far from complete and not that easy to understand (the same goes for the examples in the repo). Having said that, I would think twice about whether it makes sense to use scikit-hts for a serious project. I would either consider creating a custom solution (especially if using the simpler approaches such as the bottom-up one) or maybe move to R for the particular forecast. You can find the code used for this article on my GitHub. If you managed to overcome the issues I mentioned in the article, I would be very happy to hear how you did it. Also, any constructive feedback is welcome. You can reach out to me on Twitter or in the comments. Liked the article? Become a Medium member to continue learning by reading without limits. If you use this link to become a member, you will support me at no extra cost to you. Thanks in advance and see you around! Hyndman, R.J., & Athanasopoulos, G. (2021) Forecasting: Principles and Practice, 3rd edition, OTexts: Melbourne, Australia. OTexts.com/fpp3.
[ { "code": null, "e": 516, "s": 172, "text": "In the first part of this article, I provided an introduction to hierarchical time series forecasting, described different types of hierarchical structures, and went over the most popular approaches to forecasting such time series. In the second part, I p...
Set variable point size in Matplotlib
To set a variable point size in matplotlib, we can take the following steps −

Initialize the coordinates of the point.

Make a variable to store the point size.

Plot the point using the scatter method, with marker='o', color='red', s=point_size.

To display the figure, use the show() method.

from matplotlib import pyplot as plt

plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True

xy = (3, 4)
point_size = 100

plt.scatter(x=xy[0], y=xy[1], marker='o', c='red', s=point_size)
plt.show()
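The example above uses a single fixed size; since the goal is a variable point size, note that scatter also accepts a sequence for s, giving each point its own size. A small sketch (the size formula is an arbitrary choice for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
from matplotlib import pyplot as plt

x = [1, 2, 3, 4]
y = [1, 4, 9, 16]
sizes = [20 * v for v in y]  # scale each marker by its y-value

plt.scatter(x, y, s=sizes, marker='o', c='red')
plt.savefig("variable_point_sizes.png")
```

Here every marker's area grows with its y-value, which is the usual way to encode a third variable in a scatter plot.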
[ { "code": null, "e": 1141, "s": 1062, "text": "To set the variable point size in matplotlib, we can take the following steps−" }, { "code": null, "e": 1182, "s": 1141, "text": "Initialize the coordinates of the point." }, { "code": null, "e": 1223, "s": 1182, ...
A Step-by-Step Guide to Speech Recognition and Audio Signal Processing in Python | by Rahulraj Singh | Towards Data Science
Speech is the primary form of human communication and is also a vital part of understanding behavior and cognition. Speech Recognition in Artificial Intelligence is a technique deployed on computer programs that enables them to understand spoken words. Like images and videos, sound is an analog signal that humans perceive through sensory organs. For machines to consume this information, it needs to be stored as digital signals and analyzed through software. The conversion from analog to digital consists of the two processes below:

Sampling: It is a procedure used to convert a time-varying (changing with time) signal s(t) to a discrete progression of real numbers x(n). The sampling period (Ts) is the interval between two successive discrete samples. The sampling frequency (fs = 1/Ts) is the inverse of the sampling period. Common sampling frequencies are 8 kHz, 16 kHz, and 44.1 kHz. A 1 Hz sampling rate means one sample per second, and therefore high sampling rates mean better signal quality.

Quantization: This is the process of replacing every real number generated by sampling with an approximation to obtain a finite precision (defined within a range of bits). In the majority of scenarios, 16 bits per sample are used for the representation of a single quantized sample. Therefore, raw audio samples generally have a signal range of -2^15 to 2^15 - 1 (-32768 to 32767), although, during analysis, these values are standardized to the range (-1, 1) for simpler validation and model training. A sample resolution is always measured in bits per sample.
A general Speech Recognition system is designed to perform the tasks mentioned below and can easily be correlated with a standard data analytics architecture:

The capture of speech (words, sentences, phrases) given by a human. You can think of this as the Data Acquisition part of any general Machine Learning workflow.

Transforming audio frequencies to make it machine-ready. This process is the data pre-processing part where we clean features of the data for the machine to process it.

Application of Natural Language Processing (NLP) on the acquired data to understand the content of speech.

Synthesis of the recognized words to help the machine speak a similar dialect.

Let us walk through all these steps and processes one by one, in detail, with the corresponding pseudo-code.
Also, before we start, below is a link for the complete code repository that will be handy to go through alongside this tutorial. github.com File I/O in Python (scipy.io): SciPy has numerous methods of performing file operations in Python. The I/O module that includes methods read(filename[, mmap]) and write(filename, rate, data) is used to read from a .wav file and write a NumPy array in the form of a .wav file. We will be using these methods to read from and write to sound (audio) file formats. The first step in starting a speech recognition algorithm is to create a system that can read files that contain audio (.wav, .mp3, etc.) and understanding the information present in these files. Python has libraries that we can use to read from these files and interpret them for analysis. The purpose of this step is to visualize audio signals as structured data points. Recording: A recording is a file we give to the algorithm as its input. The algorithm then works on this input to analyze its contents and build a speech recognition model. This could be a saved file or a live recording, Python allows for both. Sampling: All signals of a recording are stored in a digitized manner. These digital signatures are hard for software to work upon since machines only understand numeric input. Sampling is the technique used to convert these digital signals into a discrete numeric form. Sampling is done at a certain frequency and it generates numeric signals. Choosing the frequency levels depends on the human perception of sound. For instance, choosing a high frequency implies that the human perception of that audio signal is continuous. 
# Using IO module to read Audio Files
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

freq_sample, sig_audio = wavfile.read("/content/Welcome.wav")

# Output the parameters: Signal Data Type, Sampling Frequency and Duration
print('\nShape of Signal:', sig_audio.shape)
print('Signal Datatype:', sig_audio.dtype)
print('Signal duration:', round(sig_audio.shape[0] / float(freq_sample), 2), 'seconds')

>>> Shape of Signal: (645632,)
>>> Signal Datatype: int16
>>> Signal duration: 40.35 seconds

# Normalize the Signal Value and Plot it on a graph
pow_audio_signal = sig_audio / np.power(2, 15)
pow_audio_signal = pow_audio_signal[:100]
time_axis = 1000 * np.arange(0, len(pow_audio_signal), 1) / float(freq_sample)
plt.plot(time_axis, pow_audio_signal, color='blue')

This is the representation of the sound amplitude of the input file against its duration of play. We have successfully extracted numerical data from an audio (.wav) file. The representation of the audio signal we did in the first section is a time-domain audio signal. It shows the intensity (loudness or amplitude) of the sound wave with respect to time. Portions with amplitude = 0 represent silence. In terms of sound engineering, amplitude = 0 is the sound of static or moving air particles when no other sound is present in the environment.

Frequency-Domain Representation: To better understand an audio signal, it is necessary to look at it through a frequency domain. This representation of an audio signal will give us details about the presence of different frequencies in the signal. The Fourier Transform is a mathematical concept that can be used in the conversion of a continuous signal from its original time-domain state to a frequency-domain state. We will be using Fourier Transforms (FT) in Python to convert audio signals to a frequency-centric representation.

Fourier Transforms in Python: The Fourier Transform is a mathematical concept that can decompose a signal and bring out the individual frequencies.
This is vital for understanding all the frequencies that are combined together to form the sound we hear. The Fourier Transform (FT) gives all the frequencies present in the signal and also shows the magnitude of each frequency. All audio signals are composed of a collection of many single-frequency sound waves that travel together and create a disturbance in the medium of movement, for instance, a room. Capturing sound is essentially the capturing of the amplitudes that these waves generated in space.

NumPy (np.fft.fft): This NumPy function allows us to compute a 1-D discrete Fourier Transform. The function uses the Fast Fourier Transform (FFT) algorithm to convert a given sequence to a Discrete Fourier Transform (DFT). In the file we are processing, we have a sequence of amplitudes drawn from an audio file that were originally sampled from a continuous signal. We will use this function to convert this time-domain signal to a discrete frequency-domain signal.

Let us now walk through some code that applies the Fourier Transform to audio signals, with the aim of representing sound by its intensity in decibels (dB):

# Working on the same input file
# Extracting the length and the half-length of the signal to input to the Fourier transform
sig_length = len(sig_audio)
half_length = np.ceil((sig_length + 1) / 2.0).astype(int)

# We will now be using the Fourier Transform to form the frequency domain of the signal
signal_freq = np.fft.fft(sig_audio)

# Normalize the frequency domain and square it
signal_freq = abs(signal_freq[0:half_length]) / sig_length
signal_freq **= 2
transform_len = len(signal_freq)

# The Fourier transformed signal now needs to be adjusted for both even and odd cases
if sig_length % 2:
    signal_freq[1:transform_len] *= 2
else:
    signal_freq[1:transform_len-1] *= 2

# Extract the signal's strength in decibels (dB)
exp_signal = 10 * np.log10(signal_freq)
x_axis = np.arange(0, half_length, 1) * (freq_sample / sig_length) / 1000.0
plt.plot(x_axis, exp_signal, color='green', linewidth=1)

With
this, we were able to apply Fourier Transforms to the audio input file and subsequently see a frequency-domain (frequency against signal strength) representation of the audio. Once the speech is moved from a time-domain signal to a frequency-domain signal, the next step is to convert this frequency-domain data into a usable feature vector. Before starting this, we have to know about a new concept called MFCC. MFCC (Mel Frequency Cepstral Coefficients) is a technique designed to extract features from an audio signal. It uses the MEL scale to divide the audio signal’s frequency bands and then extracts coefficients from each individual frequency band, thus creating a separation between frequencies. MFCC uses the Discrete Cosine Transform (DCT) to perform this operation. The MEL scale is established on the human perception of sound, i.e., how the human brain processes audio signals and differentiates between the varied frequencies. Let us look at the formation of the MEL scale below.

Human voice sound perception: An adult human has a fundamental hearing capacity that ranges from 85 Hz to 255 Hz, and this can further be distinguished between genders (85 Hz to 180 Hz for males and 165 Hz to 255 Hz for females). Above these fundamental frequencies, there are also harmonics that the human ear processes. Harmonics are multiplications of the fundamental frequency. These are simple multipliers; for instance, a 100 Hz frequency’s second harmonic will be 200 Hz, the third would be 300 Hz, and so on. The rough hearing range for humans is 20 Hz to 20 kHz, and this sound perception is also non-linear. We can distinguish low-frequency sounds better in comparison to high-frequency sounds. For example, we can clearly state the difference between signals of 100 Hz and 200 Hz but cannot distinguish between 15000 Hz and 15100 Hz. To generate tones of varied frequencies we can use the program above or use this tool.

MEL Scale: Stevens, Volkmann, and Newman proposed a pitch scale in 1937 that introduced the MEL scale to the world.
It is a pitch scale (a scale of audio signals with varying pitch levels) judged by humans on the basis of equal distances from one another. It is basically a scale derived from human perception. For example, if you were exposed to two sound sources distant from each other, the brain would perceive a distance between these sources without actually seeing them. This scale is based on how we humans measure audio-signal distances with the sense of hearing. Because our perception is non-linear, the distances on this scale increase with frequency.

MEL-spaced Filterbank: To compute the power (strength) of every frequency band, the first step is to distinguish the different feature bands available (done by MFCC). Once these segregations are made, we use filter banks to create partitions in the frequencies and separate them. Filter banks can be created using any specified frequency for partitions. The spacing between filters within a filter bank grows exponentially as the frequency grows. In the code section, we will see how to separate frequency bands.

MFCC and the creation of filter banks are both motivated by the nature of audio signals and impacted by the way in which humans perceive sound. But this processing also requires a lot of mathematical computation that goes on behind the scenes in its implementation. Python directly gives us methods to build filters and perform the MFCC functionality on sound, but let us glance at the maths behind these functions. Two discrete mathematical models that go into this processing are the Discrete Cosine Transform (DCT), which is used for decorrelation of filter bank coefficients, also termed whitening of sound, and Gaussian Mixture Model–Hidden Markov Models (GMM-HMMs), which are a standard for Automatic Speech Recognition (ASR) algorithms.
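A common closed form for this Hz-to-MEL mapping is m = 2595 · log10(1 + f / 700) (O'Shaughnessy's formula, which is also what libraries such as python_speech_features use internally). A quick sketch shows how the scale compresses high frequencies, matching the 100 Hz vs 15000 Hz example above:

```python
import math

def hz_to_mel(f):
    # Equal MEL steps approximate equal perceived pitch steps
    return 2595 * math.log10(1 + f / 700)

def mel_to_hz(m):
    # Inverse mapping, used when spacing filter bank edges evenly on the MEL scale
    return 700 * (10 ** (m / 2595) - 1)

# A 100 Hz difference is large at low frequencies but tiny at high ones
print(hz_to_mel(200) - hz_to_mel(100))      # roughly 133 MEL
print(hz_to_mel(15100) - hz_to_mel(15000))  # roughly 7 MEL
```

Spacing filter bank edges uniformly in MEL and mapping them back with mel_to_hz is exactly why the filter spacing in Hz grows as the frequency grows.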
In the present day, when computation costs have gone down (thanks to cloud computing), deep learning speech systems that are less susceptible to noise are used over these techniques. DCT is a linear transformation algorithm, and it will therefore rule out a lot of useful signals, given that sound is highly non-linear.

# Installing and importing necessary libraries
pip install python_speech_features

from python_speech_features import mfcc, logfbank

sampling_freq, sig_audio = wavfile.read("Welcome.wav")

# We will now be taking the first 15000 samples from the signal for analysis
sig_audio = sig_audio[:15000]

# Using MFCC to extract features from the signal
mfcc_feat = mfcc(sig_audio, sampling_freq)
print('\nMFCC Parameters\nWindow Count =', mfcc_feat.shape[0])
print('Individual Feature Length =', mfcc_feat.shape[1])

>>> MFCC Parameters
>>> Window Count = 93
>>> Individual Feature Length = 13

mfcc_feat = mfcc_feat.T
plt.matshow(mfcc_feat)

# Generating filter bank features
fb_feat = logfbank(sig_audio, sampling_freq)
print('\nFilter bank\nWindow Count =', fb_feat.shape[0])
print('Individual Feature Length =', fb_feat.shape[1])

>>> Filter bank
>>> Window Count = 93
>>> Individual Feature Length = 26

fb_feat = fb_feat.T
plt.matshow(fb_feat)

If we see the two distributions, it is evident that the low-frequency and high-frequency sound distributions are separated in the second image. MFCC, along with the application of filter banks, is a good algorithm to separate the high- and low-frequency signals. This expedites the analysis process as we can trim sound signals into two or more separate segments and individually analyze them based on their frequencies.

Speech Recognition is the process of understanding the human voice and transcribing it to text in the machine. There are several libraries available to process speech to text, namely: Bing Speech, Google Speech, Houndify, IBM Speech to Text, etc. We will be using the Google Speech library to convert Speech to Text.
More about the Google Speech API can be read on the Google Cloud page and the SpeechRecognition PyPI page. A key feature of the Google Speech API is speech adaptation. This means that the API understands the domain of the speech. For instance, currencies, addresses, and years are all prescribed into the speech-to-text conversion; there are domain-specific classes defined in the algorithm that recognize these occurrences in the input speech. The API works both with on-prem, pre-recorded files and with live recordings from a microphone in the present working environment. We will analyze live speech through microphone input in the next section.

Working with Microphones: The PyAudio open-source package allows us to directly record audio through an attached microphone and analyze it with Python in real time. The installation of PyAudio varies based on the operating system (the installation explanation is mentioned in the code section below).

Microphone Class: An instance of the Microphone() class can be used with the speech recognizer to directly record audio within the working directory. To check whether microphones are available on the system, use the list_microphone_names() static method. To use a specific microphone from the list, pass the device_index argument (implementation shown in the code below).

Capturing Microphone Input: The listen() function is used to capture input given to the microphone. All the sound signals that the selected microphone receives are stored in the variable that calls the listen() function. This method continues recording until a silent (zero amplitude) signal is detected.

Ambient Noise Reduction: Any functional environment is prone to ambient noise that will hamper the recording. Therefore, the adjust_for_ambient_noise() method of the Recognizer class helps automatically cancel out ambient noise from the recording.
Recognition of Sound: The speech recognition workflow below explains the part after the processing of signals, where the API performs tasks like semantic and syntactic corrections, understands the domain of the sound and the spoken language, and finally creates the output by converting speech to text. Below we will also see the implementation of Google's Speech Recognition API using the Microphone class.

# Install the SpeechRecognition and pipwin packages to work with the Recognizer() class
pip install SpeechRecognition
pip install pipwin

# Below are a few links that give details about the PyAudio class we will be
# using to read direct microphone input into the Jupyter Notebook
# https://anaconda.org/anaconda/pyaudio
# https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio
# To install PyAudio, run in the Anaconda terminal: conda install -c anaconda pyaudio
# Pre-requisite for the PyAudio installation - Microsoft Visual C++ 14.0 or greater.
# Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
# To run PyAudio on Colab, please install PyAudio.whl on your local system and give that path to Colab for installation
pip install pyaudio

import speech_recognition as speech_recog

# Creating a recognizer object to store input
rec = speech_recog.Recognizer()

# Importing the microphone class to check availability of microphones
mic_test = speech_recog.Microphone()

# List the available microphones
speech_recog.Microphone.list_microphone_names()

# We will now directly use the microphone module to capture voice input,
# specifying the second microphone, for a duration of 3 seconds.
# The algorithm will also adjust the given input and clear it of any ambient noise
with speech_recog.Microphone(device_index=1) as source:
    rec.adjust_for_ambient_noise(source, duration=3)
    print("Reach the Microphone and say something!")
    audio = rec.listen(source)

>>> Reach the Microphone and say something!

# Use the recognize function to transcribe spoken words to text
try:
    print("I think you said: \n" + rec.recognize_google(audio))
except Exception as e:
    print(e)

>>> I think you said:
>>> hi hello hello hello

With this, we bring our speech recognition and sound analysis article to an end. I would still advise you to go through the links mentioned in the References section and the code repository linked at the top of the story to be able to follow along at every step.

THAT'S A WRAP! Speech recognition is an AI concept that allows a machine to listen to a human voice and transcribe text from it. Although complex in nature, the use cases revolving around speech recognition are plenty. From helping differently-abled users access computing to automated response machines, Automatic Speech Recognition (ASR) algorithms are used by many industries today. This chapter gave a brief introduction to the engineering of sound analysis and showed some basic manipulation techniques for working with audio. Though not detailed, it will help in creating an overall picture of how speech analysis works in the world of AI.

I am Rahul, currently researching Artificial Intelligence and implementing Big Data Analytics on Xbox Games. I work with Microsoft. Apart from professional work, I am also trying to work out a program that deals with understanding how economic situations can be improved across developing nations by using AI. I am at Columbia University in New York at the moment, and you are free to connect with me on LinkedIn or Twitter.
How to find the mean of multiple columns based on a character column in R?
If we have a character column, we are likely to have duplicated values in that column; hence, finding the mean of the numerical columns based on the values in the character column cannot be done directly. For this purpose, we can use the aggregate function, as shown in the examples below.

Consider the below data frame −

set.seed(214)
x1 <- sample(c("A","B","C"),20,replace=TRUE)
x2 <- rnorm(20,2,0.25)
x3 <- rnorm(20,524,32.14)
x4 <- rnorm(20,0.4,0.007)
df1 <- data.frame(x1,x2,x3,x4)
df1

   x1       x2       x3        x4
1   C 1.798039 483.1139 0.4105840
2   A 2.044579 574.3082 0.3970706
3   B 1.563161 517.0169 0.3983528
4   A 1.722058 546.6778 0.4049876
5   C 2.279675 527.9437 0.3959229
6   C 1.721763 549.1909 0.4096398
7   A 1.903552 486.4005 0.4091737
8   C 2.268382 502.0860 0.3998178
9   B 1.637324 540.1949 0.3970408
10  B 2.204911 433.7537 0.4003291
11  A 2.124006 521.2464 0.3935380
12  C 1.830028 525.4325 0.3937420
13  C 1.353421 553.7063 0.3974116
14  B 2.067984 547.9531 0.3958582
15  C 1.849775 537.0034 0.3986610
16  C 1.763957 531.4417 0.4015034
17  A 1.932413 545.7784 0.4215530
18  B 2.147503 491.7046 0.3918060
19  C 2.222639 563.4852 0.3902110
20  A 1.979280 483.2847 0.3820964

Finding the column means based on values in x1 −

aggregate(.~x1,data=df1,mean)

  x1       x2       x3        x4
1  A 1.950981 526.2827 0.4014032
2  B 1.924176 506.1246 0.3966774
3  C 1.898631 530.3782 0.3997215

y1 <- sample(c("Male","Female"),20,replace=TRUE)
y2 <- rpois(20,2)
y3 <- rpois(20,5)
y4 <- rpois(20,3)
y5 <- rpois(20,2)
y6 <- rpois(20,1)
y7 <- rpois(20,4)
df2 <- data.frame(y1,y2,y3,y4,y5,y6,y7)
df2

       y1 y2 y3 y4 y5 y6 y7
1    Male  1  6  1  1  1  4
2    Male  4  2  1  4  0  5
3    Male  2  6  5  3  1  2
4    Male  4  4  0  3  2  0
5    Male  1  2  6  2  2  2
6  Female  5  4  2  4  0  5
7    Male  2  4  3  1  2  3
8    Male  3  8  1  2  1  5
9    Male  0  3  2  1  2  5
10 Female  3  5  1  2  1  3
11 Female  2  5  3  2  1  2
12 Female  3  8  1  0  1  6
13 Female  1  2  5  2  1  3
14   Male  2  7  1  1  0  3
15   Male  3  5  2  4  2  4
16 Female  5  6  2  4  1  0
17 Female  2  3  1  0  1  3
18   Male  2  6  3  0  1  6
19 Female  2  6  5  1  0  0
20 Female  2  6  4  5  0  3

Finding the column means based on values in y1 −

aggregate(.~y1,data=df2,mean)

      y1       y2       y3       y4       y5        y6       y7
1 Female 2.777778 5.000000 2.666667 2.222222 0.6666667 2.777778
2   Male 2.181818 4.818182 2.272727 2.000000 1.2727273 3.545455
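For readers coming from Python, a hedged sketch of the analogous computation with pandas (the column names mirror the R example above; the data here is made up for illustration):

```python
import pandas as pd

# Hypothetical data frame with a character/grouping column x1
# and numeric columns x2, x3 (illustrative values only)
df = pd.DataFrame({
    "x1": ["A", "B", "A", "C", "B", "C"],
    "x2": [1.8, 1.6, 2.0, 2.3, 2.2, 1.7],
    "x3": [483.1, 517.0, 574.3, 527.9, 433.8, 549.2],
})

# groupby(...).mean() plays the role of aggregate(.~x1, data=df, mean)
print(df.groupby("x1").mean())
```

As with aggregate, every numeric column is averaged within each level of the grouping column.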
Kotlin - Operators
An operator is a symbol that tells the compiler to perform specific mathematical or logical manipulations. Kotlin is rich in built-in operators and provides the following types of operators:

Arithmetic Operators
Relational Operators
Assignment Operators
Unary Operators
Logical Operators
Bitwise Operations

Now let's look into these Kotlin operators one by one.

Kotlin arithmetic operators are used to perform basic mathematical operations such as addition, subtraction, multiplication and division. Following example shows different calculations using Kotlin Arithmetic Operators:

fun main(args: Array<String>) {
   val x: Int = 40
   val y: Int = 20
   println("x + y = " + (x + y))
   println("x - y = " + (x - y))
   println("x / y = " + (x / y))
   println("x * y = " + (x * y))
   println("x % y = " + (x % y))
}

When you run the above Kotlin program, it will generate the following output:

x + y = 60
x - y = 20
x / y = 2
x * y = 800
x % y = 0

Kotlin relational (comparison) operators are used to compare two values and return a Boolean value: either true or false. Following example shows different calculations using Kotlin Relational Operators:

fun main(args: Array<String>) {
   val x: Int = 40
   val y: Int = 20
   println("x > y = " + (x > y))
   println("x < y = " + (x < y))
   println("x >= y = " + (x >= y))
   println("x <= y = " + (x <= y))
   println("x == y = " + (x == y))
   println("x != y = " + (x != y))
}

When you run the above Kotlin program, it will generate the following output:

x > y = true
x < y = false
x >= y = true
x <= y = false
x == y = false
x != y = true

Kotlin assignment operators are used to assign values to variables.
Following is an example where we use the assignment operator = to assign values to two variables:

fun main(args: Array<String>) {
   val x: Int = 40
   val y: Int = 20
   println("x = " + x)
   println("y = " + y)
}

When you run the above Kotlin program, it will generate the following output:

x = 40
y = 20

Following is one more example where we use the assignment operator += to add a value to a variable and assign the result back to the same variable:

fun main(args: Array<String>) {
   var x: Int = 40
   x += 10
   println("x = " + x)
}

When you run the above Kotlin program, it will generate the following output:

x = 50

Following is a list of all assignment operators: =, +=, -=, *=, /= and %=. Following example shows different calculations using Kotlin Assignment Operators:

fun main(args: Array<String>) {
   var x: Int = 40
   x += 5
   println("x += 5 = " + x)
   x = 40
   x -= 5
   println("x -= 5 = " + x)
   x = 40
   x *= 5
   println("x *= 5 = " + x)
   x = 40
   x /= 5
   println("x /= 5 = " + x)
   x = 43
   x %= 5
   println("x %= 5 = " + x)
}

When you run the above Kotlin program, it will generate the following output:

x += 5 = 45
x -= 5 = 35
x *= 5 = 200
x /= 5 = 8
x %= 5 = 3

The unary operators require only one operand; they perform various operations such as incrementing/decrementing a value by one, negating an expression, or inverting the value of a boolean. Following is the list of Kotlin Unary Operators: +, -, ++, -- and !. Following example shows different calculations using Kotlin Unary Operators:

fun main(args: Array<String>) {
   var x: Int = 40
   var b: Boolean = true
   println("+x = " + (+x))
   println("-x = " + (-x))
   println("++x = " + (++x))
   println("--x = " + (--x))
   println("!b = " + (!b))
}

When you run the above Kotlin program, it will generate the following output:

+x = 40
-x = -40
++x = 41
--x = 40
!b = false

Here the increment (++) and decrement (--) operators can be used as a prefix, as in ++x or --x, as well as a suffix, as in x++ or x--.
The only difference between the two forms is that when used as a prefix the operator is applied before the expression is evaluated, whereas when used as a suffix it is applied after the expression is evaluated.

Kotlin logical operators are used to determine the logic between two variables or values. Following is the list of Kotlin Logical Operators: &&, || and !. Following example shows different calculations using Kotlin Logical Operators:

fun main(args: Array<String>) {
   var x: Boolean = true
   var y: Boolean = false
   println("x && y = " + (x && y))
   println("x || y = " + (x || y))
   println("!y = " + (!y))
}

When you run the above Kotlin program, it will generate the following output:

x && y = false
x || y = true
!y = true

Kotlin does not have any bitwise operators, but it provides a list of helper functions to perform bitwise operations. Following is the list of Kotlin Bitwise Functions: shl, shr, ushr, and, or, xor and inv. Following example shows different calculations using Kotlin bitwise functions:

fun main(args: Array<String>) {
   var x: Int = 60   // 60 = 0011 1100
   var y: Int = 13   // 13 = 0000 1101
   var z: Int

   z = x.shl(2)   // 240 = 1111 0000
   println("x.shl(2) = " + z)

   z = x.shr(2)   // 15 = 0000 1111
   println("x.shr(2) = " + z)

   z = x.and(y)   // 12 = 0000 1100
   println("x.and(y) = " + z)

   z = x.or(y)    // 61 = 0011 1101
   println("x.or(y) = " + z)

   z = x.xor(y)   // 49 = 0011 0001
   println("x.xor(y) = " + z)

   z = x.inv()    // -61 = 1100 0011
   println("x.inv() = " + z)
}

When you run the above Kotlin program, it will generate the following output:

x.shl(2) = 240
x.shr(2) = 15
x.and(y) = 12
x.or(y) = 61
x.xor(y) = 49
x.inv() = -61

Q 1 - What does the Kotlin operator % do?

A - It is used to divide a number by another number.
B - Kotlin does not support any such operator
C - This is the bitwise XOR operator
D - This is called the modulus operator and returns the division remainder.

The given operator % is called the arithmetic modulus operator and returns the remainder after dividing one number by another.
Q 2 - Kotlin supports a good number of bitwise operators.

A - Correct
B - Incorrect

This statement is incorrect because Kotlin does not provide any bitwise operators; rather, it provides a set of functions to perform bitwise operations.

Q 3 - What does the Kotlin operator ++ do?

A - It is used to add two values
B - There is no such operator as ++ in Kotlin
C - This is called the unary increment operator
D - None of the above

The given operator ++ is called the unary increment operator in Kotlin.

Q 4 - Which of the following functions will do a bitwise right shift operation?

A - x.ushr(y)
B - x.shr(y)
C - x.shl(y)
D - None of the above

The function x.shr(y) is used to shift the bits of a given operand x towards the right by y places.

Q 5 - Which of the following is a logical inverse operator?

A - inv()
B - !
C - &&
D - ||

Yes, the operator ! is used to inverse the value of an operand.

Q 6 - What will be the output of the following Kotlin code:

fun main(args: Array<String>) {
   var x: Int = 40
   x += 10
   println(x)
}

A - 40
B - Syntax Error
C - 50
D - None of the above

Here the given operator += is the addition assignment operator, which means it is equivalent to x = x + 10 and yields a value of 50.

Q 7 - What will be the output of the following Kotlin code:

fun main(args: Array<String>) {
   var x: Int = 60
   println(x.shr(2))
}

A - 15
B - Syntax Error
C - 50
D - None of the above

The correct value is 15 because x.shr(2) shifts the value of x by two places. If x is 60 = 0011 1100, then after the right shift it gives us 15, which is 0000 1111.
Tree-Boosted Mixed Effects Models | by Fabio Sigrist | Towards Data Science
This article shows how tree-boosting (sometimes also referred to as "gradient tree-boosting") can be combined with mixed effects models using the GPBoost algorithm. Background is provided on both the methodology and on how to apply the GPBoost library using Python. We show how (i) models are trained, (ii) parameters are tuned, (iii) models are interpreted, and (iv) predictions are made. Further, we compare several alternative approaches.

Tree-boosting, with its well-known implementations such as XGBoost, LightGBM, and CatBoost, is widely used in applied data science. Besides state-of-the-art predictive accuracy, tree-boosting has the following advantages:

Automatic modeling of non-linearities, discontinuities, and complex high-order interactions

Robustness to outliers in, and multicollinearity among, predictor variables

Scale-invariance to monotone transformations of the predictor variables

Automatic handling of missing values in predictor variables

Mixed effects models are a modeling approach for clustered, grouped, longitudinal, or panel data. Among other things, they have the advantage that they allow for more efficient learning of the chosen model for the regression function (e.g. a linear model or a tree ensemble). As outlined in Sigrist (2020), combined gradient tree-boosting and mixed effects models often perform better than (i) plain vanilla gradient boosting, (ii) standard linear mixed effects models, and (iii) alternative approaches for combining machine learning or statistical models with mixed effects models.

Grouped data (aka clustered data, longitudinal data, panel data) occurs naturally in many applications when there are multiple measurements for different units of a variable of interest. Examples include:

One wants to investigate the impact of some factors (e.g. learning technique, nutrition, sleep, etc.) on students' test scores, and every student does several tests. In this case, the units, i.e.
the grouping variable, are the students and the variable of interest is the test score.

A company gathers transaction data about its customers. For every customer, there are several transactions. The units are then the customers, and the variable of interest can be any attribute of the transactions, such as prices.

Basically, such grouped data can be modeled using four different approaches:

1. Ignore the grouping structure. This is rarely a good idea since important information is neglected.

2. Model each group (i.e. each student or each customer) separately. This is also rarely a good idea, as the number of measurements per group is often small relative to the number of different groups.

3. Include the grouping variable (e.g. student or customer ID) in your model of choice and treat it as a categorical variable. While this is a viable approach, it has the following disadvantages. Often, the number of measurements per group (e.g. number of tests per student, number of transactions per customer) is relatively small and the number of different groups is large (e.g. number of students, customers, etc.). In this case, the model needs to learn many parameters (one for every group) based on relatively little data, which can make learning inefficient. Further, for trees, high-cardinality categorical variables can be problematic.

4. Model the grouping variable using so-called random effects in a mixed effects model. This is often a sensible compromise between approaches 2 and 3 above. In particular, as illustrated below and in Sigrist (2020), this is beneficial compared to the other approaches in the case of tree-boosting.

For the GPBoost algorithm, it is assumed that the response variable y is the sum of a potentially non-linear mean function F(X) and so-called random effects Zb:

y = F(X) + Zb + e

where

y is the response variable (aka label)

X contains the predictor variables (aka features) and F() is a potentially non-linear function. In linear mixed effects models, this is simply a linear function. In the GPBoost algorithm, this is an ensemble of trees.

Zb are the random effects, which are assumed to follow a multivariate normal distribution

e is an error term

The model is trained using the GPBoost algorithm, where training means learning the (co-)variance parameters (aka hyper-parameters) of the random effects and the regression function F(X) using a tree ensemble. The random effects Zb can be estimated (or predicted, as it is often called) after the model has been learned. In brief, the GPBoost algorithm is a boosting algorithm that iteratively learns the (co-)variance parameters and adds a tree to the ensemble of trees using a gradient and/or a Newton boosting step.
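To make the alternation idea concrete, here is a deliberately simplified toy sketch. This is NOT the actual GPBoost algorithm (it uses plain group-mean intercepts instead of learned (co-)variance parameters), just an illustration of alternating between updating group effects and adding a tree to the ensemble:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, m = 2000, 50
group = rng.integers(0, m, size=n)           # grouping variable
X = rng.uniform(size=(n, 2))
F = np.sin(3 * X[:, 0]) + X[:, 1] ** 2       # non-linear fixed-effects function
b = rng.normal(size=m)                       # true random intercepts
y = F + b[group] + rng.normal(scale=0.3, size=n)

pred_F = np.zeros(n)   # ensemble prediction of F(X)
pred_b = np.zeros(m)   # estimated group intercepts
for _ in range(100):
    # (a) re-estimate the group intercepts from the current residuals
    resid = y - pred_F
    for g in range(m):
        pred_b[g] = resid[group == g].mean()
    # (b) boosting step: fit a shallow tree to the residuals
    #     after removing the estimated group effect
    tree = DecisionTreeRegressor(max_depth=3).fit(X, y - pred_b[group] - pred_F)
    pred_F += 0.1 * tree.predict(X)

# the estimated intercepts track the true ones (up to a constant offset)
print(np.corrcoef(pred_b, b)[0, 1])
```

The real algorithm replaces step (a) with proper (co-)variance parameter estimation and uses gradient/Newton boosting steps in (b); see Sigrist (2020).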
The main difference to existing boosting algorithms is that, first, it accounts for dependence among the data due to clustering and, second, it learns the (co-)variance parameters of the random effects. See Sigrist (2020) for more details on the methodology. In the GPBoost library, (co-)variance parameters can be learned using (accelerated) gradient descent or Fisher scoring, and trees are learned using the LightGBM library. In particular, this means that the full functionality of LightGBM is available.

In the following, we show how combined tree-boosting and mixed effects models can be applied using the GPBoost library from Python. The complete code used in this article can be found here as a Python script. Note that there is also an equivalent R package; more information on this can be found here.

pip install gpboost -U

We use simulated data here. We adopt a well-known non-linear function F(X). For simplicity, we use one grouping variable, but one could equally well use several random effects, including hierarchically nested ones, crossed ones, or random slopes. The number of samples is 5,000 and the number of different groups or clusters is 500. We also generate test data for evaluating predictive accuracy. The test data includes both known, observed groups and novel, unobserved groups.
import gpboost as gpb
import numpy as np
import sklearn.datasets as datasets
import time
import pandas as pd

# Simulate data
ntrain = 5000  # number of samples for training
n = 2 * ntrain  # combined number of training and test data
m = 500  # number of categories / levels for grouping variable
sigma2_1 = 1  # random effect variance
sigma2 = 1 ** 2  # error variance

# Simulate non-linear mean function
np.random.seed(1)
X, F = datasets.make_friedman3(n_samples=n)
X = pd.DataFrame(X, columns=['variable_1','variable_2','variable_3','variable_4'])
F = F * 10**0.5  # with this choice, the fixed-effects regression function has the same variance as the random effects

# Simulate random effects
group_train = np.arange(ntrain)  # grouping variable
for i in range(m):
    group_train[int(i * ntrain / m):int((i + 1) * ntrain / m)] = i
group_test = np.arange(ntrain)  # grouping variable for test data. Some existing and some new groups
m_test = 2 * m
for i in range(m_test):
    group_test[int(i * ntrain / m_test):int((i + 1) * ntrain / m_test)] = i
group = np.concatenate((group_train, group_test))
b = np.sqrt(sigma2_1) * np.random.normal(size=m_test)  # simulate random effects
Zb = b[group]

# Put everything together
xi = np.sqrt(sigma2) * np.random.normal(size=n)  # simulate error term
y = F + Zb + xi  # observed data

# Split train and test data
y_train = y[0:ntrain]
y_test = y[ntrain:n]
X_train = X.iloc[0:ntrain,]
X_test = X.iloc[ntrain:n,]

The following code shows how one trains a model and makes predictions. As can be seen below, the learned variance parameters are close to the true ones. Note that when making predictions, one can make separate predictions for the mean function F(X) and the random effects Zb.
# Define and train GPModel
gp_model = gpb.GPModel(group_data=group_train)

# Create dataset for gpb.train function
data_train = gpb.Dataset(X_train, y_train)

# Specify tree-boosting parameters as a dict
params = {'objective': 'regression_l2', 'learning_rate': 0.1,
          'max_depth': 6, 'min_data_in_leaf': 5, 'verbose': 0}

# Train model
bst = gpb.train(params=params, train_set=data_train, gp_model=gp_model,
                num_boost_round=32)
gp_model.summary()  # estimated covariance parameters
# Covariance parameters in the following order:
# ['Error_term', 'Group_1']
# [0.9183072 1.013057 ]

# Make predictions
pred = bst.predict(data=X_test, group_data_pred=group_test)
y_pred = pred['fixed_effect'] + pred['random_effect_mean']  # sum predictions of fixed effect and random effect
np.sqrt(np.mean((y_test - y_pred) ** 2))  # root mean square error (RMSE) on test data. Approx. = 1.25

A careful choice of the tuning parameters is important for all boosting algorithms. Arguably the most important tuning parameter is the number of boosting iterations: too large a number often results in over-fitting in regression problems, and too small a value in under-fitting. In the following, we show how the number of boosting iterations can be chosen using cross-validation. Other important tuning parameters include the learning rate, the tree depth, and the minimal number of samples per leaf. For simplicity, we do not tune them here but use some default values.
# Parameter tuning using cross-validation (only the number of boosting iterations)
gp_model = gpb.GPModel(group_data=group_train)
cvbst = gpb.cv(params=params, train_set=data_train, gp_model=gp_model,
               use_gp_model_for_validation=False, num_boost_round=100,
               early_stopping_rounds=5, nfold=4, verbose_eval=True,
               show_stdv=False, seed=1)
best_iter = np.argmin(cvbst['l2-mean'])
print("Best number of iterations: " + str(best_iter))
# Best number of iterations: 32

Update: as of version 0.4.3, GPBoost has a function (grid_search_tune_parameters) which can be used for parameter tuning using a random or deterministic grid search. See this Python parameter tuning demo for more details.

Feature importance plots and partial dependence plots are tools for interpreting machine learning models. They can be used as follows.

# Plotting feature importances
gpb.plot_importance(bst)

Univariate partial dependence plots:

from pdpbox import pdp

# Single-variable plots (takes a few seconds to compute)
pdp_dist = pdp.pdp_isolate(model=bst, dataset=X_train,
                           model_features=X_train.columns,
                           feature='variable_2', num_grid_points=50,
                           predict_kwds={"ignore_gp_model": True})
pdp.pdp_plot(pdp_dist, 'variable_2', plot_lines=True)

Multivariate partial dependence plots:

# Two-variable interaction plot
inter_rf = pdp.pdp_interact(model=bst, dataset=X_train,
                            model_features=X_train.columns,
                            features=['variable_1','variable_2'],
                            predict_kwds={"ignore_gp_model": True})
pdp.pdp_interact_plot(inter_rf, ['variable_1','variable_2'],
                      x_quantile=True, plot_type='contour', plot_pdp=True)
# ignore any error message

SHAP values and dependence plots are another important tool for model interpretation. They can be created as follows. Note: you need shap version >= 0.36.0 for this.
import shap
shap_values = shap.TreeExplainer(bst).shap_values(X_test)
shap.summary_plot(shap_values, X_test)
shap.dependence_plot("variable_2", shap_values, X_test)

In the following, we compare the GPBoost algorithm to several existing approaches using the simulated data from above. We consider the following alternative approaches:

A linear mixed effects model ('Linear_ME'), where F(X) is a linear function

Standard gradient tree-boosting ignoring the grouping structure ('Boosting_Ign')

Standard gradient tree-boosting including the grouping variable as a categorical variable ('Boosting_Cat')

Mixed-effects random forest ('MERF') (see here and Hajjem et al. (2014) for more information)

We compare the algorithms in terms of predictive accuracy, measured using the root mean square error (RMSE), and computational time (clock time in seconds). The results are shown in the table below. The code for producing these results can be found below in the appendix. We see that GPBoost and MERF perform clearly best (and almost equally well) in terms of predictive accuracy. Further, the GPBoost algorithm is approximately 1000 times faster than the MERF algorithm. The linear mixed effects model ('Linear_ME') and tree-boosting ignoring the grouping variable ('Boosting_Ign') have clearly lower predictive accuracy. Tree-boosting with the grouping variable included as a categorical variable also shows lower predictive accuracy than GPBoost or MERF. Note that, for simplicity, we do only one simulation run (see Sigrist (2020) for a much more detailed comparison). Except for MERF, all computations are done using the GPBoost library version 0.2.1 compiled with MSVC version 19.24.28315.0. Further, we use the MERF Python package version 0.3.

GPBoost allows for combining mixed effects models and tree-boosting. If you apply linear mixed effects models, you should investigate whether the linearity assumption is indeed appropriate. The GPBoost model allows for relaxing this assumption.
It may help you to find non-linearities and interactions and achieve higher predictive accuracy. If you are a frequent user of boosting algorithms such as XGBoost and LightGBM and you have categorical variables with potentially high cardinality, GPBoost (which extends LightGBM) can make learning more efficient and result in higher predictive accuracy. To the best of our knowledge, the GPBoost library is currently unmatched in terms of computational speed and predictive accuracy. Additional advantages are that GPBoost supports a range of model interpretation tools (variable importance values, partial dependence plots, SHAP values, etc.) and that it also supports other types of random effects, such as Gaussian processes, in addition to grouped or clustered random effects.

Hopefully, you have found this article useful. More information on GPBoost can be found in the companion article Sigrist (2020) and on GitHub.

Hajjem, A., Bellavance, F., & Larocque, D. (2014). Mixed-effects random forest for clustered data. Journal of Statistical Computation and Simulation, 84(6), 1313–1328.

Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., ... & Liu, T. Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems (pp. 3146–3154).

Pinheiro, J., & Bates, D. (2006). Mixed-effects models in S and S-PLUS. Springer Science & Business Media.

Sigrist, F. (2020). Gaussian Process Boosting. arXiv preprint arXiv:2004.02653.

results = pd.DataFrame(columns=["RMSE","Time"],
                       index=["GPBoost","Linear_ME","Boosting_Ign","Boosting_Cat","MERF"])

# 1.
GPBoostgp_model = gpb.GPModel(group_data=group_train)start_time = time.time() # measure timebst = gpb.train(params=params, train_set=data_train, gp_model=gp_model, num_boost_round=best_iter)results.loc["GPBoost","Time"] = time.time() - start_timepred = bst.predict(data=X_test, group_data_pred=group_test)y_pred = pred['fixed_effect'] + pred['random_effect_mean'] # sum predictions of fixed effect and random effectresults.loc["GPBoost","RMSE"] = np.sqrt(np.mean((y_test - y_pred) ** 2))# 2. Linear mixed effects model ('Linear_ME')gp_model = gpb.GPModel(group_data=group_train)X_train_linear = np.column_stack((np.ones(ntrain),X_train))X_test_linear = np.column_stack((np.ones(ntrain),X_test))start_time = time.time() # measure timegp_model.fit(y=y_train, X=X_train_linear) # add a column of 1's for interceptresults.loc["Linear_ME","Time"] = time.time() - start_timey_pred = gp_model.predict(group_data_pred=group_test, X_pred=X_test_linear)F_pred = X_test_linear.dot(gp_model.get_coef())results.loc["Linear_ME","RMSE"] = np.sqrt(np.mean((y_test - y_pred['mu']) ** 2))# 3. Gradient tree-boosting ignoring the grouping variable ('Boosting_Ign')cvbst = gpb.cv(params=params, train_set=data_train, num_boost_round=100, early_stopping_rounds=5, nfold=4, verbose_eval=True, show_stdv=False, seed=1)best_iter = np.argmin(cvbst['l2-mean'])print("Best number of iterations: " + str(best_iter))# Best number of iterations: 19start_time = time.time() # measure timebst = gpb.train(params=params, train_set=data_train, num_boost_round=best_iter)results.loc["Boosting_Ign","Time"] = time.time() - start_timey_pred = bst.predict(data=X_test)results.loc["Boosting_Ign","RMSE"] = np.sqrt(np.mean((y_test - y_pred) ** 2))# 4. 
Gradient tree-boosting including the grouping variable as a categorical variable ('Boosting_Cat')X_train_cat = np.column_stack((group_train,X_train))X_test_cat = np.column_stack((group_test,X_test))data_train_cat = gpb.Dataset(X_train_cat, y_train, categorical_feature=[0])cvbst = gpb.cv(params=params, train_set=data_train_cat, num_boost_round=1000, early_stopping_rounds=5, nfold=4, verbose_eval=True, show_stdv=False, seed=1)best_iter = np.argmin(cvbst['l2-mean'])print("Best number of iterations: " + str(best_iter))# Best number of iterations: 49start_time = time.time() # measure timebst = gpb.train(params=params, train_set=data_train_cat, num_boost_round=best_iter)results.loc["Boosting_Cat","Time"] = time.time() - start_timey_pred = bst.predict(data=X_test_cat)results.loc["Boosting_Cat","RMSE"] = np.sqrt(np.mean((y_test - y_pred) ** 2))# 5. Mixed-effects random forest ('MERF')from merf import MERFrf_params={'max_depth': 6, 'n_estimators': 300}merf_model = MERF(max_iterations=100, rf_params=rf_params)print("Warning: the following takes a lot of time")start_time = time.time() # measure timemerf_model.fit(pd.DataFrame(X_train), np.ones(shape=(ntrain,1)), pd.Series(group_train), y_train)results.loc["MERF","Time"] = time.time() - start_timey_pred = merf_model.predict(pd.DataFrame(X_test), np.ones(shape=(ntrain,1)), pd.Series(group_test))results.loc["MERF","RMSE"] = np.sqrt(np.mean((y_test - y_pred) ** 2))print(results.apply(pd.to_numeric).round(3))
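As a side note, the RMSE expression repeated for every method above can be factored into a small helper. This is a generic sketch of my own, not part of the GPBoost library:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error, as computed for every method in the comparison
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) = 1.1547...
```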
How can I create a stored procedure to delete values from a MySQL table?
We can create a stored procedure with an IN parameter to delete values from a MySQL table. To illustrate, we take a table named 'student_info' having the following data:

mysql> Select * from student_info;
+------+---------+------------+------------+
| id   | Name    | Address    | Subject    |
+------+---------+------------+------------+
| 100  | Aarav   | Delhi      | Computers  |
| 101  | YashPal | Amritsar   | History    |
| 105  | Gaurav  | Jaipur     | Literature |
| 110  | Rahul   | Chandigarh | History    |
+------+---------+------------+------------+
4 rows in set (0.00 sec)

Now, by creating the procedure named 'Delete_studentinfo' as follows, we can delete values from the 'student_info' table:

mysql> DELIMITER //
mysql> Create Procedure Delete_studentinfo ( IN p_id INT)
    -> BEGIN
    ->   DELETE FROM student_info
    ->   WHERE id = p_id;
    -> END //
Query OK, 0 rows affected (0.11 sec)

mysql> DELIMITER ;

Now, invoke the procedure with the value we want to delete from the table as follows:

mysql> CALL Delete_studentinfo(100);
Query OK, 1 row affected (1.09 sec)

mysql> Select * from student_info;
+------+---------+------------+------------+
| id   | Name    | Address    | Subject    |
+------+---------+------------+------------+
| 101  | YashPal | Amritsar   | History    |
| 105  | Gaurav  | Jaipur     | Literature |
| 110  | Rahul   | Chandigarh | History    |
| 125  | Raman   | Bangalore  | Computers  |
+------+---------+------------+------------+
4 rows in set (0.01 sec)

The above result set shows that the record having id = 100 was deleted from the table.
Spectral Graph Convolution Explained and Implemented Step By Step | by Boris Knyazev | Towards Data Science
First, let's recall what a graph is. A graph G is a set of nodes (vertices) connected by directed/undirected edges. In this post, I will assume an undirected graph G with N nodes. Each node in this graph has a C-dimensional feature vector, and the features of all nodes are represented as an N×C matrix X(l). The edges of a graph are represented as an N×N matrix A, where the entry A_ij indicates whether node i is connected (adjacent) to node j. This matrix is called an adjacency matrix.

Spectral analysis of graphs (see lecture notes here and earlier work here) has been useful for graph clustering, community discovery and other, mainly unsupervised, learning tasks. In this post, I basically describe the work of Bruna et al. (ICLR 2014), who combined spectral analysis with convolutional neural networks (ConvNets), giving rise to spectral graph convolutional networks that can be trained in a supervised way, for example for the graph classification task. Despite the fact that spectral graph convolution is currently less commonly used than spatial graph convolution methods, knowing how spectral convolution works is still helpful for understanding and avoiding potential problems with other methods. Plus, in the conclusion, I refer to some recent exciting works making spectral graph convolution more competitive.

While "spectral" may sound complicated, for our purpose it's enough to understand that it simply means decomposing a signal/audio/image/graph into a combination (usually, a sum) of simple elements (wavelets, graphlets). To have some nice properties of such a decomposition, these simple elements are usually orthogonal, i.e. mutually linearly independent, and therefore form a basis. When we talk about "spectral" in signal/image processing, we imply the Fourier Transform, which offers us a particular basis (the DFT matrix, e.g. scipy.linalg.dft in Python) of elementary sine and cosine waves of different frequencies, so that we can represent our signal/image as a sum of these waves.
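To make the "basis" idea concrete, here is a tiny NumPy sketch (my own toy example, not from the original post) that builds the unitary DFT matrix by hand and checks that a signal is perfectly recovered from its Fourier coefficients:

```python
import numpy as np

# Rows of the DFT matrix are the elementary sinusoids mentioned above;
# any length-N signal is a weighted sum of them.
N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT matrix

x = np.random.default_rng(0).standard_normal(N)
x_hat = F @ x                        # coefficients in the Fourier basis
x_rec = (np.conj(F.T) @ x_hat).real  # sum the basis elements back up
print(np.allclose(x_rec, x))  # True: the basis is orthonormal
```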
But when we talk about graphs and graph neural networks (GNNs), "spectral" implies the eigen-decomposition of the graph Laplacian L. You can think of the graph Laplacian L as an adjacency matrix A normalized in a special way, whereas eigen-decomposition is a way to find those elementary orthogonal components that make up our graph. Intuitively, the graph Laplacian shows in what directions and how smoothly the "energy" will diffuse over a graph if we put some "potential" in node i. A typical use-case of the Laplacian in mathematics and physics is to solve how a signal (wave) propagates in a dynamic system. Diffusion is smooth when there are no sudden changes of values between neighbors, as in the animation below.

In the rest of the post, I'm going to assume the "symmetric normalized Laplacian", which is often used in graph neural networks because it is normalized so that, when you stack many graph layers, the node features propagate in a smoother way without explosion or vanishing of feature values or gradients. It is computed based only on an adjacency matrix A of a graph, which can be done in a few lines of Python code as follows:

# Computing the graph Laplacian
# A is an adjacency matrix of some graph G
import numpy as np

N = A.shape[0]  # number of nodes in a graph
D = np.sum(A, 0)  # node degrees
D_hat = np.diag((D + 1e-5) ** (-0.5))  # normalized node degrees
L = np.identity(N) - np.dot(D_hat, A).dot(D_hat)  # Laplacian

Here, we assume that A is symmetric, i.e. A = Aᵀ, and our graph is undirected; otherwise node degrees are not well-defined and some assumptions must be made to compute the Laplacian. An interesting property of an adjacency matrix A is that Aⁿ (the matrix product taken n times) exposes n-hop connections between nodes (see here for more details). Let's generate three graphs and visualize their adjacency matrices and Laplacians as well as their powers. For example, imagine that the star graph above in the middle is made from metal, so that it transfers heat well.
Then, if we start to heat up node 0 (dark blue), this heat will propagate to other nodes in a way defined by the Laplacian. In the particular case of a star graph with all edges equal, heat will spread uniformly to all other nodes, which is not true for other graphs due to their structure. In the context of computer vision and machine learning, the graph Laplacian defines how node features will be updated if we stack several graph neural layers.

Similarly to the first part of my tutorial, to understand spectral graph convolution from the computer vision perspective, I'm going to use the MNIST dataset, which defines images on a 28×28 regular grid graph. In signal processing, it can be shown that convolution in the spatial domain is multiplication in the frequency domain (a.k.a. the convolution theorem). The same theorem can be applied to graphs. In signal processing, to transform a signal to the frequency domain, we use the Discrete Fourier Transform, which is basically matrix multiplication of a signal with a special matrix (basis, the DFT matrix). This basis assumes a regular grid, so we cannot use it for irregular graphs, which is a typical case. Instead, we use a more general basis, which is the eigenvectors V of the graph Laplacian L, found by eigen-decomposition: L = VΛVᵀ, where Λ are the eigenvalues of L.

PCA vs eigen-decomposition of the graph Laplacian. To compute spectral graph convolution in practice, it's enough to use a few eigenvectors corresponding to the smallest eigenvalues. At first glance, this seems to be the opposite strategy compared to Principal Component Analysis (PCA), frequently used in computer vision, where we are more interested in the eigenvectors corresponding to the largest eigenvalues. However, this difference is simply due to the negation used to compute the Laplacian above; therefore, eigenvalues computed using PCA are inversely proportional to the eigenvalues of the graph Laplacian (see this paper for a formal analysis).
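To see the eigen-decomposition L = VΛVᵀ at work on a concrete graph, here is a small sketch (my own toy example, not from the post) using the normalized Laplacian of a 4-node path graph:

```python
import numpy as np

# Normalized Laplacian of the path graph 0-1-2-3
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
D_hat = np.diag(A.sum(0) ** -0.5)
L = np.eye(4) - D_hat @ A @ D_hat

Lam, V = np.linalg.eigh(L)  # eigenvalues in ascending order, V orthonormal
print(np.allclose(V @ np.diag(Lam) @ V.T, L))  # True: L = V Λ Vᵀ
print(np.isclose(Lam[0], 0))  # True: for a connected graph the smallest eigenvalue is 0
```

Note that np.linalg.eigh already returns eigenvalues in ascending order, so the "few eigenvectors corresponding to the smallest eigenvalues" are simply the first columns of V.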
Note also that PCA is applied to the covariance matrix of a dataset for the purpose of extracting the largest factors of variation, i.e. the dimensions along which the data vary the most, as in Eigenfaces. This variation is measured by eigenvalues, so that the smallest eigenvalues essentially correspond to noisy or "spurious" features, which are assumed to be useless or even harmful in practice. Eigen-decomposition of the graph Laplacian is applied to a single graph for the purpose of extracting subgraphs or clusters (communities) of nodes, and the eigenvalues tell us a lot about graph connectivity.

I will use the eigenvectors corresponding to the 20 smallest eigenvalues in the examples below, assuming that 20 is much smaller than the number of nodes N (N=784 in the case of MNIST). To find the eigenvalues and eigenvectors below on the left, I use a 28×28 regular graph, whereas on the right I follow the experiment of Bruna et al. and construct an irregular graph by sampling 400 random locations on a 28×28 regular grid (see their paper for more details about this experiment). So, given the graph Laplacian L, node features X and filters W_spectral, in Python spectral convolution on graphs looks very simple:

# Spectral convolution on graphs
# X is an N×1 matrix of 1-dimensional node features
# L is an N×N graph Laplacian computed above
# W_spectral are N×F weights (filters) that we want to train
from scipy.sparse.linalg import eigsh  # assumes L to be symmetric

Λ, V = eigsh(L, k=20, which='SM')  # eigen-decomposition (i.e. find Λ and V)
X_hat = V.T.dot(X)  # 20×1 node features in the "spectral" domain
W_hat = V.T.dot(W_spectral)  # 20×F filters in the "spectral" domain
Y = V.dot(X_hat * W_hat)  # N×F result of convolution

Formally:

where we assume that our node features X(l) are 1-dimensional, e.g. MNIST pixels, but it can be extended to a C-dimensional case: we will just need to repeat this convolution for each channel and then sum over C as in signal/image convolution.
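The C-dimensional extension mentioned in the last sentence can be sketched as follows. The shapes and the random orthonormal "eigenvector" basis below are illustrative stand-ins, not the actual Laplacian eigenvectors:

```python
import numpy as np

N, C, F_out, K = 100, 3, 8, 20  # nodes, input channels, filters, eigenvectors
rng = np.random.default_rng(0)
# Stand-in for the K Laplacian eigenvectors: any N×K orthonormal matrix
V = np.linalg.qr(rng.standard_normal((N, N)))[0][:, :K]
X = rng.standard_normal((N, C))                  # C-dimensional node features
W_spectral = rng.standard_normal((K, C, F_out))  # spectral filters, per channel

X_hat = V.T @ X  # K×C features in the "spectral" domain
# Per-channel multiplication in the spectral domain, then sum over channels C
Y = V @ np.einsum('kc,kcf->kf', X_hat, W_spectral)
print(Y.shape)  # (100, 8): N×F result of the multi-channel convolution
```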
Formula (3) is essentially the same as spectral convolution of signals on regular grids using the Fourier Transform, and so it creates a few problems for machine learning: the dimensionality of the trainable weights (filters) W_spectral depends on the number of nodes N in a graph, and W_spectral also depends on the graph structure encoded in the eigenvectors V. These issues prevent scaling to datasets with large graphs of variable structure. Further efforts, summarized below, were focused on resolving these and other issues.

Bruna et al. were among the first to apply spectral graph analysis to learn convolutional filters for the graph classification problem. The filters learned using formula (3) above act on the entire graph, i.e. they have global support. In the computer vision context, this would be the same as training convolutional filters of size 28×28 pixels on MNIST, i.e. filters that have the same size as the input (note that we would still slide the filter, but over a zero-padded image). While for MNIST we can actually train such filters, common wisdom suggests avoiding that, as it makes training much harder due to the potential explosion of the number of parameters and the difficulty of training large filters that can capture useful features shared across different images. I actually successfully trained such a model using PyTorch and this code from my GitHub. You should run it using mnist_fc.py --model conv. After training for 100 epochs, the filters look like mixtures of digits:

To reiterate, we generally want to make filters smaller and more local (which is not exactly the same, as I'll note below). To enforce that implicitly, they proposed to smooth the filters in the spectral domain, which makes them more local in the spatial domain according to spectral theory.
The idea is that you can represent our filter W_spectral from formula (3) as a sum of K predefined functions, such as splines, and instead of learning N values of W, we learn the K coefficients α of this sum:

While the dimensionality of fk does depend on the number of nodes N, these functions are fixed, so we don't learn them. The only thing we learn are the coefficients α, and so W_spectral is no longer dependent on N. Neat, right? To make our approximation in formula (4) reasonable, we want K<<N, to reduce the number of trainable parameters from N to K and, more importantly, to make it independent of N, so that our GNN can digest graphs of any size.

We can use different bases to perform this "expansion", depending on which properties we need. For instance, cubic splines, shown above, are known to be very smooth functions (i.e. you cannot see the knots, where the pieces of the piecewise spline polynomial meet). The Chebyshev polynomial, which I discuss in another post, has the minimum l∞ distance to the approximated function. The Fourier basis is the one that preserves most of the signal energy after transformation. Most bases are orthogonal, because it would be redundant to have terms that can be expressed by each other.

Note that the filters W_spectral are still as large as the input, but their effective width is small. In the case of MNIST images, we would have 28×28 filters in which only a small fraction of values have an absolute magnitude larger than 0, and all of them should be located close to each other, i.e. the filter would be local and effectively small, something like the one below (second from the left):

To summarize, smoothing in the spectral domain allowed Bruna et al. to learn more local filters. The model with such filters can achieve similar results as the model without smoothing (i.e.
using our formula (3)), but with many fewer trainable parameters, because the filter size is independent of the input graph size, which is important for scaling the model to datasets with larger graphs. However, the learned filters W_spectral still depend on the eigenvectors V, which makes it challenging to apply this model to datasets with variable graph structures.

Despite the drawbacks of the original spectral graph convolution method, it has been developed a lot and remains quite a competitive method in some applications, because spectral filters can better capture global complex patterns in graphs, which local methods like GCN (Kipf & Welling, ICLR 2017) cannot unless stacked in a deep network. For example, two ICLR 2019 papers, by Liao et al. on "LanczosNet" and Xu et al. on "Graph Wavelet Neural Network", address some shortcomings of spectral graph convolution and show great results in predicting molecule properties and node classification. Another interesting work, by Levie et al., 2018, on "CayleyNets", showed strong performance in node classification, matrix completion (recommender systems) and community detection. So, depending on your application and infrastructure, spectral graph convolution can be a good choice.

In another part of my Tutorial on Graph Neural Networks for Computer Vision and Beyond, I explain Chebyshev spectral graph convolution, introduced by Defferrard et al. in 2016, which is still a very strong baseline with some nice properties and is easy to implement, as I demonstrate using PyTorch.

Acknowledgement: A large portion of this tutorial was prepared during my internship at SRI International under the supervision of Mohamed Amer (homepage) and my PhD advisor Graham Taylor (homepage). I also thank Carolyn Augusta for useful feedback.

Find me on GitHub, LinkedIn and Twitter. My homepage.
If you want to cite this blog post in your paper, please use:

@misc{knyazev2019tutorial,
  title={Tutorial on Graph Neural Networks for Computer Vision and Beyond},
  author={Knyazev, Boris and Taylor, Graham W and Amer, Mohamed R},
  year={2019}
}
Getters and Setters in Scala - GeeksforGeeks
20 Jun, 2019

Getters and setters in Scala are methods that help us get the value of variables and instantiate variables of a class/trait, respectively. Scala generates a class for the JVM with a private variable field and getter and setter methods. In Scala, the getters and setters are not named getXxx and setXxx, but they are used for the same purpose. At any time, we can redefine the getter and setter methods ourselves.

Setters are a technique through which we set the value of the variables of a class. Setting a variable of a class is simple and can be done in two ways:

First, if the members of a class are accessible from anywhere, i.e. no access modifier is specified.

Example:

// A Scala program to illustrate
// setting the variables of a class
// Name of the class is Student
class Student
{
    // Class variables
    var student_name: String = " "
    var student_age: Int = 0
    var student_rollno = 0
}

// Creating object
object Main
{
    // Main method
    def main(args: Array[String])
    {
        // Class object
        var obj = new Student()
        obj.student_name = "Yash"
        obj.student_age = 22
        obj.student_rollno = 59
        println("Student Name: " + obj.student_name)
        println("Student Age: " + obj.student_age)
        println("Student Rollno: " + obj.student_rollno)
    }
}

Output:

Student Name: Yash
Student Age: 22
Student Rollno: 59

For security reasons this is not recommended, as accessing the members of a class directly is not a good method to initiate and change a value: it allows anyone to identify and modify the variable.

Second, if the members of a class are defined as private. Initiation of the variables is done by passing the value to a public method of that class using an object of the class.

Example:

// A Scala program to illustrate
// setting the private variable of a class
// Name of the class is Student
class Student
{
    // Class variables
    var student_name: String = " "
    var student_age: Int = 0
    private var student_rollno = 0

    // Class method
    def set_roll_no(x: Int)
    {
        student_rollno = x
    }
}

// Creating object
object GFG
{
    // Main method
    def main(args: Array[String])
    {
        // Class object
        var obj = new Student()
        obj.student_name = "Yash"
        obj.student_age = 22

        // error: variable student_rollno in class
        // Student cannot be accessed in Student
        // obj.student_rollno = 59
        obj.set_roll_no(59)

        // Directly getting the value of the variable
        println("Student Name: " + obj.student_name)

        // Directly getting the value of the variable
        println("Student Age: " + obj.student_age)

        // student_rollno is private, so it cannot be printed directly here;
        // a getter method is needed, as shown in the Getters example below
    }
}

Getters are a technique through which we get the value of the variables of a class. There are two ways of getting a value:

Getting the value of a global variable directly, in which we specify the name of the variable with the object.

Getting the value of a variable through method calling using the object. This technique is good when we don't have access to the class variables but public methods are available.

Example:

// A Scala program to illustrate
// getting the value of members of a class
// Name of the class is Student
class Student
{
    // Class variables
    var student_name: String = " "
    var student_age: Int = 0
    private var student_rollno = 0

    // Class method (setter)
    def set_rollno(x: Int)
    {
        student_rollno = x
    }

    // Getter
    def get_rollno(): Int =
    {
        return student_rollno
    }
}

// Creating object
object Main
{
    // Main method
    def main(args: Array[String])
    {
        // Class object
        var obj = new Student()
        obj.student_name = "Yash"
        obj.student_age = 22
        obj.set_rollno(59)

        // Directly getting the value of the variable
        println("Student Name: " + obj.student_name)

        // Directly getting the value of the variable
        println("Student Age: " + obj.student_age)

        // Through method calling
        println("Student Rollno: " + obj.get_rollno)
    }
}

Output:

Student Name: Yash
Student Age: 22
Student Rollno: 59
An Approach Towards Convolutional Recurrent Neural Networks | by Chandra Churh Chatterjee | Towards Data Science
The convolutional recurrent neural network (CRNN) is the combination of two of the most prominent neural network types: a CNN (convolutional neural network) followed by an RNN (recurrent neural network). The proposed network is similar to the CRNN but generates better or optimal results, especially for audio signal processing.

The network starts with a traditional 2D convolutional layer followed by batch normalization, ELU activation, max-pooling and dropout with a dropout rate of 50%. Three such convolution layers are placed in a sequential manner with their corresponding activations. The convolutional layers are followed by the permute and reshape layers, which are necessary in a CRNN because the shape of the feature vector differs between the CNN and the RNN: the convolutional layers operate on 3-dimensional feature vectors, whereas the recurrent layers operate on 2-dimensional ones. The permute layer changes the order of the axes of the feature vector, and the reshape layer then converts it into a 2-dimensional feature vector that the RNN can consume.

The proposed network consists of two bidirectional GRU layers with n GRU cells in each layer, where n depends on the number of classes in the classification performed with the network. Bidirectional GRU (gated recurrent unit) layers are used instead of unidirectional RNN layers because the bidirectional layers take into account not only the past timestamps but also the future ones. Incorporating representations from both directions allows the features along the time dimension to be captured in an optimal manner. Finally, the output of the bidirectional layers is fed to time-distributed dense layers followed by the fully connected output layer.
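The permute-and-reshape step described above can be illustrated in isolation with NumPy. The shapes below are my own illustrative assumptions (a channels-first feature map of 64 filters, 256 time steps and 2 frequency bins for one sample), not values from the paper:

```python
import numpy as np

x = np.zeros((64, 256, 2))      # (filters, time, freq) CNN output, one sample
x = np.transpose(x, (1, 0, 2))  # like Keras Permute((2, 1, 3)): time axis first
x = x.reshape(256, -1)          # flatten filters x freq for every time step
print(x.shape)  # (256, 128): a 2-D sequence the bidirectional GRUs can consume
```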
The implementation of the proposed network is as follows:

def get_model(data_in, data_out, _cnn_nb_filt, _cnn_pool_size, _rnn_nb, _fc_nb):
    # dropout_rate is assumed to be defined globally (50% in the text)
    spec_start = Input(shape=(data_in.shape[-3], data_in.shape[-2], data_in.shape[-1]))
    spec_x = spec_start
    for _i, _cnt in enumerate(_cnn_pool_size):
        spec_x = Conv2D(filters=_cnn_nb_filt, kernel_size=(2, 2), padding='same')(spec_x)
        spec_x = BatchNormalization(axis=1)(spec_x)
        spec_x = Activation('relu')(spec_x)
        spec_x = MaxPooling2D(pool_size=(1, _cnn_pool_size[_i]))(spec_x)
        spec_x = Dropout(dropout_rate)(spec_x)
    spec_x = Permute((2, 1, 3))(spec_x)
    spec_x = Reshape((data_in.shape[-2], -1))(spec_x)

    for _r in _rnn_nb:
        spec_x = Bidirectional(
            GRU(_r, activation='tanh', dropout=dropout_rate,
                recurrent_dropout=dropout_rate, return_sequences=True),
            merge_mode='concat')(spec_x)

    for _f in _fc_nb:
        spec_x = TimeDistributed(Dense(_f))(spec_x)
        spec_x = Dropout(dropout_rate)(spec_x)

    spec_x = TimeDistributed(Dense(data_out.shape[-1]))(spec_x)
    out = Activation('sigmoid', name='strong_out')(spec_x)

    _model = Model(inputs=spec_start, outputs=out)
    _model.compile(optimizer='Adam', loss='binary_crossentropy', metrics=['accuracy'])
    _model.summary()
    return _model

The model summary can be displayed as follows:

# Load model
model = get_model(X, Y, cnn_nb_filt, cnn_pool_size, rnn_nb, fc_nb)
Array of Strings in C++ (5 Different Ways to Create) - GeeksforGeeks
20 Oct, 2021

In C and C++, a string is a 1-dimensional array of characters, and an array of strings in C is a 2-dimensional array of characters. There are many ways to declare them, and a selection of useful ways is given here.

We create an array of string literals by creating an array of pointers. This is supported by both C and C++.

// C++ program to demonstrate array of strings using
// an array of pointers
#include <iostream>

int main()
{
    // Initialize array of pointers
    const char *colour[4] = { "Blue", "Red", "Orange", "Yellow" };

    // Printing strings stored in the array
    for (int i = 0; i < 4; i++)
        std::cout << colour[i] << "\n";

    return 0;
}

Output:

Blue
Red
Orange
Yellow

The number of strings is fixed, but needn't be. The 4 may be omitted, and the compiler will compute the correct size. These strings are constants and their contents cannot be changed. Because string literals (literally, the quoted strings) exist in a read-only area of memory, we must specify "const" here to prevent unwanted accesses that may crash the program.

The next method is useful when the length of all strings is known and a particular memory footprint is desired. Space for the strings will be allocated in a single block. This is supported in both C and C++.

// C++ program to demonstrate array of strings using
// a 2D character array
#include <iostream>

int main()
{
    // Initialize 2D array
    char colour[4][10] = { "Blue", "Red", "Orange", "Yellow" };

    // Printing strings stored in the 2D array
    for (int i = 0; i < 4; i++)
        std::cout << colour[i] << "\n";

    return 0;
}

Output:

Blue
Red
Orange
Yellow

Both the number of strings and the size of each string are fixed. The 4, again, may be left out, and the appropriate size will be computed by the compiler. The second dimension, however, must be given (in this case, 10), so that the compiler can choose an appropriate memory layout. Each string can be modified but will take up the full space given by the second dimension.
Each will be laid out next to each other in memory, and can’t change size. Sometimes, control over the memory footprint is desirable, and this will allocate a region of memory with a fixed, regular layout.

The STL string class may be used to create an array of mutable strings. In this method, the size of the string is not fixed, and the strings can be changed. This is supported only in C++, as C does not have classes.

CPP
// C++ program to demonstrate array of strings
// using the STL string class
#include <iostream>
#include <string>

int main()
{
    // Initialize string array
    std::string colour[4] = { "Blue", "Red", "Orange", "Yellow" };

    // Print strings
    for (int i = 0; i < 4; i++)
        std::cout << colour[i] << "\n";
}

Output:
Blue
Red
Orange
Yellow

The array is of fixed size, but needn’t be. Again, the 4 here may be omitted, and the compiler will determine the appropriate size of the array. The strings are also mutable, allowing them to be changed.

The STL container vector can be used to dynamically allocate an array that can vary in size. This is only usable in C++, as C does not have classes. Note that the initializer-list syntax here requires a compiler that supports the 2011 C++ standard, and though it is quite likely your compiler does, it is something to be aware of.

CPP
// C++ program to demonstrate a vector of strings
#include <iostream>
#include <vector>
#include <string>

int main()
{
    // Declaring a vector of string type;
    // values can be added here using initializer-list syntax
    std::vector<std::string> colour {"Blue", "Red", "Orange"};

    // Strings can be added at any time with push_back
    colour.push_back("Yellow");

    // Print strings stored in the vector
    for (int i = 0; i < colour.size(); i++)
        std::cout << colour[i] << "\n";
}

Output:
Blue
Red
Orange
Yellow

Vectors are dynamic arrays, and allow you to add and remove items at any time. Any type or class may be used in vectors, but a given vector can only hold one type.

The STL container array can be used to allocate a fixed-size array.
It may be used very similarly to vector, but the size is always fixed. This is supported only in C++.

C++
#include <iostream>
#include <array>
#include <string>

int main()
{
    // Initialize array
    std::array<std::string, 4> colour { "Blue", "Red", "Orange", "Yellow" };

    // Printing strings stored in the array
    for (int i = 0; i < 4; i++)
        std::cout << colour[i] << "\n";

    return 0;
}

Output:
Blue
Red
Orange
Yellow

These are by no means the only ways to make a collection of strings. C++ offers several container classes, each of which has various tradeoffs and features, and all of them exist to fill requirements that you will have in your projects. Explore and have fun!

Conclusion: Out of all the methods, vector seems to be the best way for creating an array of strings in C++.

This article is contributed by Kartik Ahuja.
[ { "code": null, "e": 26647, "s": 26619, "text": "\n20 Oct, 2021" }, { "code": null, "e": 26862, "s": 26647, "text": "In C and C++, a string is a 1-dimensional array of characters and an array of strings in C is a 2-dimensional array of characters. There are many ways to declare t...
C++ String Library - find_first_not_of
It searches the string for the first character that does not match any of the characters specified in its arguments.

Following is the declaration for std::string::find_first_not_of.

size_t find_first_not_of (const string& str, size_t pos = 0) const noexcept;

str − A string object containing the characters to be searched against.

pos − Position of the first character in the string to be considered in the search.

Return value − The position of the first character that does not match, or string::npos if no such character is found.

If an exception is thrown, there are no changes in the string.

Below is an example for std::string::find_first_not_of.

#include <iostream>
#include <string>
#include <cstddef>

int main ()
{
   std::string str ("It looks for non-alphabetic characters...");

   std::size_t found = str.find_first_not_of("abcdefghijklmnopqrstuvwxyz ");
   if (found != std::string::npos)
   {
      std::cout << "The first non-alphabetic character is " << str[found];
      std::cout << " at position " << found << '\n';
   }

   return 0;
}

The sample output should be like this −

The first non-alphabetic character is I at position 0
[ { "code": null, "e": 2720, "s": 2603, "text": "It searches the string for the first character that does not match any of the characters specified in its arguments." }, { "code": null, "e": 2785, "s": 2720, "text": "Following is the declaration for std::string::find_first_not_of."...
Python Project To Improve Your Productivity For The New Year | by Bharath K | Towards Data Science
The new year is approaching very soon, and I think most people are glad that 2020 is almost over! Most people are excited and are welcoming the next year of 2021. Hence, this upcoming New Year presents all of us an opportunity to build an incredible Python project that will be useful for the rest of the year.

With every New Year, there come resolutions that everyone plans to follow. I do follow mine for a few days and then get back to my usual routine. All my plans go to the bin! And get shifted over to the next year instead. However, it would be awesome to develop a Python project that would consistently remind me of my plans and keep telling me to follow the ideas and schedule that I have planned for the rest of the year! This is exactly what we will be trying to achieve in our article today! We will be developing a simple but useful reminder application that will consistently remind us about the aims and goals that we have planned for our New Year.

The main objective of the reminder notification application we will develop today is to consistently remind us about the various things we need to accomplish during the course of the entire day. This project is similar to a To-Do List, where we have a set of goals to achieve. The reminder app will constantly notify us about the various tasks we need to do and actions to complete throughout the course of the day.

For the purpose of this article, I will be alerting myself to take a break after every hour. Since I spend most of my time on the PC, it would be a great idea for me to take a break every now and then. However, your message and alert can be absolutely anything you desire! You can have a list of things you need to complete within the day, week, or month, and the reminder application will constantly remind you about the same.
As discussed in the problem statement, our goal is to ensure that the specific action is reminded to us in due time so that it can be completed within the exact duration. For the purpose of completing the task exactly on time, we also need to import the time module, which will serve this purpose. The time module can be imported with the following command —

import time

time is a built-in module in Python and can be easily accessed with the above command. It is used to handle any kind of task that is related to time. The time module has a lot of functions for performing specific tasks. We will only dwell deeper into the time.sleep() function for the requirements of this article.

Suspend execution of the calling thread for the given number of seconds. The argument may be a floating point number to indicate a more precise sleep time. The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal’s catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.

Changed in version 3.5: The function now sleeps at least secs even if the sleep is interrupted by a signal, except if the signal handler raises an exception.

To learn more about the time module, check out the official documentation here.

With the time module already discussed, the only other module we will require for the completion of this task is the plyer module, which can be installed with the following simple command —

pip install plyer

The pip install plyer command should install the second and final module required for performing this task. Plyer is a Python library for accessing features of your hardware or other platforms. Once the installation of the plyer module is complete, you can make use of the notification class in this library for the creation of our reminder application.
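A minimal sketch of the sleep behaviour described above — time.sleep() suspends the calling thread for at least the requested number of seconds (the short 0.2-second interval and the helper name timed_sleep are mine, chosen just for demonstration):

```python
import time

def timed_sleep(seconds):
    """Sleep for `seconds` and return the actual elapsed wall-clock time."""
    start = time.monotonic()
    time.sleep(seconds)
    return time.monotonic() - start

elapsed = timed_sleep(0.2)
# Since Python 3.5, sleep() waits at least the requested duration
print(f"Requested 0.2s, slept for {elapsed:.3f}s")
```

The same call with 3600 seconds is what spaces out the hourly reminders later in the article.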
This notification option allows us to run the script with the task scheduler. The final imports of all the libraries required for this task should look similar to the code block below:

import time
from plyer import notification

On the Windows platform, we have an option available to us called the Task Scheduler. Task Scheduler is a component of Microsoft Windows that provides the ability to schedule the launch of programs or scripts at pre-defined times or after specified time intervals, like job scheduling.

Extra Information: The Task Scheduler service works by managing Tasks; Task refers to the action (or actions) taken in response to trigger(s). A task is defined by associating a set of actions, which can include launching an application or taking some custom-defined action, with a set of triggers, which can either be time-based or event-based. In addition, a task can also contain metadata that defines how the actions will be executed, such as the security context the task will run in. Tasks are serialized to .job files and are stored in the special folder titled Task Folder, organized in subdirectories. Programmatically, the task folder is accessed using the ITaskFolder interface or the TaskFolder scripting object and individual tasks using the IRegisteredTask interface or RegisteredTask object.

Building the simple code block: We already discussed the notification class. This class offers us the notify function. The information and syntax of the notify method are as discussed below.
notify(title='', message='', app_name='', app_icon='', timeout=10, ticker='', toast=False)

title (str) — Title of the notification
message (str) — Message of the notification
app_name (str) — Name of the app launching this notification
app_icon (str) — Icon to be displayed along with the message
timeout (int) — Time to display the message for; defaults to 10
ticker (str) — Text to display on the status bar as the notification arrives
toast (bool) — Simple Android message instead of a full notification

Now that we have a brief idea of the function in the notification class, let us understand how we can implement this. The below code block is the main procedure to define your tasks at hand.

if __name__ == "__main__":
    while True:
        notification.notify(
            title = "ALERT!!!",
            message = "Take a break! It has been an hour!",
            timeout = 10
        )
        time.sleep(3600)

Now, we will proceed to understand each line in the above code block step by step.

The if __name__ == "__main__": construct is an extremely important concept in Python. Before executing code, the Python interpreter reads the source file and defines a few special/global variables. If the Python interpreter is running that module (the source file) as the main program, it sets the special __name__ variable to the value "__main__". If this file is being imported from another module, __name__ will be set to the module’s name instead. Refer to the following website for further information on this concept in more detail.

We will then use a while loop to run the code as long as it is True. This basically means that the code block will run for the time interval that we have set for running the program. Here, I have set the sleep interval to 3600 seconds, i.e. a total of one hour. After each hour, I will receive my reminder notification. Finally, we have the notification class with the notify method defined.
We will set all the required parameters, as suggested in the code block. You are free to change the contents of the parameters according to your schedule or the tasks you have to perform. You can also determine how long you want the notification to remain on the display screen with the timeout parameter.

You can have multiple scripts running in your desktop background that will consistently remind and alert you about the various tasks you need to perform throughout the course of the day, week, month, or year.

To run the script, just run the following command in the command prompt, as shown in the code line below. Make sure to replace main.py with the name of your script file.

pythonw .\main.py

You can also simply use the python main.py command to run the script. After an hour, you should receive the alert. You will continuously receive these alerts every hour, or until the time stipulation that you have set, because of the while loop that is defined in the code. In the next section, you can analyze and copy and paste the entire code with the GitHub Gist provided.

Final Code Block:

In this article, we discussed how to build a reminder app that will help to timely notify us about the various commitments we have, as well as act as a successful means of reminding us about these tasks that we need to complete. The project discussed in this article is quite simple but useful nonetheless. It should be a great way for beginners who are interested in the Python programming language to immerse themselves in this field and build tons of cool projects related to the same.

With the year 2020 finally coming to an end, I wish you all good luck for the epic future year of 2021. I hope all of you will have a wonderful upcoming year and will build lots of incredible and cool Python and Data Science projects! Let us all make some resolutions to have a productive and innovative New Year!
If you guys liked this article, please feel free to check out some of my other stories that you might also enjoy reading! Thank you all for sticking on till the end. I hope you guys enjoyed reading this article. I wish you all have a wonderful day ahead!
[ { "code": null, "e": 358, "s": 47, "text": "The new year is approaching very soon, and I think most people are glad that 2020 is almost over! Most people are excited and are welcoming the next year of 2021. Hence, this upcoming New Year presents all of us an opportunity to build an incredible Python...
Shortest possible combination of two strings - GeeksforGeeks
14 Mar, 2022

Compute the shortest string for a combination of two given strings such that the new string consists of both the strings as its subsequences.

Examples:

Input : a = "pear"
        b = "peach"
Output : pearch
pearch is the shortest string such that both
pear and peach are its subsequences.

Input : a = "geek"
        b = "code"
Output : gecodek

We have discussed a solution to find the length of the shortest supersequence in the post Shortest Common Supersequence. In this post, printing of the supersequence is discussed. The solution is based on the recursive approach below, discussed in the above post as an alternate method.

Let a[0..m-1] and b[0..n-1] be two strings, and let m and n be their respective lengths.

if (m == 0) return n;
if (n == 0) return m;

// If last characters are same, then add 1 to
// result and recur for a[] and b[] with last
// characters removed
if (a[m-1] == b[n-1])
    return 1 + SCS(a, b, m-1, n-1);

// Else find shortest of following two
// a) Remove last character from a and recur
// b) Remove last character from b and recur
else
    return 1 + min( SCS(a, b, m-1, n),
                    SCS(a, b, m, n-1) );

We build a DP array to store lengths. After building the DP array, we traverse from the bottom-right-most position. The approach of printing is similar to printing the LCS.
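The recurrence above translates directly into code. A memoized sketch of the length computation (the function name scs_length is mine, not from the article):

```python
from functools import lru_cache

def scs_length(a, b):
    """Length of the shortest common supersequence of a and b,
    following the recurrence above."""
    @lru_cache(maxsize=None)
    def scs(m, n):
        if m == 0:
            return n
        if n == 0:
            return m
        # If last characters match, add 1 and recur on both prefixes
        if a[m - 1] == b[n - 1]:
            return 1 + scs(m - 1, n - 1)
        # Otherwise drop the last character of either string
        return 1 + min(scs(m - 1, n), scs(m, n - 1))
    return scs(len(a), len(b))

print(scs_length("pear", "peach"))  # 6, the length of "pearch"
print(scs_length("geek", "code"))   # 7, the length of "gecodek"
```

The programs below compute the same table bottom-up and then backtrack through it to print the supersequence itself.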
C++ Java Python3 C# Javascript /* C++ program to print supersequence of two strings */#include<bits/stdc++.h>using namespace std; /* Prints super sequence of a[0..m-1] and b[0..n-1] */void printSuperSeq(string &a, string &b){ int m = a.length(), n = b.length(); int dp[m+1][n+1]; // Fill table in bottom up manner for (int i = 0; i <= m; i++) { for (int j = 0; j <= n; j++) { // Below steps follow above recurrence if (!i) dp[i][j] = j; else if (!j) dp[i][j] = i; else if (a[i-1] == b[j-1]) dp[i][j] = 1 + dp[i-1][j-1]; else dp[i][j] = 1 + min(dp[i-1][j], dp[i][j-1]); } } // Following code is used to print supersequence int index = dp[m][n]; // Create a string of size index+1 to store the result string res(index+1, '\0'); // Start from the right-most-bottom-most corner and // one by one store characters in res[] int i = m, j = n; while (i > 0 && j > 0) { // If current character in a[] and b are same, // then current character is part of LCS if (a[i-1] == b[j-1]) { // Put current character in result res[index-1] = a[i-1]; // reduce values of i, j and indexs i--; j--; index--; } // If not same, then find the smaller of two and // go in the direction of smaller value else if (dp[i-1][j] < dp[i][j-1]) { res[index-1] = a[i-1]; i--; index--; } else { res[index-1] = b[j-1]; j--; index--; } } // Copy remaining characters of string 'a' while (i > 0) { res[index-1] = a[i-1]; i--; index--; } // Copy remaining characters of string 'b' while (j > 0) { res[index-1] = b[j-1]; j--; index--; } // Print the result cout << res;} /* Driver program to test above function */int main(){ string a = "algorithm", b = "rhythm"; printSuperSeq(a, b); return 0;} // Java program to print supersequence of two// stringspublic class GFG_1 { String a , b; // Prints super sequence of a[0..m-1] and b[0..n-1] static void printSuperSeq(String a, String b) { int m = a.length(), n = b.length(); int[][] dp = new int[m+1][n+1]; // Fill table in bottom up manner for (int i = 0; i <= m; i++) { for (int j = 0; j <= 
n; j++) { // Below steps follow above recurrence if (i == 0) dp[i][j] = j; else if (j == 0 ) dp[i][j] = i; else if (a.charAt(i-1) == b.charAt(j-1)) dp[i][j] = 1 + dp[i-1][j-1]; else dp[i][j] = 1 + Math.min(dp[i-1][j], dp[i][j-1]); } } // Create a string of size index+1 to store the result String res = ""; // Start from the right-most-bottom-most corner and // one by one store characters in res[] int i = m, j = n; while (i > 0 && j > 0) { // If current character in a[] and b are same, // then current character is part of LCS if (a.charAt(i-1) == b.charAt(j-1)) { // Put current character in result res = a.charAt(i-1) + res; // reduce values of i, j and indexs i--; j--; } // If not same, then find the larger of two and // go in the direction of larger value else if (dp[i-1][j] < dp[i][j-1]) { res = a.charAt(i-1) + res; i--; } else { res = b.charAt(j-1) + res; j--; } } // Copy remaining characters of string 'a' while (i > 0) { res = a.charAt(i-1) + res; i--; } // Copy remaining characters of string 'b' while (j > 0) { res = b.charAt(j-1) + res; j--; } // Print the result System.out.println(res); } /* Driver program to test above function */ public static void main(String args[]) { String a = "algorithm"; String b = "rhythm"; printSuperSeq(a, b); }}// This article is contributed by Sumit Ghosh # Python3 program to print supersequence of two strings # Prints super sequence of a[0..m-1] and b[0..n-1]def printSuperSeq(a, b): m = len(a) n = len(b) dp = [[0] * (n + 1) for i in range(m + 1)] # Fill table in bottom up manner for i in range(0, m + 1): for j in range(0, n + 1): # Below steps follow above recurrence if not i: dp[i][j] = j; elif not j: dp[i][j] = i; elif (a[i - 1] == b[j - 1]): dp[i][j] = 1 + dp[i - 1][j - 1]; else: dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1]); # Following code is used to print supersequence index = dp[m][n]; # Create a string of size index+1 # to store the result res = [""] * (index) # Start from the right-most-bottom-most corner # and
one by one store characters in res[] i = m j = n; while (i > 0 and j > 0): # If current character in a[] and b are same, # then current character is part of LCS if (a[i - 1] == b[j - 1]): # Put current character in result res[index - 1] = a[i - 1]; # reduce values of i, j and indexs i -= 1 j -= 1 index -= 1 # If not same, then find the larger of two and # go in the direction of larger value elif (dp[i - 1][j] < dp[i][j - 1]): res[index - 1] = a[i - 1] i -= 1 index -= 1 else: res[index - 1] = b[j - 1] j -= 1 index -= 1 # Copy remaining characters of string 'a' while (i > 0): res[index - 1] = a[i - 1] i -= 1 index -= 1 # Copy remaining characters of string 'b' while (j > 0): res[index - 1] = b[j - 1] j -= 1 index -= 1 # Print the result print("".join(res)) # Driver Codeif __name__ == '__main__': a = "algorithm" b = "rhythm" printSuperSeq(a, b) # This code is contributed by ashutosh450 // C# program to print supersequence of two// stringsusing System;public class GFG_1 { // Prints super sequence of a[0..m-1] and b[0..n-1] static void printSuperSeq(string a, string b) { int m = a.Length, n = b.Length; int[,] dp = new int[m+1,n+1]; // Fill table in bottom up manner for (int i = 0; i <= m; i++) { for (int j = 0; j <= n; j++) { // Below steps follow above recurrence if (i == 0) dp[i,j] = j; else if (j == 0 ) dp[i,j] = i; else if (a[i-1] == b[j-1]) dp[i,j] = 1 + dp[i-1,j-1]; else dp[i,j] = 1 + Math.Min(dp[i-1,j], dp[i,j-1]); } } // Create a string of size index+1 to store the result string res = ""; // Start from the right-most-bottom-most corner and // one by one store characters in res[] int k = m, l = n; while (k > 0 && l > 0) { // If current character in a[] and b are same, // then current character is part of LCS if (a[k-1] == b[l-1]) { // Put current character in result res = a[k-1] + res; // reduce values of i, j and indexs k--; l--; } // If not same, then find the larger of two and // go in the direction of larger value else if (dp[k-1,l] < dp[k,l-1]) { res =
a[k-1] + res; k--; } else { res = b[l-1] + res; l--; } } // Copy remaining characters of string 'a' while (k > 0) { res = a[k-1] + res; k--; } // Copy remaining characters of string 'b' while (l > 0) { res = b[l-1] + res; l--; } // Print the result Console.WriteLine(res); } /* Driver program to test above function */ public static void Main() { string a = "algorithm"; string b = "rhythm"; printSuperSeq(a, b); }}// This article is contributed by Ita_c. <script>// Javascript program to print supersequence of two// strings // Prints super sequence of a[0..m-1] and b[0..n-1]function printSuperSeq(a,b){ let m = a.length, n = b.length; let dp = new Array(m+1); for(let i=0;i<m+1;i++) dp[i]=new Array(n+1); // Fill table in bottom up manner for (let i = 0; i <= m; i++) { for (let j = 0; j <= n; j++) { // Below steps follow above recurrence if (i == 0) dp[i][j] = j; else if (j == 0 ) dp[i][j] = i; else if (a[i-1] == b[j-1]) dp[i][j] = 1 + dp[i-1][j-1]; else dp[i][j] = 1 + Math.min(dp[i-1][j], dp[i][j-1]); } } // Create a string of size index+1 to store the result let res = ""; // Start from the right-most-bottom-most corner and // one by one store characters in res[] let i = m, j = n; while (i > 0 && j > 0) { // If current character in a[] and b are same, // then current character is part of LCS if (a[i-1] == b[j-1]) { // Put current character in result res = a[i-1] + res; // reduce values of i, j and indexs i--; j--; } // If not same, then find the larger of two and // go in the direction of larger value else if (dp[i-1][j] < dp[i][j-1]) { res = a[i-1] + res; i--; } else { res = b[j-1] + res; j--; } } // Copy remaining characters of string 'a' while (i > 0) { res = a[i-1] + res; i--; } // Copy remaining characters of string 'b' while (j > 0) { res = b[j-1] + res; j--; } // Print the result document.write(res);} /* Driver program to test above function */let a = "algorithm";let b = "rhythm";printSuperSeq(a, b); // This code is contributed by ab2127</script> Output: 
algorihythm Solution based on LCS: We build the 2D array using LCS solution. If the character at the two pointer positions is equal, we increment the length by 1, else we store the maximum of the adjacent positions. Finally, we backtrack the matrix to find the index vector traversing which would yield the shortest possible combination. C++ Java Python3 C# Javascript // C++ implementation to find shortest string for// a combination of two strings#include <bits/stdc++.h>using namespace std; // Vector that store the index of string a and bvector<int> index_a;vector<int> index_b; // Subroutine to Backtrack the dp matrix to// find the index vector traversing which would// yield the shortest possible combinationvoid index(int dp[][100], string a, string b, int size_a, int size_b){ // Clear the index vectors index_a.clear(); index_b.clear(); // Return if either of a or b is reduced // to 0 if (size_a == 0 || size_b == 0) return; // Push both to index_a and index_b with // the respective a and b index if (a[size_a - 1] == b[size_b - 1]) { index(dp, a, b, size_a - 1, size_b - 1); index_a.push_back(size_a - 1); index_b.push_back(size_b - 1); } else { if (dp[size_a - 1][size_b] > dp[size_a] [size_b - 1]) { index(dp, a, b, size_a - 1, size_b); } else { index(dp, a, b, size_a, size_b - 1); } }} // function to combine the strings to form// the shortest stringvoid combine(string a, string b, int size_a, int size_b){ int dp[100][100]; string ans = ""; int k = 0; // Initialize the matrix to 0 memset(dp, 0, sizeof(dp)); // Store the increment of diagonally // previous value if a[i-1] and b[j-1] are // equal, else store the max of dp[i][j-1] // and dp[i-1][j] for (int i = 1; i <= size_a; i++) { for (int j = 1; j <= size_b; j++) { if (a[i - 1] == b[j - 1]) { dp[i][j] = dp[i - 1][j - 1] + 1; } else { dp[i][j] = max(dp[i][j - 1], dp[i - 1][j]); } } } // Get the Lowest Common Subsequence int lcs = dp[size_a][size_b]; // Backtrack the dp array to get the index // vectors of two strings, 
used to find // the shortest possible combination. index(dp, a, b, size_a, size_b); int i, j = i = k; // Build the string combination using the // index found by backtracking while (k < lcs) { while (i < size_a && i < index_a[k]) { ans += a[i++]; } while (j < size_b && j < index_b[k]) { ans += b[j++]; } ans = ans + a[index_a[k]]; k++; i++; j++; } // Append the remaining characters in a // to answer while (i < size_a) { ans += a[i++]; } // Append the remaining characters in b // to answer while (j < size_b) { ans += b[j++]; } cout << ans;} // Driver codeint main(){ string a = "algorithm"; string b = "rhythm"; // Store the length of string int size_a = a.size(); int size_b = b.size(); combine(a, b, size_a, size_b); return 0;} // Java implementation to find shortest string for// a combination of two stringsimport java.util.ArrayList;public class GFG_2 { // Vector that store the index of string a and b static ArrayList<Integer> index_a = new ArrayList<>(); static ArrayList<Integer> index_b = new ArrayList<>(); // Subroutine to Backtrack the dp matrix to // find the index vector traversing which would // yield the shortest possible combination static void index(int dp[][], String a, String b, int size_a, int size_b) { // Clear the index vectors index_a.clear(); index_b.clear(); // Return if either of a or b is reduced // to 0 if (size_a == 0 || size_b == 0) return; // Push both to index_a and index_b with // the respective a and b index if (a.charAt(size_a - 1) == b.charAt(size_b - 1)) { index(dp, a, b, size_a - 1, size_b - 1); index_a.add(size_a - 1); index_b.add(size_b - 1); } else { if (dp[size_a - 1][size_b] > dp[size_a] [size_b - 1]) { index(dp, a, b, size_a - 1, size_b); } else { index(dp, a, b, size_a, size_b - 1); } } } // function to combine the strings to form // the shortest string static void combine(String a, String b, int size_a, int size_b) { int[][] dp = new int[100][100]; String ans = ""; int k = 0; // Store the increment of diagonally // previous value 
if a[i-1] and b[j-1] are // equal, else store the max of dp[i][j-1] // and dp[i-1][j] for (int i = 1; i <= size_a; i++) { for (int j = 1; j <= size_b; j++) { if (a.charAt(i - 1) == b.charAt(j - 1)) { dp[i][j] = dp[i - 1][j - 1] + 1; } else { dp[i][j] = Math.max(dp[i][j - 1], dp[i - 1][j]); } } } // Get the Lowest Common Subsequence int lcs = dp[size_a][size_b]; // Backtrack the dp array to get the index // vectors of two strings, used to find // the shortest possible combination. index(dp, a, b, size_a, size_b); int i, j = i = k; // Build the string combination using the // index found by backtracking while (k < lcs) { while (i < size_a && i < index_a.get(k)) { ans += a.charAt(i++); } while (j < size_b && j < index_b.get(k)) { ans += b.charAt(j++); } ans = ans + a.charAt(index_a.get(k)); k++; i++; j++; } // Append the remaining characters in a // to answer while (i < size_a) { ans += a.charAt(i++); } // Append the remaining characters in b // to answer while (j < size_b) { ans += b.charAt(j++); } System.out.println(ans); } /* Driver program to test above function */ public static void main(String args[]) { String a = "algorithm"; String b = "rhythm"; combine(a, b, a.length(),b.length()); }}// This article is contributed by Sumit Ghosh # Python implementation to find shortest string for# a combination of two stringsindex_a = []index_b = [] def index(dp, a, b, size_a, size_b): if (size_a == 0 or size_b == 0): return if (a[size_a - 1] == b[size_b - 1]): index(dp, a, b, size_a - 1, size_b - 1) index_a.append(size_a - 1) index_b.append(size_b - 1) else: if(dp[size_a - 1][size_b] > dp[size_a][size_b - 1]): index(dp, a, b, size_a - 1, size_b) else: index(dp, a, b, size_a, size_b - 1) def combine(a, b, size_a, size_b): dp = [[0 for i in range(100)] for j in range(100)] ans = "" k = 0 for i in range(1, size_a + 1): for j in range(1, size_b + 1): if(a[i - 1] == b[j - 1]): dp[i][j] = dp[i - 1][j - 1] + 1 else: dp[i][j] = max(dp[i][j - 1], dp[i - 1][j]) lcs = 
# ... continuation of the Python combine(a, b, size_a, size_b) function ...

    # Get the Longest Common Subsequence
    lcs = dp[size_a][size_b]

    # Backtrack the dp matrix to get the index vectors
    index(dp, a, b, size_a, size_b)

    j = i = k

    # Build the string combination using the
    # index found by backtracking
    while (k < lcs):
        while (i < size_a and i < index_a[k]):
            ans += a[i]
            i += 1
        while (j < size_b and j < index_b[k]):
            ans += b[j]
            j += 1
        ans = ans + a[index_a[k]]
        k += 1
        i += 1
        j += 1

    # Append the remaining characters in a to answer
    while (i < size_a):
        ans += a[i]
        i += 1

    # Append the remaining characters in b to answer
    while (j < size_b):
        ans += b[j]
        j += 1
    print(ans)

# Driver code
a = "algorithm"
b = "rhythm"
size_a = len(a)
size_b = len(b)
combine(a, b, size_a, size_b)

# This code is contributed by avanitrachhadiya2155

// C# implementation to find shortest string for
// a combination of two strings
using System;
using System.Collections.Generic;

class GFG{

    // Vectors that store the indices of string a and b
    static List<int> index_a = new List<int>();
    static List<int> index_b = new List<int>();

    // Subroutine to backtrack the dp matrix to
    // find the index vector traversing which would
    // yield the shortest possible combination
    static void index(int [,]dp, String a, String b,
                      int size_a, int size_b)
    {
        // Clear the index vectors
        index_a.Clear();
        index_b.Clear();

        // Return if either of a or b is reduced to 0
        if (size_a == 0 || size_b == 0)
            return;

        // Push both to index_a and index_b with
        // the respective a and b index
        if (a[size_a - 1] == b[size_b - 1])
        {
            index(dp, a, b, size_a - 1, size_b - 1);
            index_a.Add(size_a - 1);
            index_b.Add(size_b - 1);
        }
        else
        {
            if (dp[size_a - 1, size_b] > dp[size_a, size_b - 1])
            {
                index(dp, a, b, size_a - 1, size_b);
            }
            else
            {
                index(dp, a, b, size_a, size_b - 1);
            }
        }
    }

    // Function to combine the strings to form
    // the shortest string
    static void combine(String a, String b, int size_a, int size_b)
    {
        int[,] dp = new int[100, 100];
        String ans = "";
        int k = 0, i, j;

        // Store the increment of the diagonally
        // previous value if a[i-1] and b[j-1] are
        // equal, else store the max of dp[i,j-1]
        // and dp[i-1,j]
        for (i = 1; i <= size_a; i++)
        {
            for (j = 1; j <= size_b; j++)
            {
                if (a[i - 1] == b[j - 1])
                {
                    dp[i, j] = dp[i - 1, j - 1] + 1;
                }
                else
                {
                    dp[i, j] = Math.Max(dp[i, j - 1], dp[i - 1, j]);
                }
            }
        }

        // Get the Longest Common Subsequence
        int lcs = dp[size_a, size_b];

        // Backtrack the dp array to get the index
        // vectors of the two strings, used to find
        // the shortest possible combination.
        index(dp, a, b, size_a, size_b);
        i = j = k;

        // Build the string combination using the
        // index found by backtracking
        while (k < lcs)
        {
            while (i < size_a && i < index_a[k])
            {
                ans += a[i++];
            }
            while (j < size_b && j < index_b[k])
            {
                ans += b[j++];
            }
            ans = ans + a[index_a[k]];
            k++;
            i++;
            j++;
        }

        // Append the remaining characters in a to answer
        while (i < size_a)
        {
            ans += a[i++];
        }

        // Append the remaining characters in b to answer
        while (j < size_b)
        {
            ans += b[j++];
        }
        Console.WriteLine(ans);
    }

    // Driver Code
    public static void Main(String []args)
    {
        String a = "algorithm";
        String b = "rhythm";
        combine(a, b, a.Length, b.Length);
    }
}

// This code is contributed by Princi Singh

<script>
// JavaScript implementation to find shortest string for
// a combination of two strings

// Arrays that store the indices of string a and b
let index_a = [];
let index_b = [];

// Subroutine to backtrack the dp matrix to
// find the index vector traversing which would
// yield the shortest possible combination
function index(dp, a, b, size_a, size_b)
{
    // Clear the index vectors
    index_a = [];
    index_b = [];

    // Return if either of a or b is reduced to 0
    if (size_a == 0 || size_b == 0)
        return;

    // Push both to index_a and index_b with
    // the respective a and b index
    if (a[size_a - 1] == b[size_b - 1])
    {
        index(dp, a, b, size_a - 1, size_b - 1);
        index_a.push(size_a - 1);
        index_b.push(size_b - 1);
    }
    else
    {
        if (dp[size_a - 1][size_b] > dp[size_a][size_b - 1])
        {
            index(dp, a, b, size_a - 1, size_b);
        }
        else
        {
            index(dp, a, b, size_a, size_b - 1);
        }
    }
}

// Function to combine the strings to form
// the shortest string
function combine(a, b, size_a, size_b)
{
    let dp = new Array(100);
    for (let i = 0; i < 100; i++)
    {
        dp[i] = new Array(100);
        for (let j = 0; j < 100; j++)
        {
            dp[i][j] = 0;
        }
    }
    let ans = "";
    let k = 0;

    // Store the increment of the diagonally
    // previous value if a[i-1] and b[j-1] are
    // equal, else store the max of dp[i][j-1]
    // and dp[i-1][j]
    for (let i = 1; i <= size_a; i++)
    {
        for (let j = 1; j <= size_b; j++)
        {
            if (a[i - 1] == b[j - 1])
            {
                dp[i][j] = dp[i - 1][j - 1] + 1;
            }
            else
            {
                dp[i][j] = Math.max(dp[i][j - 1], dp[i - 1][j]);
            }
        }
    }

    // Get the Longest Common Subsequence
    let lcs = dp[size_a][size_b];

    // Backtrack the dp array to get the index
    // vectors of the two strings, used to find
    // the shortest possible combination.
    index(dp, a, b, size_a, size_b);
    let i = k, j = k;

    // Build the string combination using the
    // index found by backtracking
    while (k < lcs)
    {
        while (i < size_a && i < index_a[k])
        {
            ans += a[i++];
        }
        while (j < size_b && j < index_b[k])
        {
            ans += b[j++];
        }
        ans = ans + a[index_a[k]];
        k++;
        i++;
        j++;
    }

    // Append the remaining characters in a to answer
    while (i < size_a)
    {
        ans += a[i++];
    }

    // Append the remaining characters in b to answer
    while (j < size_b)
    {
        ans += b[j++];
    }
    document.write(ans + "<br>");
}

/* Driver program to test above function */
let a = "algorithm";
let b = "rhythm";
combine(a, b, a.length, b.length);

// This code is contributed by patel2127
</script>

Output:

algorihythm

This article is contributed by Raghav Jajodia.
How to Multiply Two Matrices using Python?
Multiplication of two matrices is possible only when the number of columns in the first matrix equals the number of rows in the second matrix. Multiplication can be done using nested loops. The following program has two matrices X and Y, each with 3 rows and 3 columns; the resultant matrix result will also have a 3x3 structure. Each element of a row of the first matrix is multiplied by the corresponding element in a column of the second matrix.

X = [[1,2,3],
     [4,5,6],
     [7,8,9]]

Y = [[10,11,12],
     [13,14,15],
     [16,17,18]]

result = [[0,0,0],
          [0,0,0],
          [0,0,0]]

# iterate through rows of X
for i in range(len(X)):
    # iterate through columns of Y
    for j in range(len(Y[0])):
        # accumulate the dot product of row i of X and column j of Y
        for k in range(len(Y)):
            result[i][j] += X[i][k] * Y[k][j]

for r in result:
    print(r)

The result:

[84, 90, 96]
[201, 216, 231]
[318, 342, 366]
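The triple loop above can also be written more compactly in plain Python. This is just an alternative formulation of the same computation: zip(*Y) transposes Y so that each column becomes a row.

```python
X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Y = [[10, 11, 12], [13, 14, 15], [16, 17, 18]]

# For every row of X, take the dot product with every column of Y
# (zip(*Y) yields the columns of Y as tuples).
result = [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
          for row in X]

for r in result:
    print(r)  # [84, 90, 96] / [201, 216, 231] / [318, 342, 366]
```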
Select last day of current year in MySQL?
In order to get the last day of the current year, you can use LAST_DAY() from MySQL. The syntax is as follows −

SELECT LAST_DAY(DATE_ADD(CURDATE(), INTERVAL 12-MONTH(CURDATE()) MONTH));

Let us implement the above syntax to get the last day of the current year −

mysql> SELECT LAST_DAY(DATE_ADD(CURDATE(), INTERVAL 12-MONTH(CURDATE()) MONTH));

This will produce the following output −

+--------------------------------------------------------------------+
| LAST_DAY(DATE_ADD(CURDATE(), INTERVAL 12-MONTH(CURDATE()) MONTH))  |
+--------------------------------------------------------------------+
| 2019-12-31                                                         |
+--------------------------------------------------------------------+
1 row in set (0.00 sec)
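As an aside (not part of the original MySQL tutorial), the value this query returns — December 31 of whatever year today falls in — can be cross-checked with a few lines of Python:

```python
from datetime import date

# The last day of the current year is always December 31
# of the year that today() falls in.
last_day = date(date.today().year, 12, 31)
print(last_day)  # e.g. 2019-12-31 when run in 2019
```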
Spring JDBC - ResultSetExtractor Interface
The org.springframework.jdbc.core.ResultSetExtractor interface is a callback interface used by JdbcTemplate's query methods. Implementations of this interface perform the actual work of extracting results from a ResultSet, but don't need to worry about exception handling. SQLExceptions will be caught and handled by the calling JdbcTemplate. This interface is mainly used within the JDBC framework itself. A RowMapper is usually a simpler choice for ResultSet processing, mapping one result object per row instead of one result object for the entire ResultSet.

Following is the declaration for the org.springframework.jdbc.core.ResultSetExtractor interface −

public interface ResultSetExtractor

Step 1 − Create a JdbcTemplate object using a configured datasource.

Step 2 − Use JdbcTemplate object methods to make database operations while parsing the resultset using ResultSetExtractor.

The following example will demonstrate how to read a query using the JdbcTemplate class and the ResultSetExtractor interface. We'll read the available records of a student in the Student table.

public List<Student> listStudents() {
   String SQL = "select * from Student";
   List<Student> students = jdbcTemplateObject.query(
      SQL, new ResultSetExtractor<List<Student>>(){
         public List<Student> extractData(ResultSet rs)
            throws SQLException, DataAccessException {
            List<Student> list = new ArrayList<Student>();
            while(rs.next()){
               Student student = new Student();
               student.setId(rs.getInt("id"));
               student.setName(rs.getString("name"));
               student.setAge(rs.getInt("age"));
               student.setDescription(rs.getString("description"));
               student.setImage(rs.getBytes("image"));
               list.add(student);
            }
            return list;
         }
      }
   );
   return students;
}

Where,

SQL − Select query to read students.
jdbcTemplateObject − StudentJDBCTemplate object to read the student object from the database.

ResultSetExtractor − ResultSetExtractor object to parse the resultset object.

To understand the above-mentioned concepts related to Spring JDBC, let us write an example which will select a query. To write our example, let us have a working Eclipse IDE in place and use the following steps to create a Spring application.

Following is the content of the Data Access Object interface file StudentDAO.java.

package com.tutorialspoint;

import java.util.List;
import javax.sql.DataSource;

public interface StudentDAO {
   /**
    * This is the method to be used to initialize
    * database resources ie. connection.
    */
   public void setDataSource(DataSource ds);

   /**
    * This is the method to be used to list down
    * all the records from the Student table.
    */
   public List<Student> listStudents();
}

Following is the content of the Student.java file.

package com.tutorialspoint;

public class Student {
   private Integer age;
   private String name;
   private Integer id;

   public void setAge(Integer age) {
      this.age = age;
   }
   public Integer getAge() {
      return age;
   }
   public void setName(String name) {
      this.name = name;
   }
   public String getName() {
      return name;
   }
   public void setId(Integer id) {
      this.id = id;
   }
   public Integer getId() {
      return id;
   }
}

Following is the implementation class file StudentJDBCTemplate.java for the defined DAO interface StudentDAO.
package com.tutorialspoint;

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import javax.sql.DataSource;

import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;

public class StudentJDBCTemplate implements StudentDAO {
   private DataSource dataSource;
   private JdbcTemplate jdbcTemplateObject;

   public void setDataSource(DataSource dataSource) {
      this.dataSource = dataSource;
      this.jdbcTemplateObject = new JdbcTemplate(dataSource);
   }
   public List<Student> listStudents() {
      String SQL = "select * from Student";
      List<Student> students = jdbcTemplateObject.query(SQL,
         new ResultSetExtractor<List<Student>>(){
            public List<Student> extractData(ResultSet rs)
               throws SQLException, DataAccessException {
               List<Student> list = new ArrayList<Student>();
               while(rs.next()){
                  Student student = new Student();
                  student.setId(rs.getInt("id"));
                  student.setName(rs.getString("name"));
                  student.setAge(rs.getInt("age"));
                  student.setDescription(rs.getString("description"));
                  student.setImage(rs.getBytes("image"));
                  list.add(student);
               }
               return list;
            }
         });
      return students;
   }
}

Following is the content of the MainApp.java file.

package com.tutorialspoint;

import java.util.List;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainApp {
   public static void main(String[] args) {
      ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");
      StudentJDBCTemplate studentJDBCTemplate =
         (StudentJDBCTemplate)context.getBean("studentJDBCTemplate");
      List<Student> students = studentJDBCTemplate.listStudents();

      for(Student student: students){
         System.out.print("ID : " + student.getId());
         System.out.println(", Age : " + student.getAge());
      }
   }
}

Following is the configuration file Beans.xml.
<?xml version = "1.0" encoding = "UTF-8"?>
<beans xmlns = "http://www.springframework.org/schema/beans"
   xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation = "http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd ">

   <!-- Initialization for data source -->
   <bean id = "dataSource"
      class = "org.springframework.jdbc.datasource.DriverManagerDataSource">
      <property name = "driverClassName" value = "com.mysql.cj.jdbc.Driver"/>
      <property name = "url" value = "jdbc:mysql://localhost:3306/TEST"/>
      <property name = "username" value = "root"/>
      <property name = "password" value = "admin"/>
   </bean>

   <!-- Definition for studentJDBCTemplate bean -->
   <bean id = "studentJDBCTemplate"
      class = "com.tutorialspoint.StudentJDBCTemplate">
      <property name = "dataSource" ref = "dataSource" />
   </bean>
</beans>

Once you are done creating the source and bean configuration files, let us run the application. If everything is fine with your application, it will print the following message.

ID : 1, Age : 17
ID : 3, Age : 18
Python Program for 0-1 Knapsack Problem
In this article, we will learn about the solution to the problem statement given below.

Problem statement − We are given the weights and values of n items, and we need to put these items in a bag of capacity W. We need to select the subset of items with the maximum total value that fits in the bag and return that value.

Now let's observe the solution in the implementations below −

# Brute-force approach

# Returns the maximum value that can be stored by the bag
def knapSack(W, wt, val, n):
   # initial conditions
   if n == 0 or W == 0:
      return 0
   # If weight is higher than capacity then it is not included
   if (wt[n-1] > W):
      return knapSack(W, wt, val, n-1)
   # return the max of the nth item being included or not
   else:
      return max(val[n-1] + knapSack(W-wt[n-1], wt, val, n-1),
                 knapSack(W, wt, val, n-1))

# To test above function
val = [50,100,150,200]
wt = [8,16,32,40]
W = 64
n = len(val)
print(knapSack(W, wt, val, n))

Output: 350

# Dynamic-programming approach

# Returns the maximum value that can be stored by the bag
def knapSack(W, wt, val, n):
   K = [[0 for x in range(W + 1)] for x in range(n + 1)]
   # Build the table in bottom-up manner
   for i in range(n + 1):
      for w in range(W + 1):
         if i == 0 or w == 0:
            K[i][w] = 0
         elif wt[i-1] <= w:
            K[i][w] = max(val[i-1] + K[i-1][w-wt[i-1]], K[i-1][w])
         else:
            K[i][w] = K[i-1][w]
   return K[n][W]

# Main
val = [50,100,150,200]
wt = [8,16,32,40]
W = 64
n = len(val)
print(knapSack(W, wt, val, n))

Output: 350

All the variables are declared in the local scope. In this article, we have learned how to write a Python program for the 0-1 knapsack problem.
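Bridging the two versions above, the brute-force recursion can also be memoized so that repeated (capacity, item) states are computed only once, giving the same O(nW) behaviour as the table version. A sketch using functools.lru_cache (the function name is my own):

```python
from functools import lru_cache

def knapsack_memo(W, wt, val):
    n = len(val)

    @lru_cache(maxsize=None)
    def best(w, i):
        # Best value using items[0..i-1] with remaining capacity w
        if i == 0 or w == 0:
            return 0
        if wt[i - 1] > w:                      # item i doesn't fit
            return best(w, i - 1)
        return max(val[i - 1] + best(w - wt[i - 1], i - 1),  # take it
                   best(w, i - 1))                           # skip it

    return best(W, n)

val = [50, 100, 150, 200]
wt = [8, 16, 32, 40]
print(knapsack_memo(64, wt, val))  # 350, matching both versions above
```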
How to find the length of an array in JavaScript?
To find the length of an array, use the JavaScript length property. The JavaScript array length property returns an unsigned, 32-bit integer that specifies the number of elements in an array. You can try to run the following code to find the length of an array −

<html>
   <head>
      <title>JavaScript Array length Property</title>
   </head>
   <body>
      <script>
         var arr = new Array( 50, 60, 70, 80 );
         document.write("arr.length is : " + arr.length);
      </script>
   </body>
</html>

arr.length is : 4
SIP - Messaging
SIP messages are of two types − requests and responses.

The opening line of a request contains a method that defines the request, and a Request-URI that defines where the request is to be sent. Similarly, the opening line of a response contains a response code.

SIP requests are the codes used to establish a communication. To complement them, there are SIP responses that generally indicate whether a request succeeded or failed. These SIP requests, which are known as METHODS, make SIP messages workable. METHODS can be regarded as SIP requests, since they request a specific action to be taken by another user agent or server.

METHODS are distinguished into two types −

Core Methods
Extension Methods

There are six core methods as discussed below.

INVITE is used to initiate a session with a user agent. In other words, an INVITE method is used to establish a media session between the user agents. INVITE can contain the media information of the caller in the message body. A session is considered established if an INVITE has received a success response (2xx) or an ACK has been sent. A successful INVITE request establishes a dialog between the two user agents which continues until a BYE is sent to terminate the session.
An INVITE sent within an established dialog is known as a re-INVITE. Re-INVITE is used to change the session characteristics or refresh the state of a dialog.

The following code shows how INVITE is used.

INVITE sips:Bob@TMC.com SIP/2.0
Via: SIP/2.0/TLS client.ANC.com:5061;branch = z9hG4bK74bf9
Max-Forwards: 70
From: Alice<sips:Alice@TTP.com>;tag = 1234567
To: Bob<sips:Bob@TMC.com>
Call-ID: 12345601@192.168.2.1
CSeq: 1 INVITE
Contact: <sips:Alice@client.ANC.com>
Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, NOTIFY
Supported: replaces
Content-Type: application/sdp
Content-Length: ...

v = 0
o = Alice 2890844526 2890844526 IN IP4 client.ANC.com
s = Session SDP
c = IN IP4 client.ANC.com
t = 3034423619 0
m = audio 49170 RTP/AVP 0
a = rtpmap:0 PCMU/8000

BYE is the method used to terminate an established session. This is a SIP request that can be sent by either the caller or the callee to end a session. It cannot be sent by a proxy server. A BYE request normally routes end to end, bypassing the proxy server. BYE cannot be sent for a pending INVITE or an unestablished session.

REGISTER request performs the registration of a user agent. This request is sent by a user agent to a registrar server. The REGISTER request may be forwarded or proxied until it reaches an authoritative registrar of the specified domain. It carries the AOR (Address of Record) in the To header of the user that is being registered.
A REGISTER request contains the registration time period (e.g., 3600 sec). One user agent can send a REGISTER request on behalf of another user agent. This is known as third-party registration. Here, the From tag contains the URI of the party submitting the registration on behalf of the party identified in the To header.

CANCEL is used to terminate a session which is not yet established. User agents use this request to cancel a pending call attempt initiated earlier. It can be sent either by a user agent or a proxy server. CANCEL is a hop-by-hop request, i.e., it goes through the elements between the user agent and receives the response generated by the next stateful element.

ACK is used to acknowledge the final responses to an INVITE method. An ACK always goes in the direction of INVITE. ACK may contain an SDP body (media characteristics) if it is not available in INVITE. ACK may not be used to modify the media description that has already been sent in the initial INVITE. A stateful proxy receiving an ACK must determine whether or not the ACK should be forwarded downstream to another proxy or user agent. For 2xx responses, ACK is end to end, but for all other final responses, it works on a hop-by-hop basis when stateful proxies are involved.
The OPTIONS method is used to query a user agent or a proxy server about its capabilities and discover its current availability. The response to the request lists the capabilities of the user agent or server. A proxy never generates an OPTIONS request.

SUBSCRIBE is used by user agents to establish a subscription for the purpose of getting notification about a particular event. It contains an Expires header field that indicates the duration of the subscription. After the time period passes, the subscription automatically terminates. A subscription establishes a dialog between the user agents. You can re-subscribe by sending another SUBSCRIBE within the dialog before the expiration time. A 200 OK will be received for a subscription from the user. Users can unsubscribe by sending another SUBSCRIBE method with an Expires value of 0 (zero).

NOTIFY is used by user agents to notify the occurrence of a particular event. Usually a NOTIFY is triggered within a dialog when a subscription exists between the subscriber and the notifier. Every NOTIFY will get a 200 OK response if it is received by the notifier. NOTIFY contains an Event header field indicating the event and a Subscription-State header field indicating the current state of the subscription.
A NOTIFY is always sent at the start and termination of a subscription.

PUBLISH is used by a user agent to send event state information to a server. PUBLISH is mostly useful when there are multiple sources of event information. A PUBLISH request is similar to a NOTIFY, except that it is not sent in a dialog. A PUBLISH request must contain an Expires header field and a Min-Expires header field.

REFER is used by a user agent to refer another user agent to access a URI for the dialog. REFER must contain a Refer-To header; this is a mandatory header for REFER. REFER can be sent inside or outside a dialog. A 202 Accepted response to a REFER request indicates that the other user agent has accepted the reference.

INFO is used by a user agent to send call signalling information to another user agent with which it has established a media session. This is an end-to-end request. A proxy will always forward an INFO request.

UPDATE is used to modify the state of a session if the session is not yet established. The user could change the codec with UPDATE. If a session is established, a re-INVITE is used to change/update the session.

PRACK is used to acknowledge the receipt of a reliable transfer of a provisional response (1xx).
Generally, a PRACK is generated by a client when it receives a provisional response containing an RSeq reliable sequence number and a Supported: 100rel header. PRACK contains the (RSeq + CSeq) value in the RAck header. The PRACK method applies to all provisional responses except the 100 Trying response, which is never reliably transported. A PRACK may contain a message body; it may be used for offer/answer exchange.

MESSAGE is used to send an instant message using SIP. An IM usually consists of short messages exchanged in real time by participants engaged in a text conversation. MESSAGE can be sent within a dialog or outside a dialog. The contents of a MESSAGE are carried in the message body as a MIME attachment. A 200 OK response is normally received to indicate that the message has been delivered at its destination.
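As an illustration only (not from the original tutorial), the request/response distinction described at the top — a request's opening line starts with a method and Request-URI, a response's with the SIP version and a status code — can be sketched in a few lines of Python. classify_start_line is a made-up helper, not an RFC 3261 parser:

```python
def classify_start_line(line: str):
    """Classify a SIP start line as a request or a response.

    Requests:  METHOD Request-URI SIP-Version
    Responses: SIP-Version Status-Code Reason-Phrase
    Simplified sketch for illustration only.
    """
    parts = line.split(" ", 2)
    if parts[0].startswith("SIP/"):
        return ("response", int(parts[1]))   # e.g. 200
    return ("request", parts[0])             # e.g. INVITE

print(classify_start_line("INVITE sips:Bob@TMC.com SIP/2.0"))
print(classify_start_line("SIP/2.0 200 OK"))
```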
A simple intro to Regex with Python | by Tirthajyoti Sarkar | Towards Data Science
Text mining is a hot topic in data science these days. The volume, variety, and complexity of textual data are increasing at an astounding pace. As per this article, the global text analytics market was valued at USD 5.46 billion in 2019 and is expected to reach a value of USD 14.84 billion by 2025.

Regular expressions are used to identify whether a pattern exists in a given sequence of characters (a string) and to locate the position of the pattern in a corpus of text. They help in manipulating textual data, which is often a prerequisite for data science projects that involve text analytics. It is therefore important for budding data scientists to have a preliminary knowledge of this powerful tool for future projects and analysis tasks.

In Python, there is a built-in module called re, which needs to be imported for working with regex.

import re

This is the starting point of the official documentation page. In this short review, we will go through the basics of regex usage in simple text processing, with some practical examples in Python.

We use the match method to check if a pattern matches a string/sequence. It is case-sensitive. Instead of repeating the code, we can use compile to create a regex program and use its built-in methods. Compiled programs return special objects, e.g. match objects. But if they don't match, they return None, which means we can still run our conditional loop!

We can easily use additional parameters of the match object to check for positional matching of a string pattern. Above, we notice that once we have created a program prog with the pattern thon, we can use it any number of times with various strings. Also, note that the pos argument is used to indicate where the matching should start. For the last two code snippets, we change the starting position and get different results in terms of the match, although the string is identical. Let's see a use case.
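Before the use case, here is a minimal sketch of match, compile, and pos as described above (the example strings are illustrative, not the article's original snippets):

```python
import re

# match checks for a pattern at the given start position; it is case-sensitive
print(re.match("Python", "Python is fun"))   # a truthy Match object
print(re.match("python", "Python is fun"))   # None -> usable in a conditional

# compile builds a reusable regex program; pos shifts where matching starts
prog = re.compile("thon")
m = prog.match("Python", 2)                  # "thon" starts at index 2
if m is not None:
    print(m.span())                          # (2, 6)
```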
We want to find out how many words in a list end with the three letters 'ing'. For solving the problem above, we could have used a simple string method. So what's so powerful about regex? The answer is that it can match very complex patterns. But to see such advanced examples, let's first explore the search method.

Note how the match method returns None (because we did not specify the proper starting position of the pattern in the text), while the search method finds the position of the match (by scanning through the text). Naturally, we can use the span() method of the match object, returned by search, to locate the position of the matched pattern.

The search method is powerful, but it is limited to finding the first occurring match in the text. To discover all the matches in a long text, we can use the findall and finditer methods. The findall method returns a list with the matching patterns. You can count the number of items to understand the frequency of the searched term in the text. The finditer method produces an iterator; we can use this to see more information, as shown below.

Now, we gently enter the arena where regex shines. The most common use of regex is related to 'wildcard matching' or 'fuzzy matching'. This is where you don't have the full pattern but only a portion of it, and you still want to find where something similar appears in a given text. Here are various examples. Here we will also apply the group() method on the object returned by search to essentially return the matched string.

The dot . matches any single character except the newline character. The repertoire needs to be expanded with other tools: \w (lowercase w) matches letters, digits, and the underscore. There are symbols other than letters, digits, and underscore; we use \W (uppercase w) to catch them. \s (lowercase s) matches a single whitespace character like space, newline, tab, or return. Naturally, this is used to search for a pattern with whitespace inside it, e.g. a pair of words. Here is an example.
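A sketch of the search, findall, finditer, and \s ideas just described (the word list and sentence are illustrative):

```python
import re

# Words whose last three letters are 'ing'
words = ["playing", "played", "spring", "ring", "book"]
prog = re.compile(r"ing$")
ing_words = [w for w in words if prog.search(w)]
print(ing_words)                         # ['playing', 'spring', 'ring']

# search scans the whole text and reports the first match's position
text = "Python and regex: regex is everywhere, regex!"
print(re.search("regex", text).span())   # (11, 16)

# findall returns every occurrence; finditer yields match objects
print(re.findall("regex", text))
for m in re.finditer("regex", text):
    print(m.start(), m.end())

# \s inside a pattern: match a pair of words separated by whitespace
print(re.search(r"\w+\s\w+", "regex is powerful").group())  # 'regex is'
```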
And here is an example of a practical application. Suppose we have a text describing the scores of some students in a test. Scores can range from 10–99, i.e. two digits. One of the scores is typed wrongly as a 3-digit number (Romie got 72 but it was typed as 721). The following simple code snippet catches it using \d wildcard matching.

The ^ (caret) matches a pattern at the beginning of a string (but not anywhere else). The $ (dollar sign) matches a pattern at the end of the string. Following is a practical example where we are only interested in pulling out the patent information of Apple and discarding other companies. We check the end of the text for 'Apple' and, only if it matches, pull out the patent number using the numerical-digit matching code we showed earlier.

Now, we can move on to more complex wildcard matching with multiple characters, which allows us much more power and flexibility.

* matches 0 or more repetitions of the preceding regular expression.
+ causes the resulting RE to match 1 or more repetitions of the preceding RE.
? causes the resulting RE to match precisely 0 or 1 repetitions of the preceding RE.
{m} specifies exactly m copies of the RE to match. Fewer matches cause a non-match and return None.
{m,n} specifies exactly m to n copies of the RE to match. Omitting m specifies a lower bound of zero, and omitting n specifies an infinite upper bound.
{m,n}? specifies m to n copies of the RE to match in a non-greedy fashion.
[xyz] matches x, y, or z.

A range of characters can be matched inside a set. This is one of the most widely used regex techniques. We denote a range by using a - (hyphen). For example, a-z or A-Z will match anything between a and z or A and Z, i.e. the entire English alphabet.

Let's suppose we want to extract an email id. We put in a pattern-matching regex with alphabetical characters + @ + .com. But it cannot catch an email id with some numerical digits in it, so we expand the regex a little bit.
But we are only extracting email ids with the domain name '.com', so the pattern cannot catch emails with other domains. It is quite easy to expand on that, but clever manipulation of an email address may still prevent extraction with such a regex.

Like other well-designed computable objects, Regex supports Boolean operations to expand its reach and power. OR-ing of individual Regex patterns is particularly interesting. For example, if we are interested in finding phone numbers containing the '312' area code, a single fixed pattern fails to extract it from the second string; we can create a combination of Regex objects to expand the power. We can then extract valid phone numbers from a text using findall() and the multi-character matching tricks we learned so far. Note that a valid phone number with the 312 area code is of the pattern 312-xxx-xxxx or 312.xxx.xxxx.

Finally, we talk about a method that can be used in creative ways to extract meaningful text from an irregular corpus. A simple example: we build a Regex pattern with the extrinsic characters that are messing up a regular sentence and use the split() method to get rid of those characters from the sentence.

We reviewed the essentials of defining Regex objects and search patterns with Python and how to use them for extracting patterns from a text corpus. Regex is a vast topic, almost a small programming language in itself. Readers, particularly those who are interested in text analytics, are encouraged to explore this topic more from other authoritative sources. Also, you can check the author's GitHub repositories for code, ideas, and resources in machine learning and data science. If you are, like me, passionate about AI/machine learning/data science, please feel free to add me on LinkedIn or follow me on Twitter.
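To close the regex discussion, here is a sketch of the OR-ing and split() ideas described above (the phone numbers and the messy sentence are invented for illustration):

```python
import re

texts = [
    "Office: 312-555-0142, call today",
    "Mobile: 312.555.0143, call anytime",
]

# A dash-only pattern fails on the second (dot-separated) string
dash_only = re.compile(r"312-\d{3}-\d{4}")

# OR-ing two patterns with | covers both valid forms:
# 312-xxx-xxxx or 312.xxx.xxxx
either = re.compile(r"312-\d{3}-\d{4}|312\.\d{3}\.\d{4}")

for t in texts:
    print(dash_only.findall(t), either.findall(t))

# split() strips extrinsic characters by treating them as delimiters
messy = "regex;;is**quite--powerful"
print(re.split(r"[;*\-]+", messy))  # ['regex', 'is', 'quite', 'powerful']
```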
Minimum element in BST | Practice | GeeksforGeeks
Given a Binary Search Tree. The task is to find the minimum element in this given BST.

Example 1:
Input:
          5
        /   \
       4     6
      / \
     3   7
    /
   1
Output: 1

Example 2:
Input:
    9
     \
      10
        \
         11
Output: 9

Your Task:
The task is to complete the function minValue() which takes root as the argument and returns the minimum element of the BST. If the tree is empty, there is no minimum element, so return -1 in that case.

Expected Time Complexity: O(Height of the BST)
Expected Auxiliary Space: O(Height of the BST)

Constraints:
0 <= N <= 10^4

0 | sikkusaurav123 | 6 days ago
Using inorder traversal:

vector<int> v;
void solve(Node *root) {
    if (root != NULL) {
        solve(root->left);
        v.push_back(root->data);
        return;
    }
}
int minValue(Node* root) {
    // Code here
    if (root == NULL) {
        return -1;
    }
    v.clear();
    solve(root);
    return v[0];
}

0 | deepeshupadhyay85 | 3 weeks ago
Easy to memorise:

class Tree {
    // Function to find the minimum element in the given BST.
    int minValue(Node node) {
        if (node == null)
            return -1;
        if (node.left != null) {
            return minValue(node.left);
        } else {
            return node.data;
        }
    }
}

0 | 00thirt13n | 3 weeks ago
Easy C++ solution:

int minValue(Node* root) {
    if (!root)
        return -1;
    if (!root->left)
        return root->data;
    return minValue(root->left);
}

0 | jaydhurat | 3 weeks ago

class Tree {
    // Function to find the minimum element in the given BST.
    int minValue(Node root) {
        if (root == null) {
            return -1;
        }
        if (root.left == null) {
            return root.data;
        }
        int min = root.data;
        while (root.left != null) {
            min = root.left.data;
            root = root.left;
        }
        return min;
    }
}

0 | rajpateriya | 3 weeks ago
JAVA solution:

if (root == null)
    return -1;
while (root.left != null)
    root = root.left;
return root.data;

0 | shivanshchandra2 | 3 weeks ago
CPP code:

int minValue(Node* root) {
    // Code here
    if (root == NULL) {
        return -1;
    }
    if (root->left == NULL) {
        return root->data;
    }
    return minValue(root->left);
}

0 | kritikasinha256 | 4 weeks ago
PYTHON SOLUTION:

def minValue(root):
    if root is None:
        return -1
    elif root.left is not None:
        return minValue(root.left)
    else:
        return root.data

0 | durgeshagrharibst1234 | 1 month ago

int minValue(Node* root) {
    if (root == NULL)
        return -1;
    if (root->left == NULL)
        return root->data;
    else
        return minValue(root->left);
}

0 | prabhakarsati29426 | 1 month ago

#User function Template for python3

#Function to find the minimum element in the given BST.
def minValue(root):
    ##Your code here
    if root is None:
        return -1
    curr = root
    while curr.left != None:
        curr = curr.left
    return curr.data

#{
# Driver Code Starts
#Initial Template for Python 3

from collections import deque

# Tree Node
class Node:
    def __init__(self, val):
        self.right = None
        self.data = val
        self.left = None

# Function to Build Tree
def buildTree(s):
    # Corner Case
    if len(s) == 0 or s[0] == "N":
        return None

    # Creating list of strings from input
    # string after splitting by space
    ip = list(map(str, s.split()))

    # Create the root of the tree
    root = Node(int(ip[0]))
    size = 0
    q = deque()

    # Push the root to the queue
    q.append(root)
    size = size + 1

    # Starting from the second element
    i = 1
    while size > 0 and i < len(ip):
        # Get and remove the front of the queue
        currNode = q[0]
        q.popleft()
        size = size - 1

        # Get the current node's value from the string
        currVal = ip[i]

        # If the left child is not null
        if currVal != "N":
            # Create the left child for the current node
            currNode.left = Node(int(currVal))
            # Push it to the queue
            q.append(currNode.left)
            size = size + 1

        # For the right child
        i = i + 1
        if i >= len(ip):
            break
        currVal = ip[i]

        # If the right child is not null
        if currVal != "N":
            # Create the right child for the current node
            currNode.right = Node(int(currVal))
            # Push it to the queue
            q.append(currNode.right)
            size = size + 1
        i = i + 1
    return root

if __name__ == "__main__":
    t = int(input())
    for _ in range(0, t):
        s = input()
        root = buildTree(s)
        print(minValue(root))
# } Driver Code Ends

0 | ramyan1814 | 1 month ago

// C++ solution
int minValue(Node* root) {
    // Code here
    Node* temp = root;
    if (root == NULL)
        return -1;
    while (temp->left != NULL) {
        temp = temp->left;
    }
    return temp->data;
}
C program to generate an electricity bill
Based on the units consumed by the user, the electricity bill is generated. If the number of units consumed is higher, the per-unit rate also increases.

The logic applied if minimum units are consumed by the user is as follows −

if (units < 50){
   amt = units * 3.50;
   unitcharg = 25;
}

The logic applied if units are between 50 and 100 is given below −

else if (units <= 100){
   amt = 130 + ((units - 50 ) * 4.25);
   unitcharg = 35;
}

The logic applied if units are between 100 and 200 is as stated below −

else if (units <= 200){
   amt = 130 + 162.50 + ((units - 100 ) * 5.26);
   unitcharg = 45;
}

The logic applied if the number of units is more than 200 is mentioned below −

amt = 130 + 162.50 + 526 + ((units - 200 ) * 7.75);
unitcharg = 55;

Therefore, the final amount is generated with the logic given below −

total = amt + unitcharg;

Following is the C program to generate an electricity bill −

#include <stdio.h>
int main(){
   int units;
   float amt, unitcharg, total;
   printf(" Enter no of units consumed : ");
   scanf("%d", &units);
   if (units < 50){
      amt = units * 3.50;
      unitcharg = 25;
   }else if (units <= 100){
      amt = 130 + ((units - 50 ) * 4.25);
      unitcharg = 35;
   }else if (units <= 200){
      amt = 130 + 162.50 + ((units - 100 ) * 5.26);
      unitcharg = 45;
   }else{
      amt = 130 + 162.50 + 526 + ((units - 200 ) * 7.75);
      unitcharg = 55;
   }
   total = amt + unitcharg;
   printf("electricity bill = %.2f", total);
   return 0;
}

When the above program is executed, it produces the following result −

Enter no of units consumed : 280
electricity bill = 1493.50
C# - Namespaces
A namespace is designed to provide a way to keep one set of names separate from another. The class names declared in one namespace do not conflict with the same class names declared in another.

A namespace definition begins with the keyword namespace followed by the namespace name as follows −

namespace namespace_name {
   // code declarations
}

To call the namespace-enabled version of either a function or a variable, prepend the namespace name as follows −

namespace_name.item_name;

The following program demonstrates use of namespaces −

using System;

namespace first_space {
   class namespace_cl {
      public void func() {
         Console.WriteLine("Inside first_space");
      }
   }
}
namespace second_space {
   class namespace_cl {
      public void func() {
         Console.WriteLine("Inside second_space");
      }
   }
}
class TestClass {
   static void Main(string[] args) {
      first_space.namespace_cl fc = new first_space.namespace_cl();
      second_space.namespace_cl sc = new second_space.namespace_cl();
      fc.func();
      sc.func();
      Console.ReadKey();
   }
}

When the above code is compiled and executed, it produces the following result −

Inside first_space
Inside second_space

The using keyword states that the program is using the names in the given namespace. For example, we are using the System namespace in our programs. The class Console is defined there. We just write −

Console.WriteLine("Hello there");

We could have written the fully qualified name as −

System.Console.WriteLine("Hello there");

You can also avoid prepending of namespaces with the using namespace directive. This directive tells the compiler that the subsequent code is making use of names in the specified namespace.
The namespace is thus implied for the following code −

Let us rewrite our preceding example with the using directive −

using System;
using first_space;
using second_space;

namespace first_space {
   class abc {
      public void func() {
         Console.WriteLine("Inside first_space");
      }
   }
}
namespace second_space {
   class efg {
      public void func() {
         Console.WriteLine("Inside second_space");
      }
   }
}
class TestClass {
   static void Main(string[] args) {
      abc fc = new abc();
      efg sc = new efg();
      fc.func();
      sc.func();
      Console.ReadKey();
   }
}

When the above code is compiled and executed, it produces the following result −

Inside first_space
Inside second_space

You can define one namespace inside another namespace as follows −

namespace namespace_name1 {
   // code declarations
   namespace namespace_name2 {
      // code declarations
   }
}

You can access members of a nested namespace by using the dot (.) operator as follows −

using System;
using first_space;
using first_space.second_space;

namespace first_space {
   class abc {
      public void func() {
         Console.WriteLine("Inside first_space");
      }
   }
   namespace second_space {
      class efg {
         public void func() {
            Console.WriteLine("Inside second_space");
         }
      }
   }
}
class TestClass {
   static void Main(string[] args) {
      abc fc = new abc();
      efg sc = new efg();
      fc.func();
      sc.func();
      Console.ReadKey();
   }
}

When the above code is compiled and executed, it produces the following result −

Inside first_space
Inside second_space
AWT Arc2D Class
The Arc2D class is the superclass for all objects that store a 2D arc defined by a framing rectangle, start angle, angular extent (length of the arc), and a closure type (OPEN, CHORD, or PIE).

Following is the declaration for the java.awt.geom.Arc2D class:

public abstract class Arc2D extends RectangularShape

Following are the fields for the java.awt.geom.Arc2D class:

static int CHORD -- The closure type for an arc closed by drawing a straight line segment from the start of the arc segment to the end of the arc segment.
static int OPEN -- The closure type for an open arc with no path segments connecting the two ends of the arc segment.
static int PIE -- The closure type for an arc closed by drawing straight line segments from the start of the arc segment to the center of the full ellipse and from that point to the end of the arc segment.

protected Arc2D(int type) This is an abstract class that cannot be instantiated directly.

boolean contains(double x, double y) Determines whether or not the specified point is inside the boundary of the arc.
boolean contains(double x, double y, double w, double h) Determines whether or not the interior of the arc entirely contains the specified rectangle.
boolean contains(Rectangle2D r) Determines whether or not the interior of the arc entirely contains the specified rectangle.
boolean containsAngle(double angle) Determines whether or not the specified angle is within the angular extents of the arc.
boolean equals(Object obj) Determines whether or not the specified Object is equal to this Arc2D.
abstract double getAngleExtent() Returns the angular extent of the arc.
abstract double getAngleStart() Returns the starting angle of the arc.
int getArcType() Returns the arc closure type of the arc: OPEN, CHORD, or PIE.
Rectangle2D getBounds2D() Returns the high-precision framing rectangle of the arc.
Point2D getEndPoint() Returns the ending point of the arc.
PathIterator getPathIterator(AffineTransform at) Returns an iteration object that defines the boundary of the arc.
Point2D getStartPoint() Returns the starting point of the arc.
int hashCode() Returns the hashcode for this Arc2D.
boolean intersects(double x, double y, double w, double h) Determines whether or not the interior of the arc intersects the interior of the specified rectangle.
protected abstract Rectangle2D makeBounds(double x, double y, double w, double h) Constructs a Rectangle2D of the appropriate precision to hold the parameters calculated to be the framing rectangle of this arc.
abstract void setAngleExtent(double angExt) Sets the angular extent of this arc to the specified double value.
void setAngles(double x1, double y1, double x2, double y2) Sets the starting angle and angular extent of this arc using two sets of coordinates.
void setAngles(Point2D p1, Point2D p2) Sets the starting angle and angular extent of this arc using two points.
abstract void setAngleStart(double angSt) Sets the starting angle of this arc to the specified double value.
void setAngleStart(Point2D p) Sets the starting angle of this arc to the angle that the specified point defines relative to the center of this arc.
void setArc(Arc2D a) Sets this arc to be the same as the specified arc.
abstract void setArc(double x, double y, double w, double h, double angSt, double angExt, int closure) Sets the location, size, angular extents, and closure type of this arc to the specified double values.
void setArc(Point2D loc, Dimension2D size, double angSt, double angExt, int closure) Sets the location, size, angular extents, and closure type of this arc to the specified values.
void setArc(Rectangle2D rect, double angSt, double angExt, int closure) Sets the location, size, angular extents, and closure type of this arc to the specified values.
void setArcByCenter(double x, double y, double radius, double angSt, double angExt, int closure) Sets the position, bounds, angular extents, and closure type of this arc to the specified values.
void setArcByTangent(Point2D p1, Point2D p2, Point2D p3, double radius) Sets the position, bounds, and angular extents of this arc to the specified value.
void setArcType(int type) Sets the closure type of this arc to the specified value: OPEN, CHORD, or PIE.
void setFrame(double x, double y, double w, double h) Sets the location and size of the framing rectangle of this Shape to the specified rectangular values.

This class inherits methods from the following classes:

java.awt.geom.RectangularShape
java.lang.Object

Create the following java program using any editor of your choice in say D:/ > AWT > com > tutorialspoint > gui >

package com.tutorialspoint.gui;

import java.awt.*;
import java.awt.event.*;
import java.awt.geom.*;

public class AWTGraphicsDemo extends Frame {
   public AWTGraphicsDemo(){
      super("Java AWT Examples");
      prepareGUI();
   }
   public static void main(String[] args){
      AWTGraphicsDemo awtGraphicsDemo = new AWTGraphicsDemo();
      awtGraphicsDemo.setVisible(true);
   }
   private void prepareGUI(){
      setSize(400,400);
      addWindowListener(new WindowAdapter() {
         public void windowClosing(WindowEvent windowEvent){
            System.exit(0);
         }
      });
   }
   @Override
   public void paint(Graphics g) {
      Arc2D.Float arc = new Arc2D.Float(Arc2D.PIE);
      arc.setFrame(70, 200, 150, 150);
      arc.setAngleStart(0);
      arc.setAngleExtent(145);
      Graphics2D g2 = (Graphics2D) g;
      g2.setColor(Color.gray);
      g2.draw(arc);
      g2.setColor(Color.red);
      g2.fill(arc);
      g2.setColor(Color.black);
      Font font = new Font("Serif", Font.PLAIN, 24);
      g2.setFont(font);
      g.drawString("Welcome to TutorialsPoint", 50, 70);
      g2.drawString("Arc2D.PIE", 100, 120);
   }
}

Compile the program using the command prompt. Go to D:/ > AWT and type the following command.

D:\AWT>javac com\tutorialspoint\gui\AWTGraphicsDemo.java

If no error comes, that means compilation is successful. Run the program using the following command.

D:\AWT>java com.tutorialspoint.gui.AWTGraphicsDemo

Verify the following output.
Regression Analysis for Beginners — Part 2 | by Gurami Keretchashvili | Towards Data Science
· Introduction
· Part 2.1 Build Machine Learning Pipeline
 ∘ Step 1: Collect the data
 ∘ Step 2: Visualize the data (Ask yourself these questions and answer)
 ∘ Step 3: Clean the data
 ∘ Step 4: Train the model
 ∘ Step 5: Evaluate
 ∘ Step 6: Hyperparameter tuning using hyperopt
 ∘ Step 7: Choose the best model and prediction
· Part 2.2 Analyze ML algorithms
 ∘ What is a Decision Tree?
 ∘ What is Random Forest?
 ∘ What is Extreme Gradient Boosting? (XGBoost)
 ∘ Decision Tree vs Random Forest vs XGBoost
 ∘ Linear Models vs Tree-Based models
· Conclusion

As I explained in my previous post, a real data scientist thinks from a problem/application perspective and finds an approach to solve it with the help of programming languages or frameworks. In Part 1, the fish weight estimation problem was solved using linear ML models; today I will introduce tree-based algorithms such as Decision Tree, Random Forest, and XGBoost to solve the same problem. In the first half of the article (Part 2.1) I will build a model, and in the second half (Part 2.2) I will explain each algorithm theoretically, compare them to each other, and discuss their advantages and disadvantages.

To build an ML model we need to follow the pipeline steps below for almost all kinds of models. Since the problem that we are solving is the same as before, some pipeline steps will be the same, such as 1. collect the data and 2. visualize the data. However, there will be some modifications to other steps.

The data is a public dataset that can be downloaded from Kaggle.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from itertools import combinations
import numpy as np

data = pd.read_csv("Fish.csv")

How does the data look?

data.head()

Does the data have missing values?

data.isna().sum()

What is the distribution of the numerical features?
data_num = data.drop(columns=["Species"])
fig, axes = plt.subplots(len(data_num.columns)//3, 3, figsize=(15, 6))
i = 0
for triaxis in axes:
    for axis in triaxis:
        data_num.hist(column = data_num.columns[i], ax=axis)
        i = i+1

What is the distribution of the target variable (Weight) with respect to fish Species?

sns.displot(
    data=data,
    x="Weight",
    hue="Species",
    kind="hist",
    height=6,
    aspect=1.4,
    bins=15
)
plt.show()

The target variable distribution with respect to species shows that there are some species, such as Pike, that have a huge weight compared to others. This visualization gives us additional information on how the "Species" feature can be used for prediction.

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

data_cleaned = data.drop("Weight", axis=1)
y = data['Weight']
x_train, x_test, y_train, y_test = train_test_split(data_cleaned, y, test_size=0.2, random_state=42)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)

# label encoder
label_encoder = LabelEncoder()
x_train['Species'] = label_encoder.fit_transform(x_train['Species'].values)
x_test['Species'] = label_encoder.transform(x_test['Species'].values)

We are using tree-based models, therefore we do not need feature scaling. In addition, to convert text into numbers, I just assigned unique numerical values to each fish species using LabelEncoder.
def evauation_model(pred, y_val):
    score_MSE = round(mean_squared_error(pred, y_val), 2)
    score_MAE = round(mean_absolute_error(pred, y_val), 2)
    score_r2score = round(r2_score(pred, y_val), 2)
    return score_MSE, score_MAE, score_r2score

def models_score(model_name, train_data, y_train, val_data, y_val):
    model_list = ["Decision_Tree", "Random_Forest", "XGboost_Regressor"]
    # model_1
    if model_name == "Decision_Tree":
        reg = DecisionTreeRegressor(random_state=42)
    # model_2
    elif model_name == "Random_Forest":
        reg = RandomForestRegressor(random_state=42)
    # model_3
    elif model_name == "XGboost_Regressor":
        reg = xgb.XGBRegressor(objective="reg:squarederror", random_state=42)
    else:
        print("please enter correct regressor name")

    if model_name in model_list:
        reg.fit(train_data, y_train)
        pred = reg.predict(val_data)
        score_MSE, score_MAE, score_r2score = evauation_model(pred, y_val)
        return round(score_MSE, 2), round(score_MAE, 2), round(score_r2score, 2)

model_list = ["Decision_Tree", "Random_Forest", "XGboost_Regressor"]
result_scores = []
for model in model_list:
    score = models_score(model, x_train, y_train, x_test, y_test)
    result_scores.append((model, score[0], score[1], score[2]))
    print(model, score)

I trained a decision tree, a random forest, and XGBoost, and stored all the evaluation scores.

df_result_scores = pd.DataFrame(result_scores, columns=["model","mse","mae","r2score"])
df_result_scores

The result is really fascinating: as you may remember, linear models achieved much weaker results. So even before any kind of hyperparameter tuning, we can say that all tree-based models outperform linear models on this kind of dataset.

Today we use hyperopt to tune hyperparameters using the TPE algorithm. Instead of taking random values from the search space, the TPE algorithm takes into account that some hyper-parameter assignments (x) are known to be irrelevant given particular values of other elements. In this case, the search is more effective than random search and faster than grid search.
from hyperopt import hp
from hyperopt import fmin, tpe, STATUS_OK, STATUS_FAIL, Trials
from sklearn.model_selection import cross_val_score

num_estimator = [100, 150, 200, 250]

space = {
    'max_depth': hp.quniform("max_depth", 3, 18, 1),
    'gamma': hp.uniform('gamma', 1, 9),
    'reg_alpha': hp.quniform('reg_alpha', 30, 180, 1),
    'reg_lambda': hp.uniform('reg_lambda', 0, 1),
    'colsample_bytree': hp.uniform('colsample_bytree', 0.5, 1),
    'min_child_weight': hp.quniform('min_child_weight', 0, 10, 1),
    'n_estimators': hp.choice("n_estimators", num_estimator),
}

def hyperparameter_tuning(space):
    model = xgb.XGBRegressor(
        n_estimators=space['n_estimators'],
        max_depth=int(space['max_depth']),
        gamma=space['gamma'],
        reg_alpha=int(space['reg_alpha']),
        min_child_weight=space['min_child_weight'],
        colsample_bytree=space['colsample_bytree'],
        objective="reg:squarederror")
    score_cv = cross_val_score(model, x_train, y_train, cv=5, scoring="neg_mean_absolute_error").mean()
    return {'loss': -score_cv, 'status': STATUS_OK, 'model': model}

trials = Trials()
best = fmin(fn=hyperparameter_tuning,
            space=space,
            algo=tpe.suggest,
            max_evals=200,
            trials=trials)
print(best)

Here is the result of the best hyperparameters found by the algorithm after 200 trials. However, if the dataset is too large, the number of trials can be reduced accordingly.

best['max_depth'] = int(best['max_depth'])  # convert to int
best["n_estimators"] = num_estimator[best["n_estimators"]]  # assign value based on index
reg = xgb.XGBRegressor(**best)
reg.fit(x_train, y_train)
pred = reg.predict(x_test)
score_MSE, score_MAE, score_r2score = evauation_model(pred, y_test)
to_append = ["XGboost_hyper_tuned", score_MSE, score_MAE, score_r2score]
df_result_scores.loc[len(df_result_scores)] = to_append
df_result_scores

The result is fantastic! The hyper-tuned model is really good compared to the other algorithms. For instance, XGBoost improved its MAE from 41.65 to 36.33. It is a great illustration of how powerful hyperparameter tuning is.
# winner
reg = xgb.XGBRegressor(**best)
reg.fit(x_train, y_train)
pred = reg.predict(x_test)

plt.figure(figsize=(18, 7))
plt.subplot(1, 2, 1)  # row 1, col 2, index 1
plt.scatter(range(0, len(x_test)), pred, color="green", label="predicted")
plt.scatter(range(0, len(x_test)), y_test, color="red", label="True value")
plt.legend()
plt.subplot(1, 2, 2)  # index 2
plt.plot(range(0, len(x_test)), pred, color="green", label="predicted")
plt.plot(range(0, len(x_test)), y_test, color="red", label="True value")
plt.legend()
plt.show()

The visualization is a clear illustration of how close the predicted and true values are to each other and how well the tuned XGBoost performed.

A Decision Tree is a supervised ML algorithm that is good at capturing non-linear relationships between the features and the target variable. The intuition behind the algorithm is similar to human logic: in each node, the algorithm finds the feature and threshold on which the data is split into two parts. Below is an illustration of a decision tree. First, let's see what each variable represents in the figure, taking the first node as an example.

width ≤ 5.154: the feature and threshold value on which the algorithm decided to split the data samples.
samples = 127: there are 127 data points before splitting.
value = 386.794: the average value of the predicted feature (fish weight).
squared_error = 122928.22: the same as MSE(true, pred), where pred is the same as value (the average fish weight of the samples).

So, based on the width ≤ 5.154 threshold, the algorithm split the data into two parts. But how did the algorithm find this threshold? There are several splitting criteria; for the regression task, the CART algorithm tries to find a threshold by searching in a greedy fashion such that the weighted average of the MSE of both subgroups is minimized. For instance, in our case, after the first split the weighted average MSE of both subgroups was the minimum compared to other splits.
J(k, t_k) = (88/127) * 20583.394 + (39/127) * 75630.727 = 37487.69

Problem with a Decision Tree: trees are very sensitive to small variations in the training data. A small change in the data can result in a major change in the structure of the decision tree. The solution to this limitation is a random forest.

Random Forest is an ensemble of Decision Trees. The intuition behind Random Forest is to build multiple decision trees and, in each decision tree, instead of searching for the best feature to split the data, search for the best feature among a random subset of features; this improves tree diversity. However, it is less interpretable than a simple decision tree. Also, it needs a large number of trees, which makes the algorithm slow for real-time applications: random forests are generally fast to train but slow to produce predictions.

XGBoost is another improvement over the decision tree. It is also a tree-based ensemble supervised learning algorithm, one that uses a gradient boosting framework. The intuition behind this algorithm is that it tries to fit each new predictor to the residual errors made by the previous predictor. It is extremely fast, scalable, and portable. As a result, in our experiment XGBoost outperformed the others in terms of performance.

Also, theoretically we can conclude that the Decision Tree is the simplest tree-based algorithm, with the limitation of an unstable nature (a variation in the data can cause a big change in the tree structure) but with perfect interpretability. Random Forest and XGBoost are more complex. One of the differences is that Random Forest combines results at the end of the process (majority rules), while XGBoost combines the results along the way. In general, XGBoost has better performance than random forest. However, XGBoost may not be a good choice when there is a lot of noise in the data, as it will overfit, and it is harder to tune than random forest.
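The weighted-average MSE computation shown for the first split can be reproduced in a few lines (the sample counts and per-node errors are the ones quoted from the tree above):

```python
# CART regression split cost: weighted average of the children's MSE,
# J(k, t_k) = (m_left / m) * MSE_left + (m_right / m) * MSE_right
def split_cost(m_left, mse_left, m_right, mse_right):
    m = m_left + m_right
    return (m_left / m) * mse_left + (m_right / m) * mse_right

# First split of the fish-weight tree: 127 samples -> 88 + 39
print(round(split_cost(88, 20583.394, 39, 75630.727), 2))  # 37487.69
```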
Linear models capture linear relationships between independent and dependent variables, which is rarely the case in real-world scenarios; tree-based models capture more complex relationships. Linear models usually need feature scaling, while tree-based models do not. The performance of tree-based models is, most of the time, better than that of linear models. Our experiment is a good illustration of that: the best hyper-tuned linear model achieved 66.20 MAE, and the best tree-based model achieved 36.33, which is a big improvement. Tree-based algorithms are also more easily interpretable than linear models.

As discussed before, there is no ready-made recipe for which type of algorithm will work best; everything depends on the data and the task. That is why several algorithms should be tested and evaluated. However, it is beneficial to know the intuition behind each algorithm, what its advantages and disadvantages are, and how to cope with its limitations.

Here is the full code in my GitHub. You can follow me on medium to keep updated for upcoming articles.
Overloading in Java
17 Feb, 2021

Overloading allows different methods to have the same name but different signatures, where the signature can differ by the number of input parameters, the type of input parameters, or both. Overloading is related to compile-time (or static) polymorphism.

// Java program to demonstrate working of
// method overloading in Java.
public class Sum {

    // Overloaded sum(). This sum takes two int parameters
    public int sum(int x, int y) { return (x + y); }

    // Overloaded sum(). This sum takes three int parameters
    public int sum(int x, int y, int z) { return (x + y + z); }

    // Overloaded sum(). This sum takes two double parameters
    public double sum(double x, double y) { return (x + y); }

    // Driver code
    public static void main(String args[])
    {
        Sum s = new Sum();
        System.out.println(s.sum(10, 20));
        System.out.println(s.sum(10, 20, 30));
        System.out.println(s.sum(10.5, 20.5));
    }
}

Output :

30
60
31.0

Question arises: What if the exact prototype does not match the arguments?
Answer: Priority-wise, the compiler takes these steps:

Type conversion to a higher type (in terms of range) in the same family.
Type conversion to the next higher family (for example, if there is no long data type available for an int argument, then it will search for the float data type).
Let’s take an example to clear the concept:

class Demo {
    public void show(int x)
    {
        System.out.println("In int " + x);
    }
    public void show(String s)
    {
        System.out.println("In String " + s);
    }
    public void show(byte b)
    {
        System.out.println("In byte " + b);
    }
}

class UseDemo {
    public static void main(String[] args)
    {
        byte a = 25;
        Demo obj = new Demo();
        obj.show(a);       // it will go to the byte argument
        obj.show("hello"); // String
        obj.show(250);     // int
        obj.show('A');     // since char is not available, the datatype
                           // higher than char in terms of range is int
        obj.show("A");     // String
        obj.show(7.5);     // since the float datatype and its higher
                           // datatypes are not available, this call
                           // causes a compile-time error
    }
}

What is the advantage? We don't have to create and remember different names for functions doing the same thing. For example, in our code, if overloading were not supported by Java, we would have to create method names like sum1, sum2, ... or sum2Int, sum3Int, etc.

Can we overload methods on return type? We cannot overload by return type. This behavior is the same in C++. Refer this for details.

public class Main {
    public int foo() { return 10; }

    // compiler error: foo() is already defined
    public char foo() { return 'a'; }

    public static void main(String args[]) { }
}

However, overloading methods on return type is possible in cases where the data type of the function being called is explicitly specified.
Look at the examples below:

// Java program to demonstrate working of method
// overloading in static methods
public class Main {
    public static int foo(int a) { return 10; }

    public static char foo(int a, int b) { return 'a'; }

    public static void main(String args[])
    {
        System.out.println(foo(1));
        System.out.println(foo(1, 2));
    }
}

Output:

10
a

// Java program to demonstrate working of method
// overloading in instance methods
class A {
    public int foo(int a) { return 10; }

    public char foo(int a, int b) { return 'a'; }
}

public class Main {
    public static void main(String args[])
    {
        A a = new A();
        System.out.println(a.foo(1));
        System.out.println(a.foo(1, 2));
    }
}

Output:

10
a

Can we overload static methods? The answer is 'Yes'. We can have two or more static methods with the same name but differences in input parameters, as the first Java program above shows. Refer this for details.

Can we overload methods that differ only by the static keyword? We cannot overload two methods in Java if they differ only by the static keyword (the number and types of parameters are the same). Refer this for details.

Can we overload main() in Java? Like other static methods, we can overload main() in Java. Refer overloading main() in Java for more details.

// A Java program with overloaded main()
import java.io.*;

public class Test {
    // Normal main()
    public static void main(String[] args)
    {
        System.out.println("Hi Geek (from main)");
        Test.main("Geek");
    }

    // Overloaded main methods
    public static void main(String arg1)
    {
        System.out.println("Hi, " + arg1);
        Test.main("Dear Geek", "My Geek");
    }

    public static void main(String arg1, String arg2)
    {
        System.out.println("Hi, " + arg1 + ", " + arg2);
    }
}

Output :

Hi Geek (from main)
Hi, Geek
Hi, Dear Geek, My Geek

Does Java support operator overloading? Unlike C++, Java doesn't allow user-defined overloaded operators. Internally Java overloads operators; for example, + is overloaded for concatenation.
What is the difference between Overloading and Overriding? Overloading means functions with the same name have different signatures. Overriding means the same function with the same signature exists in different classes connected through inheritance. Overloading is an example of compile-time polymorphism and overriding is an example of run-time polymorphism.

Related Articles:

Different ways of Method Overloading in Java
Method Overloading and Null error in Java
Can we Overload or Override static methods in java ?

This article is contributed by Shubham Agrawal. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
Visualisation in Julia
04 Mar, 2021

Data visualization is the process of representing the available data diagrammatically. There are packages that can be installed to visualize the data in languages like Python and Julia. Some of the reasons that make the visualization of data important are listed below:

Larger data can be analyzed easily.
Trends and patterns can be discovered.
Interpretations can be made from the visualizations easily.

The package that is used widely with Julia is Plots.jl. It is a meta-package that can be used for plotting: it interprets the commands given to it, and the plots are generated using some other library, referred to as a backend. The backend libraries available in Julia are:

Plotly/PlotlyJS
PyPlot
PGFPlotsX
UnicodePlots
InspectDR
HDF5

One can make plots using Plots.jl alone. For that, the package has to be installed. Open the Julia terminal and type the following command:

Pkg.add("Plots")

The backend packages can also be installed in the same way. This article shows how to plot data using Plots.jl for two vectors of numerals and two different datasets. For using the datasets, packages like RDatasets and CSV have to be installed. The commands used for installation are given below.

Pkg.add("RDatasets")
Pkg.add("CSV")

Single-dimensional vectors can be plotted using simple line plots. A line plot needs two axes (x and y). For example, consider a vector that has values from 1 to 10, which forms the x-axis. Using rand(), ten random values have been generated, and they form the y-axis. The plotting can be done using the plot() function as follows.

Julia

# generating vectors
# x-axis
x = 1:10

# y-axis
y = rand(10)

# simple plotting
plot(x, y)

Output:

The graphs can be stylized using various attributes.
Some of the attributes are explained with the example below:

Julia

# x-axis
x = 1:10

# y-axis
y = rand(10)

# styling the graph
plot(x, y,
     linecolor =:green,
     bg_inside =:pink,
     line =:solid,
     label = "Y")

Output:

linecolor: to set the color of the line. The color name should be preceded by a colon (':').
bg_inside: to set the background color of the plot.
line: to set the line type. The possible values are :solid, :dash, :dashdot, :dashdotdot.
label: to set the label name for the line plotted.
title: to set the title of the plot.

In Julia, one can plot another vector on the previously plotted graph. The function that allows this is plot!(). In a Jupyter notebook, one has to write the code snippet of the previous example first and can then write the one given below, even in a new cell.

Julia

# another vector in y-axis
z = rand(10)

# to plot on previous graph
plot!(z,
      linecolor =:red,
      line =:dashdot,
      labels = "Z")

Output :

Here one can find the dashed line plotted on the previous graph. The styles written here are applied to the new line alone, and they do not affect the line plotted previously.

The datasets used as examples here are the mtcars and Pima-Indian Diabetes datasets. To explain each attribute, a new attribute will be added in each example and explained below it. However, all the styling attributes can be used with any plotting.

The mtcars dataset contains information on 32 automobile models that were available in 1973-74. The attributes in this dataset are listed and explained below:

mpg – Miles/Gallon (US).
cyl – Number of cylinders.
disp – Displacement (cu.in).
hp – Gross HorsePower.
drat – Rear axle Ratio.
wt – Weight.
qsec – 1/4 mile time.
vs – Engine shape, where 0 denotes V-shaped and 1 denotes straight.
am – Transmission, where 0 indicates automatic and 1 indicates manual.
gears – Number of forward gears.
carb – Number of carburetors.

First, this dataset can be visualized. A simple bar graph can be used to compare two things, to represent changes over time, or to represent the relationship between two items. Now, in this dataset there are 32 unique models, which are used as the x-axis for plotting the bar graph. The MPG (Miles Per Gallon) is taken as the y-axis. Look at the code snippet and its output below.

Julia

# loading the dataset
using RDatasets

cars = dataset("datasets", "mtcars")

# plotting bar graph
bar(cars.Model, cars.MPG,
    label = "Miles/Gallon",
    title = "Models and Miles/Gallon",
    xticks =:all,
    xrotation = 45,
    size = [600, 400],
    legend =:topleft)

Output :

Attribute explanation:

cars.Model – x-axis that contains the car model names.
cars.MPG – y-axis that contains the miles per gallon attribute.
xticks =:all – decides whether to show all values on the x-axis or not. For the example above, if xticks =:all is not given, then some of the values won't appear on the x-axis, although their values will still be plotted; the output will then be as follows.
xrotation – specifies the angle by which the values of the attribute on the x-axis must be rotated. By default, the value is 0 and they appear horizontally. The value 45 slightly tilts them and the value 90 rotates them vertically.
size – specifies the height and width of the plot.
legend – specifies where the label of the plotted value must appear. Here the box containing Miles/Gallon and the blue shaded box is the legend.
To plot two attributes of the dataset in the same graph:

Julia

bar(cars.Model,
    [cars.MPG, cars.QSec], # plotting two attributes
    label = ["Miles/Gallon" "QSec"],
    title = "Models-Miles/Gallon and Qsec",
    xrotation = 45,
    size = [600, 400],
    legend =:topleft,
    xticks =:all,
    yticks = 0:35)

Output:

The line plot, which was previously explained with vectors, can also be used with datasets. If three attributes are passed, then the graph will be plotted in 3 dimensions. For example: plotting the number of gears on the x-axis, displacement on the y-axis, and horsepower on the z-axis.

Julia

# importing package
using RDatasets

# loading dataset
cars = dataset("datasets", "mtcars")

# 3-dimensional plotting
plot(cars.Gear, # x-axis
     cars.Disp, # y-axis
     cars.HP,   # z-axis
     title = "3-dimensional plot",
     xlabel = "no.of.gears",
     ylabel = "displacement",
     zlabel = "HP")

Output:

Attribute explanation: the attributes xlabel, ylabel and zlabel contain the values that should be displayed as the label for each axis. Here the labels are no.of.gears, displacement and HP.

The Pima-Indian Diabetes dataset is used here to show how to read a dataset from local storage. This dataset has the attributes pregnancies (no. of times pregnant), glucose, bp (blood pressure), skinThickness, insulin, bmi, dpf (diabetes pedigree function), age, and outcome (0 – non-diabetic, 1 – diabetic). Download this dataset from the internet first.

A scatter plot is used to reveal a pattern in the plotted data. For example, we can use this plot to analyze whether the diabetes pedigree function forms any pattern with age. Look at the code snippet below.

Julia

# import packages
using DataFrames
using CSV

# loading dataset
df = CSV.read("path\\pima-indians-diabetes.csv");

# scatter plot
scatter(df.age, # x-axis
        df.dpf, # y-axis
        xticks = 20:90,
        xrotation = 90,
        label = "dpf",
        title = "Age and Diabetes Pedigree Function")

A histogram is mostly used for displaying the frequency of data. The bars are called bins.
Taller bins mean more data falls in that range. For example, let us check the number of people in each age group in the dataset using a histogram. Look at the code snippet below:

Julia

# import packages
using DataFrames
using CSV

# loading dataset
# the semicolon at the end
# prevents printing
df = CSV.read("path\\pima-indians-diabetes.csv");

# plot histogram
histogram(df.age,
          bg_inside = "black",
          title = "Frequency of age groups",
          label = "Age",
          xlabel = "age",
          ylabel = "freq",
          xticks = 20:85,
          xrotation = 90)

Output:

A pie chart is a statistical graph used to represent proportions. In this example, let us represent the number of diabetic and non-diabetic people in the dataset. For this, the counts have to be taken first and then plotted in the graph. Look at the code snippet below.

Julia

using DataFrames
using CSV

# load dataset
df = CSV.read("path\\pima-indians-diabetes.csv");

# dataframe to get counts
axis = DataFrame()
axis = by(df, :outcome, nrow)
println(axis)

# dataframe to form
# x-axis and y-axis
items = DataFrame()

# x-axis
items[:x] = ["Diabetic", "Non-Diabetic"]

# y-axis (count)
items[:y] = axis.nrow

pie(items.x, items.y)

Output:
Perl | substr() function
25 Jun, 2019

substr() in Perl returns a substring out of the string passed to the function, starting from a given index and up to the length specified. By default, this function returns the remaining part of the string starting from the given index if the length is not specified. A replacement string can also be passed to substr() if you want to replace that part of the string with some other substring. The index and length values may be negative as well, which changes the direction of the index count in the string. For example, if a negative index is passed, the substring will be counted from the right end of the string, and if a negative length is passed, the function will leave out that many characters from the rear end of the string.

Syntax: substr(string, index, length, replacement)

Parameters:
string: string from which the substring is to be extracted
index: starting index of the substring
length: length of the substring
replacement: replacement substring (if any)

Returns: the substring of the required length

Note: The parameters 'length' and 'replacement' can be omitted.
Example 1:

#!/usr/bin/perl

# String to be passed
$string = "GeeksForGeeks";

# Calling substr() to find the string
# without passing length
$sub_string1 = substr($string, 4);

# Printing the substring
print "Substring 1 : $sub_string1\n";

# Calling substr() to find the
# substring of a fixed length
$sub_string2 = substr($string, 4, 5);

# Printing the substring
print "Substring 2 : $sub_string2";

Output :

Substring 1 : sForGeeks
Substring 2 : sForG

Example 2:

#!/usr/bin/perl

# String to be passed
$string = "GeeksForGeeks";

# Calling substr() to find the string
# by passing a negative index
$sub_string1 = substr($string, -4);

# Printing the substring
print "Substring 1 : $sub_string1\n";

# Calling substr() to find the
# substring by passing a negative length
$sub_string2 = substr($string, 4, -2);

# Printing the substring
print "Substring 2 : $sub_string2";

Output :

Substring 1 : eeks
Substring 2 : sForGee
Remove repeated elements from ArrayList in Java
11 Dec, 2018

Prerequisite: ArrayList in Java

Given an ArrayList, the task is to remove repeated elements of the ArrayList in Java.

Examples:

Input: ArrayList = [1, 2, 2, 3, 4, 4, 4]
Output: [1, 2, 3, 4]

Input: ArrayList = [12, 23, 23, 34, 45, 45, 45, 45, 57, 67, 89]
Output: [12, 23, 34, 45, 57, 67, 89]

Below are the various methods to remove repeated elements from an ArrayList in Java:

Using a Set: Since a Set is a collection that does not include any duplicate elements, the solution can be achieved with the help of a Set.

Approach:

Get the ArrayList with repeated elements.
Convert the ArrayList to a Set.
Convert the Set back to an ArrayList. This will remove all the repeated elements.

Below is the implementation of the above approach:

// Java code to illustrate removing duplicates
// from an ArrayList using a Set
import java.util.*;

public class GFG {
    public static void main(String args[])
    {
        // create an ArrayList of String type
        ArrayList<String> gfg = new ArrayList<String>();

        // initialize the ArrayList
        gfg.add("Geeks");
        gfg.add("for");
        gfg.add("Geeks");

        // print the ArrayList
        System.out.println("Original ArrayList : " + gfg);

        // -----Using LinkedHashSet-----
        System.out.println("\nUsing LinkedHashSet:\n");

        // create a set and copy all values of the list
        Set<String> set = new LinkedHashSet<>(gfg);

        // create a list and copy all values of the set
        List<String> gfg1 = new ArrayList<>(set);

        // print the ArrayList
        System.out.println("Modified ArrayList : " + gfg1);

        // -----Using HashSet-----
        System.out.println("\nUsing HashSet:\n");

        // create a set and copy all values of the list
        Set<String> set1 = new HashSet<>(gfg);

        // create a list and copy all values of the set
        // (note: copy from set1, the HashSet, not set)
        List<String> gfg2 = new ArrayList<>(set1);

        // print the ArrayList
        System.out.println("Modified ArrayList : " + gfg2);
    }
}

Output:

Original ArrayList : [Geeks, for, Geeks]

Using LinkedHashSet:

Modified ArrayList : [Geeks, for]

Using HashSet:

Modified ArrayList : [Geeks, for]

Using Java 8 Lambdas:

Approach:

Get the ArrayList with repeated elements.
Convert the ArrayList to a Stream using the stream() method.
Set the filter condition to be distinct using the distinct() method.
Collect the filtered values as a List using the collect() method. This list will have the repeated elements removed.

Below is the implementation of the above approach:

// Java code to illustrate removing duplicates
// from an ArrayList using Java 8 Streams
import java.util.*;
import java.util.stream.Collectors;

public class GFG {
    public static void main(String args[])
    {
        // create an ArrayList of String type
        ArrayList<String> gfg = new ArrayList<String>();

        // initialize the ArrayList
        gfg.add("Geeks");
        gfg.add("for");
        gfg.add("Geeks");

        // print the ArrayList
        System.out.println("Original ArrayList : " + gfg);

        // create a list and copy all distinct values of the list
        List<String> gfg1 = gfg.stream()
                               .distinct()
                               .collect(Collectors.toList());

        // print the modified list
        System.out.println("Modified List : " + gfg1);
    }
}

Output:

Original ArrayList : [Geeks, for, Geeks]
Modified List : [Geeks, for]
Zoho Interview | Set 2 (On-Campus)
22 Jul, 2019

Recently Zoho visited for the campus placement. I would like to share my experience with geeksforgeeks, because of which I got this offer. Thank you, geeksforgeeks.

Zoho On-Campus Placement Process

1) First Round : Written

40 C output questions in 2 hours: 30 one-mark and 10 two-mark questions. It was NOT MCQ. The questions were challenging and covered all C concepts.

2) Second Round : Coding

Around 150 students were shortlisted for this round. It was a local machine coding round. A staff member was assigned to a group of 5 students and made note of the time taken to solve each question. There were totally 7 questions; I solved 4 questions and did not complete the 5th question.

1) Alternate sorting: Given an array of integers, rearrange the array in such a way that the first element is the first maximum and the second element is the first minimum.

Eg.)
Input : {1, 2, 3, 4, 5, 6, 7}
Output : {7, 1, 6, 2, 5, 3, 4}

2) Remove unbalanced parentheses in a given expression.

Eg.)
Input : ((abc)((de))
Output : ((abc)(de))

Input : (((ab)
Output : (ab)

3) Form a number system with only 3 and 4. Find the nth number of the number system.

Eg.) The numbers are: 3, 4, 33, 34, 43, 44, 333, 334, 343, 344, 433, 434, 443, 444, 3333, 3334, 3343, 3344, 3433, 3434, 3443, 3444 ...

4) Check whether a given mathematical expression is valid.

Eg.)
Input : (a+b)(a*b)
Output : Valid

Input : (ab)(ab+)
Output : Invalid

Input : ((a+b)
Output : Invalid

I don't remember the 5th question.

3) Third Round : Advanced Coding

A matrix game was given with 5 rules. We were asked to implement each of the rules separately.

R3 | - - - |
R2 | - - - |
R1 | - - - |
     C1 C2 C3

Each of the 9 cells can either be empty or filled with an atom. R3, R2, R1 are the rays that originate from the left. C1, C2, C3 are the rays that originate from the bottom of the box.

Input : Position of the atoms and the rays that originate from outside of the box.

Eg.)
3
3 1
2 2
1 3
3
R3 C1 C3

Output : Print the box.
Rule 1: A ray that has an atom in its path should print 'H' (Hit). If it does not have any atoms in its path, the ray should pass to the other side.

       C1      C3
R3 | - - - | R3
H  | - X - |
R1 | - - - | R1
       C1  H  C3

Rule 2 & 3: A ray that has an atom in its diagonally adjacent position should refract.

H | - - - |
H | X - - | R
  | - X - | R
    H   R

Input rays: R1, R2, C3

H  | - X - |
R2 | - - - |
C3 | - - - |
     R2  C3

Rule 4: A ray that has atoms in both of the diagonally adjacent positions should reflect back.

Input ray: C2

| - - - |
| X - X |
| - - - |
    R

Input ray: R2

  | - X - |
R | - - - |
  | - X - |

Rule 5: The deflection of rays should happen in the order of the input rays.

Input rays: R3, R2, C1, C3

H  | - X - |
R2 | - - - |
C3 | - - - |
     R2  C3

The final task was to implement these rules for a dynamic matrix size.

Input : number of rows, number of columns

Eg.)
4 4 (rows & columns)
2 (number of atoms)
4 4 (position of atom)
2 2 (position of atom)
2 (number of rays)
R4 C2 (ray number)

H | - - - X |
  | - - - - |
  | - X - - |
  | - - - - |
      H

The final task was very confusing, and it had to handle all the cases. There are chances for a ray to end at its starting position if the number of rows and columns is more than 5.

4) Fourth Round : Technical Interview

Basic questions from hashing, searching, sorting, JVM, OS, and threads. In-depth questions from the projects that I mentioned in my resume, so don't just add projects that you are not thorough enough with to answer all questions. And a simple puzzle: (x-a)(x-b)(x-c)....(x-z) = ?

5) Fifth Round : HR

General HR questions like why Zoho, how do you see yourself after 5 years, why did you choose the CS/IT stream, tell me about your leadership skills, etc.
Convert DataFrame Column to Numeric in R
16 May, 2021

In this article, we are going to see how to convert a DataFrame column to numeric in R Programming Language.

Every dataframe column is associated with a class, which is an indicator of the data type to which the elements of that column belong. Therefore, in order to simulate data type conversion, the data elements have to be converted to the desired data type; in this case, all the elements of that column should be eligible to become numerical values. The sapply() method can be used to retrieve the data types of the column variables in the form of a vector.

The dataframe that is used for the operations below is as follows:

R

# declare a dataframe
# different data types have been
# indicated for different cols
data_frame <- data.frame(
  col1 = as.character(1:4),
  col2 = factor(4:7),
  col3 = letters[2:5],
  col4 = 97:100,
  stringsAsFactors = FALSE)

print("Original DataFrame")
print(data_frame)

# indicating the data type of
# each variable
sapply(data_frame, class)

Output:

[1] "Original DataFrame"
  col1 col2 col3 col4
1    1    4    b   97
2    2    5    c   98
3    3    6    d   99
4    4    7    e  100

       col1        col2        col3        col4
"character"    "factor" "character"   "integer"

The transform() method can be used to simulate modification of the data object specified in its argument list. The changes have to be explicitly saved into either the same dataframe or a new one. It can be used either to add new variables to the data or to modify existing ones.

Syntax: transform(data, value)

Arguments:
data – the data object to be modified
value – the value to be added

Example 1: Converting factor type columns to numeric

The data may not be preserved while making these conversions; there may be loss or tampering of the data. The result of the transform operation has to be saved in some variable in order to work with it further.
The following code snippet illustrates this:

R

# declare a dataframe
# different data types have been
# indicated for different cols
data_frame <- data.frame(
  col1 = as.character(1:4),
  col2 = factor(4:7),
  col3 = letters[2:5],
  col4 = 97:100,
  stringsAsFactors = FALSE)

print("Original DataFrame")
print(data_frame)

# indicating the data type of each
# variable
sapply(data_frame, class)

# converting factor type column to
# numeric
data_frame_mod <- transform(
  data_frame, col2 = as.numeric(col2))

print("Modified DataFrame")
print(data_frame_mod)

# indicating the data type of each variable
sapply(data_frame_mod, class)

Output:

[1] "Original DataFrame"
  col1 col2 col3 col4
1    1    4    b   97
2    2    5    c   98
3    3    6    d   99
4    4    7    e  100

       col1        col2        col3        col4
"character"    "factor" "character"   "integer"

[1] "Modified DataFrame"
  col1 col2 col3 col4
1    1    1    b   97
2    2    2    c   98
3    3    3    d   99
4    4    4    e  100

       col1        col2        col3        col4
"character"   "numeric" "character"   "integer"

Explanation: The original values in col2 range from 4 to 7, while in the modified dataframe they are integers beginning with 1. This means that during a direct conversion of factor to numeric, the data may not be preserved. In order to preserve the data, the column first needs to be explicitly cast to character with as.character(col-name).
R

# declare a dataframe
# different data types have been
# indicated for different cols
data_frame <- data.frame(
  col1 = as.character(1:4),
  col2 = factor(4:7),
  col3 = letters[2:5],
  col4 = 97:100,
  stringsAsFactors = FALSE)

print("Original DataFrame")
print(data_frame)

# indicating the data type of each
# variable
sapply(data_frame, class)

# converting factor type column to
# numeric
data_frame_mod <- transform(
  data_frame,
  col2 = as.numeric(as.character(col2)))

print("Modified DataFrame")
print(data_frame_mod)

# indicating the data type of each
# variable
sapply(data_frame_mod, class)

Output:

[1] "Original DataFrame"
  col1 col2 col3 col4
1    1    4    b   97
2    2    5    c   98
3    3    6    d   99
4    4    7    e  100

       col1        col2        col3        col4
"character"    "factor" "character"   "integer"

[1] "Modified DataFrame"
  col1 col2 col3 col4
1    1    4    b   97
2    2    5    c   98
3    3    6    d   99
4    4    7    e  100

       col1        col2        col3        col4
"character"   "numeric" "character"   "integer"

Explanation: In order to maintain uniformity of the data, col2 is first cast to character with as.character() and only then to numeric, which preserves the data as it is.

Example 2: Converting character type columns to numeric

Character columns, whether single characters or strings, can be converted into numeric values only if these conversions are possible; otherwise, the data is lost and coerced into missing (NA) values upon execution.

This approach depicts the data loss due to the insertion of missing or NA values in place of characters. These NA values are introduced because the interconversion is not directly possible.
R

# declare a dataframe
# different data types have been
# indicated for different cols
data_frame <- data.frame(
  col1 = as.character(6:9),
  col2 = factor(4:7),
  col3 = letters[2:5],
  col4 = 97:100,
  stringsAsFactors = FALSE)

print("Original DataFrame")
print(data_frame)

# indicating the data type of each
# variable
sapply(data_frame, class)

# converting character type column
# to numeric
data_frame_col1 <- transform(
  data_frame, col1 = as.numeric(col1))

print("Modified col1 DataFrame")
print(data_frame_col1)

# indicating the data type of each
# variable
sapply(data_frame_col1, class)

# converting character type column
# to numeric
data_frame_col3 <- transform(
  data_frame, col3 = as.numeric(col3))

print("Modified col3 DataFrame")
print(data_frame_col3)

# indicating the data type of each
# variable
sapply(data_frame_col3, class)

Output:

[1] "Original DataFrame"
  col1 col2 col3 col4
1    6    4    b   97
2    7    5    c   98
3    8    6    d   99
4    9    7    e  100

       col1        col2        col3        col4
"character"    "factor" "character"   "integer"

[1] "Modified col1 DataFrame"
  col1 col2 col3 col4
1    6    4    b   97
2    7    5    c   98
3    8    6    d   99
4    9    7    e  100

       col1        col2        col3        col4
  "numeric"    "factor" "character"   "integer"

[1] "Modified col3 DataFrame"
  col1 col2 col3 col4
1    6    4   NA   97
2    7    5   NA   98
3    8    6   NA   99
4    9    7   NA  100

       col1        col2        col3        col4
"character"    "factor"   "numeric"   "integer"

Warning message:
In eval(substitute(list(...)), `_data`, parent.frame()) :
  NAs introduced by coercion

Explanation: Using the sapply() method, the class of col3 of the dataframe is character, that is, it consists of single-character values. On application of the transform() method, these character values are converted to missing (NA) values, because characters are not directly convertible to numeric data. So this leads to data loss.

The conversion can instead be made by not using stringsAsFactors = FALSE, first implicitly converting the characters to factors using as.factor(), and then converting to numeric using as.numeric().
The information about the actual strings is still completely lost in this case; the data becomes ambiguous and may lead to actual data loss. The data is simply assigned numeric values based on the lexicographic sorting of the column values.

R

# declare a dataframe
# different data types have been
# indicated for different cols
data_frame <- data.frame(
  col1 = as.character(6:9),
  col2 = factor(4:7),
  col3 = c("Geeks", "For", "Geeks", "Gooks"),
  col4 = 97:100)

print("Original DataFrame")
print(data_frame)

# indicating the data type of each
# variable
sapply(data_frame, class)

# converting character type column
# to numeric
data_frame_col3 <- transform(
  data_frame,
  col3 = as.numeric(as.factor(col3)))

print("Modified col3 DataFrame")
print(data_frame_col3)

# indicating the data type of each
# variable
sapply(data_frame_col3, class)

Output:

[1] "Original DataFrame"
  col1 col2  col3 col4
1    6    4 Geeks   97
2    7    5   For   98
3    8    6 Geeks   99
4    9    7 Gooks  100

     col1      col2      col3      col4
 "factor"  "factor"  "factor" "integer"

[1] "Modified col3 DataFrame"
  col1 col2 col3 col4
1    6    4    2   97
2    7    5    1   98
3    8    6    2   99
4    9    7    3  100

     col1      col2      col3      col4
 "factor"  "factor" "numeric" "integer"

Explanation: The first and third strings in col3 are the same, and are therefore assigned the same numeric value. Overall, the values are sorted in ascending order and then assigned corresponding integer values: "For" is the smallest string in lexicographic order and is assigned the numeric value 1, both instances of "Geeks" are mapped to 2, and "Gooks" is assigned 3. Thus, the type of col3 changes to numeric.

Example 3: Converting logical type columns to numeric

With this approach, the boolean value TRUE is assigned a numerical value of 2 and FALSE is assigned a numerical value of 1. The conversion can be easily carried out while maintaining data uniformity.
In order to preserve the data, the column consisting of these logical values is first transformed to factor values using as.factor() and then these values are assigned numerical values using as.numeric(), which simply assigns integer identifiers to the two levels.

R

# declare a dataframe
# different data types have been
# indicated for different cols
data_frame <- data.frame(
  col1 = as.character(6:9),
  col2 = factor(4:7),
  col3 = c("Geeks", "For", "Geeks", "Gooks"),
  col4 = 97:100,
  col5 = c(TRUE, FALSE, TRUE, FALSE))

print("Original DataFrame")
print(data_frame)

# indicating the data type of each
# variable
sapply(data_frame, class)

# converting logical type column
# to numeric
data_frame_col5 <- transform(
  data_frame,
  col5 = as.numeric(as.factor(col5)))

print("Modified col5 DataFrame")
print(data_frame_col5)

# indicating the data type of each
# variable
sapply(data_frame_col5, class)

Output:

[1] "Original DataFrame"
  col1 col2  col3 col4  col5
1    6    4 Geeks   97  TRUE
2    7    5   For   98 FALSE
3    8    6 Geeks   99  TRUE
4    9    7 Gooks  100 FALSE

     col1      col2      col3      col4      col5
 "factor"  "factor"  "factor" "integer" "logical"

[1] "Modified col5 DataFrame"
  col1 col2  col3 col4 col5
1    6    4 Geeks   97    2
2    7    5   For   98    1
3    8    6 Geeks   99    2
4    9    7 Gooks  100    1

     col1      col2      col3      col4      col5
 "factor"  "factor"  "factor" "integer" "numeric"

Explanation: Using the sapply() method, the class of col5 of the dataframe is logical, that is, it consists of TRUE and FALSE boolean values. On application of the transform() method, these logical values are mapped to integers, and the class of col5 is converted to numeric.

Picked
R DataFrame-Programs
R-DataFrame
R Language
R Programs

Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 28, "s": 0, "text": "\n16 May, 2021" }, { "code": null, "e": 136, "s": 28, "text": "In this article, we are going to see how to convert DataFrame Column to Numeric in R Programming Language. " }, { "code": null, "e": 495, "s": 136, "text":...
Hilbert Matrix
09 Nov, 2020

A Hilbert Matrix is a square matrix whose every element is a unit fraction.

Properties:

It is a symmetric matrix.
Its determinant value is always positive.

Examples:

Input : N = 2
Output : 1     0.5
         0.5   0.33

Input : N = 3
Output : 1.0000 0.5000 0.3333
         0.5000 0.3333 0.2500
         0.3333 0.2500 0.2000

Mathematically, a Hilbert Matrix can be formed by the given formula:

Let H be a Hilbert Matrix of size NxN. Then

    H(i, j) = 1 / (i + j - 1)

Below is the basic implementation of the above formula.
C++

// C++ program for Hilbert Matrix
#include <bits/stdc++.h>
using namespace std;

// Function that generates a Hilbert matrix
void printMatrix(int n)
{
    float H[n][n];

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {

            // using the formula to generate
            // the hilbert matrix
            H[i][j] = (float)1.0
                      / ((i + 1) + (j + 1) - 1.0);
        }
    }

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++)
            cout << H[i][j] << " ";
        cout << endl;
    }
}

// driver function
int main()
{
    int n = 3;
    printMatrix(n);
    return 0;
}

Java

// Java program for Hilbert Matrix
import java.io.*;

class GFG {

    // Function that generates a Hilbert matrix
    static void printMatrix(int n)
    {
        float H[][] = new float[n][n];

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {

                // using the formula to generate
                // the hilbert matrix
                H[i][j] = (float)1.0
                          / ((i + 1) + (j + 1) - (float)1.0);
            }
        }

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++)
                System.out.print(H[i][j] + " ");
            System.out.println();
        }
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 3;
        printMatrix(n);
    }
}

// This code is contributed by anuj_67.
C#

// C# program for Hilbert Matrix
using System;

class GFG {

    // Function that generates a Hilbert matrix
    static void printMatrix(int n)
    {
        float[,] H = new float[n, n];

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {

                // using the formula to generate
                // the hilbert matrix
                H[i, j] = (float)1.0
                          / ((i + 1) + (j + 1) - (float)1.0);
            }
        }

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++)
                Console.Write(H[i, j] + " ");
            Console.WriteLine("");
        }
    }

    // Driver code
    public static void Main()
    {
        int n = 3;
        printMatrix(n);
    }
}

// This code is contributed by mits

Output:

1 0.5 0.333333
0.5 0.333333 0.25
0.333333 0.25 0.2

Mathematical
Matrix
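The article implements the formula in C++, Java, and C#; for completeness, here is a minimal sketch of the same construction in Go (the function name hilbert is my own choice, not from the article). With 0-based indices, the 1-based formula H(i, j) = 1/(i + j - 1) becomes 1/(i + j + 1).

```go
package main

import "fmt"

// hilbert builds an n x n Hilbert matrix using the article's
// formula, rewritten for Go's 0-based indices.
func hilbert(n int) [][]float64 {
	h := make([][]float64, n)
	for i := range h {
		h[i] = make([]float64, n)
		for j := range h[i] {
			h[i][j] = 1.0 / float64(i+j+1)
		}
	}
	return h
}

func main() {
	// Print the 3x3 Hilbert matrix, matching the article's example.
	for _, row := range hilbert(3) {
		fmt.Println(row)
	}
}
```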
[ { "code": null, "e": 52, "s": 24, "text": "\n09 Nov, 2020" }, { "code": null, "e": 138, "s": 52, "text": "A Hilbert Matrix is a square matrix whose each element is a unit fraction.Properties:" }, { "code": null, "e": 3207, "s": 138, "text": "It is a symmetric ...
How to Use Go With MongoDB?
21 Apr, 2022

MongoDB is an open-source NoSQL database. It is a document-oriented database that uses a JSON-like structure called BSON to store documents (i.e. key-value pairs). MongoDB provides the concept of a collection to group documents. In this article, we will discuss how to connect MongoDB with Golang.

Prerequisite: You need to install MongoDB and start it on the default port (i.e. 27017).

Installation: The mongo package provides a MongoDB driver API for Go, which can be used to interact with the MongoDB API. Use the command below to install the mongo package.

go get go.mongodb.org/mongo-driver/mongo

Package context: Package context defines the Context type, which carries deadlines, cancellation signals, and other request-scoped values across API boundaries and between processes.

Now, to connect the Go driver with MongoDB, you need to follow the following steps:

Create mongo.Client with the mongo.Connect function. The mongo.Client handles the connection with MongoDB.
mongo.Client has a method called Ping which returns pong on a successful connection.
Finally, use mongo.Client.Disconnect to close the database connection.

Go

package main

import (
    "context"
    "fmt"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
    "go.mongodb.org/mongo-driver/mongo/readpref"
)

// This is a user defined method to close resources.
// This method closes the mongoDB connection and cancels the context.
func close(client *mongo.Client, ctx context.Context,
           cancel context.CancelFunc) {

    // CancelFunc to cancel the context
    defer cancel()

    // client provides a method to close
    // a mongoDB connection.
    defer func() {

        // client.Disconnect method also has a deadline;
        // returns error if any.
        if err := client.Disconnect(ctx); err != nil {
            panic(err)
        }
    }()
}

// This is a user defined method that returns mongo.Client,
// context.Context, context.CancelFunc and error.
// mongo.Client will be used for further database operations.
// context.Context will be used to set deadlines for the process.
// context.CancelFunc will be used to cancel the context and
// the resources associated with it.
func connect(uri string) (*mongo.Client, context.Context,
                          context.CancelFunc, error) {

    // ctx will be used to set a deadline for the process; here
    // the deadline will be 30 seconds.
    ctx, cancel := context.WithTimeout(context.Background(),
                                       30*time.Second)

    // mongo.Connect returns a mongo.Client
    client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
    return client, ctx, cancel, err
}

// This is a user defined method that accepts
// mongo.Client and context.Context.
// This method is used to ping mongoDB; returns error if any.
func ping(client *mongo.Client, ctx context.Context) error {

    // mongo.Client has Ping to ping mongoDB; the deadline of
    // the Ping method is determined by ctx.
    // Ping returns an error if any occurred, which
    // can then be handled.
    if err := client.Ping(ctx, readpref.Primary()); err != nil {
        return err
    }
    fmt.Println("connected successfully")
    return nil
}

func main() {

    // Get Client, Context, CancelFunc and
    // err from the connect method.
    client, ctx, cancel, err := connect("mongodb://localhost:27017")
    if err != nil {
        panic(err)
    }

    // Release resources when the main
    // function is returned.
    defer close(client, ctx, cancel)

    // Ping mongoDB with the Ping method
    ping(client, ctx)
}

Output:

To insert documents you need to follow the following steps:

Create mongo.Client with the mongo.Connect function.
The mongo.Client handles the connection with MongoDB.
mongo.Client.Database returns a pointer type to the database.
The pointer to the database has a Collection method to select a collection to work with.
The Collection type provides two methods to insert documents into MongoDB:
Collection.InsertOne() inserts a single document into the collection.
Collection.InsertMany() inserts a list of documents.
Finally, use mongo.Client.Disconnect to close the database connection.
Go package main import ( "context" "fmt" "time" "go.mongodb.org/mongo-driver/bson" "go.mongodb.org/mongo-driver/mongo" "go.mongodb.org/mongo-driver/mongo/options" "go.mongodb.org/mongo-driver/mongo/readpref") // This is a user defined method to close resources.// This method closes mongoDB connection and cancel context.func close(client *mongo.Client, ctx context.Context, cancel context.CancelFunc){ defer cancel() defer func() { if err := client.Disconnect(ctx); err != nil { panic(err) } }()} // This is a user defined method that returns mongo.Client,// context.Context, context.CancelFunc and error.// mongo.Client will be used for further database operation.// context.Context will be used set deadlines for process.// context.CancelFunc will be used to cancel context and// resource associated with it.func connect(uri string)(*mongo.Client, context.Context, context.CancelFunc, error) { ctx, cancel := context.WithTimeout(context.Background(), 30 * time.Second) client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri)) return client, ctx, cancel, err} // insertOne is a user defined method, used to insert// documents into collection returns result of InsertOne// and error if any.func insertOne(client *mongo.Client, ctx context.Context, dataBase, col string, doc interface{})(*mongo.InsertOneResult, error) { // select database and collection ith Client.Database method // and Database.Collection method collection := client.Database(dataBase).Collection(col) // InsertOne accept two argument of type Context // and of empty interface result, err := collection.InsertOne(ctx, doc) return result, err} // insertMany is a user defined method, used to insert// documents into collection returns result of// InsertMany and error if any.func insertMany(client *mongo.Client, ctx context.Context, dataBase, col string, docs []interface{})(*mongo.InsertManyResult, error) { // select database and collection ith Client.Database // method and Database.Collection method collection := 
client.Database(dataBase).Collection(col) // InsertMany accept two argument of type Context // and of empty interface result, err := collection.InsertMany(ctx, docs) return result, err} func main() { // get Client, Context, CancelFunc and err from connect method. client, ctx, cancel, err := connect("mongodb://localhost:27017") if err != nil { panic(err) } // Release resource when main function is returned. defer close(client, ctx, cancel) // Create a object of type interface to store // the bson values, that we are inserting into database. var document interface{} document = bson.D{ {"rollNo", 175}, {"maths", 80}, {"science", 90}, {"computer", 95}, } // insertOne accepts client , context, database // name collection name and an interface that // will be inserted into the collection. // insertOne returns an error and a result of // insert in a single document into the collection. insertOneResult, err := insertOne(client, ctx, "gfg", "marks", document) // handle the error if err != nil { panic(err) } // print the insertion id of the document, // if it is inserted. fmt.Println("Result of InsertOne") fmt.Println(insertOneResult.InsertedID) // Now will be inserting multiple documents into // the collection. create a object of type slice // of interface to store multiple documents var documents []interface{} // Storing into interface list. documents = []interface{}{ bson.D{ {"rollNo", 153}, {"maths", 65}, {"science", 59}, {"computer", 55}, }, bson.D{ {"rollNo", 162}, {"maths", 86}, {"science", 80}, {"computer", 69}, }, } // insertMany insert a list of documents into // the collection. insertMany accepts client, // context, database name collection name // and slice of interface. returns error // if any and result of multi document insertion. 
insertManyResult, err := insertMany(client, ctx, "gfg", "marks", documents) // handle the error if err != nil { panic(err) } fmt.Println("Result of InsertMany") // print the insertion ids of the multiple // documents, if they are inserted. for id := range insertManyResult.InsertedIDs { fmt.Println(id) }} Output: Fig 1.2 To find documents you need to follow the following steps: Create mongo.Client with mongo.Connect function. The mongo.Client handles the connection with the MongoDB.mongo.Client.Database returns a pointer type to the database.Pointer to the database has method collection to select a collection to work with.The collection provides the Find() method to query the database.Then finally use mongo.Client.Disconnect to close the Database connection. Create mongo.Client with mongo.Connect function. The mongo.Client handles the connection with the MongoDB. mongo.Client.Database returns a pointer type to the database. Pointer to the database has method collection to select a collection to work with. The collection provides the Find() method to query the database. Then finally use mongo.Client.Disconnect to close the Database connection. Go package main import ( "context" "fmt" "time" "go.mongodb.org/mongo-driver/bson" "go.mongodb.org/mongo-driver/mongo" "go.mongodb.org/mongo-driver/mongo/options" "go.mongodb.org/mongo-driver/mongo/readpref") // This is a user defined method to close resources.// This method closes mongoDB connection and cancel context.func close(client *mongo.Client, ctx context.Context, cancel context.CancelFunc) { defer cancel() defer func() { if err := client.Disconnect(ctx); err != nil { panic(err) } }()} // This is a user defined method that returns// a mongo.Client, context.Context,// context.CancelFunc and error.// mongo.Client will be used for further database// operation. context.Context will be used set// deadlines for process. 
context.CancelFunc will// be used to cancel context and resource// associated with it.func connect(uri string) (*mongo.Client, context.Context, context.CancelFunc, error) { ctx, cancel := context.WithTimeout(context.Background(), 30 * time.Second) client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri)) return client, ctx, cancel, err} // query is user defined method used to query MongoDB,// that accepts mongo.client,context, database name,// collection name, a query and field. // database name and collection name is of type// string. query is of type interface.// field is of type interface, which limits// the field being returned. // query method returns a cursor and error.func query(client *mongo.Client, ctx context.Context,dataBase, col string, query, field interface{})(result *mongo.Cursor, err error) { // select database and collection. collection := client.Database(dataBase).Collection(col) // collection has an method Find, // that returns a mongo.cursor // based on query and field. result, err = collection.Find(ctx, query, options.Find().SetProjection(field)) return} func main() { // Get Client, Context, CancelFunc and err from connect method. client, ctx, cancel, err := connect("mongodb://localhost:27017") if err != nil { panic(err) } // Free the resource when main function is returned defer close(client, ctx, cancel) // create a filter an option of type interface, // that stores bjson objects. var filter, option interface{} // filter gets all document, // with maths field greater that 70 filter = bson.D{ {"maths", bson.D{{"$gt", 70}}}, } // option remove id field from all documents option = bson.D{{"_id", 0}} // call the query method with client, context, // database name, collection name, filter and option // This method returns momngo.cursor and error if any. cursor, err := query(client, ctx, "gfg", "marks", filter, option) // handle the errors. 
    if err != nil {
        panic(err)
    }

    var results []bson.D

    // to get bson objects from the cursor;
    // returns error if any.
    if err := cursor.All(ctx, &results); err != nil {

        // handle the error
        panic(err)
    }

    // printing the result of the query.
    fmt.Println("Query Result")
    for _, doc := range results {
        fmt.Println(doc)
    }
}

Output: Fig 1.3

To update documents you need to follow the following steps:

Create mongo.Client with the mongo.Connect function. The mongo.Client handles the connection with MongoDB.
mongo.Client.Database returns a pointer type to the database.
The pointer to the database has a Collection method to select a collection to work with.
The Collection type provides two methods to update documents:
UpdateOne() modifies a single document matching the query.
UpdateMany() modifies every document matching the query.
Finally, use mongo.Client.Disconnect to close the database connection.
Go package main import ( "context" "fmt" "time" "go.mongodb.org/mongo-driver/bson" "go.mongodb.org/mongo-driver/mongo" "go.mongodb.org/mongo-driver/mongo/options" "go.mongodb.org/mongo-driver/mongo/readpref") // This is a user defined method to close resources.// This method closes mongoDB connection and cancel context.func close(client *mongo.Client, ctx context.Context, cancel context.CancelFunc) { defer cancel() defer func() { if err := client.Disconnect(ctx); err != nil { panic(err) } }()} // This is a user defined method that returns// mongo.Client, context.Context,// context.CancelFunc and error.// mongo.Client will be used for further database// operation.context.Context will be used set// deadlines for process. context.CancelFunc will// be used to cancel context and resource// associated with it.func connect(uri string)(*mongo.Client, context.Context, context.CancelFunc, error) { ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri)) return client, ctx, cancel, err} // UpdateOne is a user defined method, that update// a single document matching the filter.// This methods accepts client, context, database,// collection, filter and update filter and update// is of type interface this method returns// UpdateResult and an error if any.func UpdateOne(client *mongo.Client, ctx context.Context, dataBase, col string, filter, update interface{}) (result *mongo.UpdateResult, err error) { // select the database and the collection collection := client.Database(dataBase).Collection(col) // A single document that match with the // filter will get updated. // update contains the filed which should get updated. 
result, err = collection.UpdateOne(ctx, filter, update) return} // UpdateMany is a user defined method, that update// a multiple document matching the filter.// This methods accepts client, context, database,// collection, filter and update filter and update// is of type interface this method returns// UpdateResult and an error if any.func UpdateMany(client *mongo.Client, ctx context.Context, dataBase, col string, filter, update interface{}) (result *mongo.UpdateResult, err error) { // select the database and the collection collection := client.Database(dataBase).Collection(col) // All the documents that match with the filter will // get updated. // update contains the filed which should get updated. result, err = collection.UpdateMany(ctx, filter, update) return} func main() { // get Client, Context, CancelFunc and err from connect method. client, ctx, cancel, err := connect("mongodb://localhost:27017") if err != nil { panic(err) } // Free the resource when main function in returned defer close(client, ctx, cancel) // filter object is used to select a single // document matching that matches. filter := bson.D{ {"maths", bson.D{{"$lt", 100}}}, } // The field of the document that need to updated. update := bson.D{ {"$set", bson.D{ {"maths", 100}, }}, } // Returns result of updated document and a error. result, err := UpdateOne(client, ctx, "gfg", "marks", filter, update) // handle error if err != nil { panic(err) } // print count of documents that affected fmt.Println("update single document") fmt.Println(result.ModifiedCount) filter = bson.D{ {"computer", bson.D{{"$lt", 100}}}, } update = bson.D{ {"$set", bson.D{ {"computer", 100}, }}, } // Returns result of updated document and a error. 
    result, err = UpdateMany(client, ctx, "gfg",
                             "marks", filter, update)

    // handle error
    if err != nil {
        panic(err)
    }

    // print the count of documents affected
    fmt.Println("update multiple document")
    fmt.Println(result.ModifiedCount)
}

Output: Fig 1.4

To delete documents you need to follow the following steps:

Create mongo.Client with the mongo.Connect function. The mongo.Client handles the connection with MongoDB.
mongo.Client.Database returns a pointer type to the database.
The pointer to the database has a Collection method to select a collection to work with.
The Collection type provides two methods to delete documents in a collection:
DeleteOne() removes a single document matching the query.
DeleteMany() removes every document that matches the query.
Finally, use mongo.Client.Disconnect to close the database connection.
Go

package main

import (
    "context"
    "fmt"
    "time"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
    "go.mongodb.org/mongo-driver/mongo/readpref"
)

// This is a user defined method to close resources.
// This method closes mongoDB connection and cancel context.
func close(client *mongo.Client, ctx context.Context, cancel context.CancelFunc) {
    defer cancel()
    defer func() {
        if err := client.Disconnect(ctx); err != nil {
            panic(err)
        }
    }()
}

// This is a user defined method that returns
// mongo.Client, context.Context,
// context.CancelFunc and error.
// mongo.Client will be used for further
// database operation. context.Context will be
// used to set deadlines for the process.
// context.CancelFunc will be used to cancel
// context and resource associated with it.
func connect(uri string) (*mongo.Client, context.Context, context.CancelFunc, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
    return client, ctx, cancel, err
}

// deleteOne is a user defined function that deletes
// a single document from the collection.
// Returns DeleteResult and an error if any.
func deleteOne(client *mongo.Client, ctx context.Context, dataBase, col string, query interface{}) (result *mongo.DeleteResult, err error) {

    // select document and collection
    collection := client.Database(dataBase).Collection(col)

    // query is used to match a document from the collection.
    result, err = collection.DeleteOne(ctx, query)
    return
}

// deleteMany is a user defined function that deletes
// multiple documents from the collection.
// Returns DeleteResult and an error if any.
func deleteMany(client *mongo.Client, ctx context.Context, dataBase, col string, query interface{}) (result *mongo.DeleteResult, err error) {

    // select document and collection
    collection := client.Database(dataBase).Collection(col)

    // query is used to match documents from the collection.
    result, err = collection.DeleteMany(ctx, query)
    return
}

func main() {

    // get Client, Context, CancelFunc and err from connect method.
    client, ctx, cancel, err := connect("mongodb://localhost:27017")
    if err != nil {
        panic(err)
    }

    // free resource when main function is returned
    defer close(client, ctx, cancel)

    // This query deletes a document when the maths
    // field is greater than 60
    query := bson.D{
        {"maths", bson.D{{"$gt", 60}}},
    }

    // Returns result of deletion and error
    result, err := deleteOne(client, ctx, "gfg", "marks", query)
    if err != nil {
        panic(err)
    }

    // print the count of affected documents
    fmt.Println("No.of rows affected by DeleteOne()")
    fmt.Println(result.DeletedCount)

    // This query deletes documents that have the
    // science field greater than 0
    query = bson.D{
        {"science", bson.D{{"$gt", 0}}},
    }

    // Returns result of deletion and error
    result, err = deleteMany(client, ctx, "gfg", "marks", query)
    if err != nil {
        panic(err)
    }

    // print the count of affected documents
    fmt.Println("No.of rows affected by DeleteMany()")
    fmt.Println(result.DeletedCount)
}

Output:

Fig 1.5
re.MatchObject.span() Method in Python – regex
02 Sep, 2020

re.MatchObject.span() method returns a tuple containing the starting and ending index of the matched string. If the group did not contribute to the match it returns (-1, -1).

Syntax: re.MatchObject.span()

Parameters: group (optional) By default this is 0.

Return: A tuple containing the starting and ending index of the matched string. If the group did not contribute to the match it returns (-1, -1).

AttributeError: If a matching pattern is not found then it raises AttributeError.

Consider the below example:

Example 1:

Python3

# import library
import re

"""We create a re.MatchObject and store it in
match_object variable, '()' parenthesis are used
to define a specific group"""
match_object = re.match(r'(\d+)', '128935')

""" \d in above pattern stands for numerical character
+ is used to match a consecutive set of characters
satisfying a given condition so \d+ will match a
consecutive set of numerical characters"""

# generating the tuple with the
# starting and ending index
print(match_object.span())

Output:

(0, 6)

It’s time to understand the above program. We use the re.match() method to find a match in the given string (‘128935’): the ‘\d’ indicates that we are searching for a numerical character and the ‘+’ indicates that we are searching for continuous numerical characters in the given string. Note the use of ‘()’: the parenthesis is used to define different subgroups.

Example 2: If a match object is not found then it raises AttributeError.
Python3

# import library
import re

"""We create a re.MatchObject and store it in
match_object variable, '()' parenthesis are used
to define a specific group"""
match_object = re.match(r'(\d+)', 'geeks')

""" \d in above pattern stands for numerical character
+ is used to match a consecutive set of characters
satisfying a given condition so \d+ will match a
consecutive set of numerical characters"""

# generating the tuple with the
# starting and ending index
print(match_object.span())

Output:

Traceback (most recent call last):
  File "/home/18a058de83529572f8d50dc9f8bbd34b.py", line 17, in <module>
    print(match_object.span())
AttributeError: 'NoneType' object has no attribute 'span'
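The optional group argument mentioned above can also be passed to span() to get the indices of a specific capture group rather than the whole match. A small sketch — the pattern and string here are my own, not from the examples above:

```python
import re

# Two capture groups: an alphabetic prefix and a numeric suffix.
match_object = re.match(r'([a-z]+)(\d+)', 'geeks123')

print(match_object.span())   # span of the whole match (same as span(0))
print(match_object.span(1))  # span of the first group,  'geeks'
print(match_object.span(2))  # span of the second group, '123'
```

Output:

(0, 8)
(0, 5)
(5, 8)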
How to read the XML file in PowerShell?
Reading the XML file in PowerShell is easy. We have the below XML file for our example,

<Animals>
   <Animal Name="Elephant" Type="Wild">
      <Residence>Forest</Residence>
      <Color>Brown</Color>
   </Animal>
   <Animal Name="Dog" Type="Pet">
      <Residence>Street</Residence>
      <color>Multi</color>
   </Animal>
   <Animal Name="Tiger" Type="Wild">
      <Residence>Forest</Residence>
      <color>Yellow</color>
   </Animal>
</Animals>

Suppose this file is saved as Animals.xml to our current path. To read this XML file we will first get the content of the file using the Get-Content command and then we will perform a type conversion into XML. For example,

[XML]$xmlfile = Get-Content .\Animals.xml

When you check the $xmlfile output,

PS E:\scripts\Powershell> $xmlfile

Animals
-------
Animals

The Animals tag is called an element here, and to get the attributes from the element, we need to use that element. For example,

$xmlfile.Animals

PS E:\scripts\Powershell> $xmlfile.Animals

Animal
------
{Elephant, Dog, Tiger}

Similarly, you can use the Animal elements to expand further attributes, and so on. For example,

$xmlfile.Animals.Animal

PS E:\scripts\Powershell> $xmlfile.Animals.Animal

Name     Type Residence Color
----     ---- --------- -----
Elephant Wild Forest    Brown
Dog      Pet  Street    Multi
Tiger    Wild Forest    Yellow

To get a specific attribute like Name, Type, etc.

$xmlfile.Animals.Animal.name

PS E:\scripts\Powershell> $xmlfile.Animals.Animal.name
Elephant
Dog
Tiger

To get the Type of the animal.

$xmlfile.Animals.Animal.Type

PS E:\scripts\Powershell> $xmlfile.Animals.Animal.Type
Wild
Pet
Wild

If you want two or more attributes together in the table format then you can use the traditional PowerShell Select or Format-Table command. For example,

$xmlfile.Animals.Animal | Select Name, Type

PS E:\scripts\Powershell> $xmlfile.Animals.Animal | Select Name, Type

Name     Type
----     ----
Elephant Wild
Dog      Pet
Tiger    Wild
Transpose a matrix in Java
A transpose of a matrix is the matrix flipped over its diagonal i.e. the row and column indices of the matrix are switched. An example of this is given as follows −

Matrix =
1 2 3
4 5 6
7 8 9

Transpose =
1 4 7
2 5 8
3 6 9

A program that demonstrates this is given as follows.

Live Demo

public class Example {
   public static void main(String args[]) {
      int i, j;
      int row = 3;
      int col = 2;
      int arr[][] = {{2, 5}, {1, 8}, {6, 9}};
      System.out.println("The original matrix is: ");
      for(i = 0; i < row; i++) {
         for(j = 0; j < col; j++) {
            System.out.print(arr[i][j] + " ");
         }
         System.out.print("\n");
      }
      System.out.println("The matrix transpose is: ");
      for(i = 0; i < col; i++) {
         for(j = 0; j < row; j++) {
            System.out.print(arr[j][i] + " ");
         }
         System.out.print("\n");
      }
   }
}

The original matrix is:
2 5
1 8
6 9
The matrix transpose is:
2 1 6
5 8 9
Java Program For Array Rotation
03 Mar, 2021

Write a function rotate(arr[], d, n) that rotates arr[] of size n by d elements. Rotation of the array [1, 2, 3, 4, 5, 6, 7] by 2 will make the array [3, 4, 5, 6, 7, 1, 2].

METHOD 1 (Using temp array)

Input arr[] = [1, 2, 3, 4, 5, 6, 7], d = 2, n = 7
1) Store d elements in a temp array
   temp[] = [1, 2]
2) Shift rest of the arr[]
   arr[] = [3, 4, 5, 6, 7, 6, 7]
3) Store back the d elements
   arr[] = [3, 4, 5, 6, 7, 1, 2]

Time complexity: O(n)
Auxiliary Space: O(d)

METHOD 2 (Rotate one by one)

leftRotate(arr[], d, n)
start
  For i = 0 to i < d
    Left rotate all elements of arr[] by one
end

To rotate by one, store arr[0] in a temporary variable temp, move arr[1] to arr[0], arr[2] to arr[1] ...and finally temp to arr[n-1].

Let us take the same example arr[] = [1, 2, 3, 4, 5, 6, 7], d = 2. Rotate arr[] by one 2 times. We get [2, 3, 4, 5, 6, 7, 1] after the first rotation and [3, 4, 5, 6, 7, 1, 2] after the second rotation.

class RotateArray {
    /* Function to left rotate arr[] of size n by d */
    void leftRotate(int arr[], int d, int n)
    {
        int i;
        for (i = 0; i < d; i++)
            leftRotatebyOne(arr, n);
    }

    void leftRotatebyOne(int arr[], int n)
    {
        int i, temp;
        temp = arr[0];
        for (i = 0; i < n - 1; i++)
            arr[i] = arr[i + 1];
        arr[i] = temp;
    }

    /* utility function to print an array */
    void printArray(int arr[], int size)
    {
        int i;
        for (i = 0; i < size; i++)
            System.out.print(arr[i] + " ");
    }

    // Driver program to test above functions
    public static void main(String[] args)
    {
        RotateArray rotate = new RotateArray();
        int arr[] = { 1, 2, 3, 4, 5, 6, 7 };
        rotate.leftRotate(arr, 2, 7);
        rotate.printArray(arr, 7);
    }
}

// This code has been contributed by Mayank Jaiswal

Output:

3 4 5 6 7 1 2

Time complexity: O(n * d)
Auxiliary Space: O(1)

METHOD 3 (A Juggling Algorithm)

This is an extension of method 2.
Instead of moving one by one, divide the array into different sets, where the number of sets is equal to the GCD of n and d, and move the elements within sets. If the GCD is 1, as it is for the above example array (n = 7 and d = 2), then elements will be moved within one set only: we just start with temp = arr[0] and keep moving arr[I+d] to arr[I] and finally store temp at the right place.

Here is an example for n = 12 and d = 3. GCD is 3.

Let arr[] be {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}

a) Elements are first moved in the first set
   arr[] after this step --> {4 2 3 7 5 6 10 8 9 1 11 12}

b) Then in the second set.
   arr[] after this step --> {4 5 3 7 8 6 10 11 9 1 2 12}

c) Finally in the third set.
   arr[] after this step --> {4 5 6 7 8 9 10 11 12 1 2 3}

class RotateArray {
    /* Function to left rotate arr[] of size n by d */
    void leftRotate(int arr[], int d, int n)
    {
        int i, j, k, temp;
        for (i = 0; i < gcd(d, n); i++) {
            /* move i-th values of blocks */
            temp = arr[i];
            j = i;
            while (true) {
                k = j + d;
                if (k >= n)
                    k = k - n;
                if (k == i)
                    break;
                arr[j] = arr[k];
                j = k;
            }
            arr[j] = temp;
        }
    }

    /* UTILITY FUNCTIONS */
    /* function to print an array */
    void printArray(int arr[], int size)
    {
        int i;
        for (i = 0; i < size; i++)
            System.out.print(arr[i] + " ");
    }

    /* Function to get gcd of a and b */
    int gcd(int a, int b)
    {
        if (b == 0)
            return a;
        else
            return gcd(b, a % b);
    }

    // Driver program to test above functions
    public static void main(String[] args)
    {
        RotateArray rotate = new RotateArray();
        int arr[] = { 1, 2, 3, 4, 5, 6, 7 };
        rotate.leftRotate(arr, 2, 7);
        rotate.printArray(arr, 7);
    }
}

// This code has been contributed by Mayank Jaiswal

Output:

3 4 5 6 7 1 2

Time complexity: O(n)
Auxiliary Space: O(1)

Please refer complete article on Program for array rotation for more details!
A Complete Machine Learning Project Walk-Through in Python: Part One | by Will Koehrsen | Towards Data Science
Reading through a data science book or taking a course, it can feel like you have the individual pieces, but don’t quite know how to put them together. Taking the next step and solving a complete machine learning problem can be daunting, but persevering and completing a first project will give you the confidence to tackle any data science problem. This series of articles will walk through a complete machine learning solution with a real-world dataset to let you see how all the pieces come together.

We’ll follow the general machine learning workflow step-by-step:

Data cleaning and formatting
Exploratory data analysis
Feature engineering and selection
Compare several machine learning models on a performance metric
Perform hyperparameter tuning on the best model
Evaluate the best model on the testing set
Interpret the model results
Draw conclusions and document work

Along the way, we’ll see how each step flows into the next and how to specifically implement each part in Python. The complete project is available on GitHub, with the first notebook here. This first article will cover steps 1–3 with the rest addressed in subsequent posts.

(As a note, this problem was originally given to me as an “assignment” for a job screen at a start-up. After completing the work, I was offered the job, but then the CTO of the company quit and they weren’t able to bring on any new employees. I guess that’s how things go on the start-up scene!)

The first step before we get coding is to understand the problem we are trying to solve and the available data. In this project, we will work with publicly available building energy data from New York City.
The objective is to use the energy data to build a model that can predict the Energy Star Score of a building and interpret the results to find the factors which influence the score.

The data includes the Energy Star Score, which makes this a supervised regression machine learning task:

Supervised: we have access to both the features and the target and our goal is to train a model that can learn a mapping between the two
Regression: the Energy Star Score is a continuous variable

We want to develop a model that is both accurate — it can predict the Energy Star Score close to the true value — and interpretable — we can understand the model predictions. Once we know the goal, we can use it to guide our decisions as we dig into the data and build models.

Contrary to what most data science courses would have you believe, not every dataset is a perfectly curated group of observations with no missing values or anomalies (looking at you mtcars and iris datasets). Real-world data is messy which means we need to clean and wrangle it into an acceptable format before we can even start the analysis. Data cleaning is an un-glamorous, but necessary part of most actual data science problems.

First, we can load in the data as a Pandas DataFrame and take a look:

import pandas as pd
import numpy as np

# Read in data into a dataframe
data = pd.read_csv('data/Energy_and_Water_Data_Disclosure_for_Local_Law_84_2017__Data_for_Calendar_Year_2016_.csv')

# Display top of dataframe
data.head()

This is a subset of the full data which contains 60 columns. Already, we can see a couple of issues: first, we know that we want to predict the ENERGY STAR Score but we don’t know what any of the columns mean. While this isn’t necessarily an issue — we can often make an accurate model without any knowledge of the variables — we want to focus on interpretability, and it might be important to understand at least some of the columns.
When I originally got the assignment from the start-up, I didn’t want to ask what all the column names meant, so I looked at the name of the file, and decided to search for “Local Law 84”. That led me to this page which explains this is an NYC law requiring all buildings of a certain size to report their energy use. More searching brought me to all the definitions of the columns. Maybe looking at a file name is an obvious place to start, but for me this was a reminder to go slow so you don’t miss anything important!

We don’t need to study all of the columns, but we should at least understand the Energy Star Score, which is described as:

A 1-to-100 percentile ranking based on self-reported energy usage for the reporting year. The Energy Star Score is a relative measure used for comparing the energy efficiency of buildings.

That clears up the first problem, but the second issue is that missing values are encoded as “Not Available”. This is a string in Python which means that even the columns with numbers will be stored as object datatypes because Pandas converts a column with any strings into a column of all strings. We can see the datatypes of the columns using the dataframe.info() method:

# See the column data types and non-missing values
data.info()

Sure enough, some of the columns that clearly contain numbers (such as ft²), are stored as objects. We can’t do numerical analysis on strings, so these will have to be converted to number (specifically float) data types! Here’s a little Python code that replaces all the “Not Available” entries with not a number (np.nan), which can be interpreted as numbers, and then converts the relevant columns to the float datatype:

Once the correct columns are numbers, we can start to investigate the data. In addition to incorrect datatypes, another common problem when dealing with real-world data is missing values. These can arise for many reasons and have to be either filled in or removed before we train a machine learning model.
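That replace-and-convert step might look roughly like the sketch below. The column names and the rule for picking which columns are numeric are illustrative only — the actual notebook works on the full 60-column dataframe:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the NYC data: a numeric column stored as strings
# because missing values were encoded as the string "Not Available".
data = pd.DataFrame({
    'Site EUI (kBtu/ft2)': ['90.1', 'Not Available', '55.3'],
    'Property Name': ['A', 'B', 'C'],
})

# Replace the sentinel string with np.nan everywhere.
data = data.replace({'Not Available': np.nan})

# Convert the columns that should be numeric to floats
# (here, anything with units in its name).
for col in data.columns:
    if 'ft2' in col or 'kBtu' in col:
        data[col] = data[col].astype(float)

print(data.dtypes)
```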
First, let’s get a sense of how many missing values are in each column (see the notebook for code). (To create this table, I used a function from this Stack Overflow Forum). While we always want to be careful about removing information, if a column has a high percentage of missing values, then it probably will not be useful to our model. The threshold for removing columns should depend on the problem (here is a discussion), and for this project, we will remove any columns with more than 50% missing values. At this point, we may also want to remove outliers. These can be due to typos in data entry, mistakes in units, or they could be legitimate but extreme values. For this project, we will remove anomalies based on the definition of extreme outliers: Below the first quartile − 3 ∗ interquartile range Above the third quartile + 3 ∗ interquartile range (For the code to remove the columns and the anomalies, see the notebook). At the end of the data cleaning and anomaly removal process, we are left with over 11,000 buildings and 49 features. Now that the tedious — but necessary — step of data cleaning is complete, we can move on to exploring our data! Exploratory Data Analysis (EDA) is an open-ended process where we calculate statistics and make figures to find trends, anomalies, patterns, or relationships within the data. In short, the goal of EDA is to learn what our data can tell us. It generally starts out with a high level overview, then narrows in to specific areas as we find interesting parts of the data. The findings may be interesting in their own right, or they can be used to inform our modeling choices, such as by helping us decide which features to use. The goal is to predict the Energy Star Score (renamed to score in our data) so a reasonable place to start is examining the distribution of this variable. A histogram is a simple yet effective way to visualize the distribution of a single variable and is easy to make using matplotlib. 
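Both rules — dropping columns past the 50% missing threshold and trimming values beyond 3 × the interquartile range — come down to a few lines of pandas. A sketch on invented numbers (the thresholds come from the text; the data does not):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'mostly_missing': [1.0, np.nan, np.nan, np.nan, np.nan],
    'score': [50.0, 55.0, 60.0, 65.0, 1_000_000.0],  # last value is extreme
})

# 1) Drop columns with more than 50% missing values.
missing_frac = df.isnull().mean()
df = df.drop(columns=missing_frac[missing_frac > 0.5].index)

# 2) Remove extreme outliers using the 3 * IQR definition.
q1, q3 = df['score'].quantile(0.25), df['score'].quantile(0.75)
iqr = q3 - q1
df = df[(df['score'] > q1 - 3 * iqr) & (df['score'] < q3 + 3 * iqr)]

print(df)
```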
import matplotlib.pyplot as plt

# Histogram of the Energy Star Score
plt.style.use('fivethirtyeight')
plt.hist(data['score'].dropna(), bins = 100, edgecolor = 'k')
plt.xlabel('Score')
plt.ylabel('Number of Buildings')
plt.title('Energy Star Score Distribution')

This looks quite suspicious! The Energy Star Score is a percentile rank, which means we would expect to see a uniform distribution, with each score assigned to the same number of buildings. However, a disproportionate number of buildings have either the highest, 100, or the lowest, 1, score (higher is better for the Energy Star Score).

If we go back to the definition of the score, we see that it is based on “self-reported energy usage” which might explain the very high scores. Asking building owners to report their own energy usage is like asking students to report their own scores on a test! As a result, this probably is not the most objective measure of a building’s energy efficiency.

If we had an unlimited amount of time, we might want to investigate why so many buildings have very high and very low scores, which we could do by selecting these buildings and seeing what they have in common. However, our objective is only to predict the score and not to devise a better method of scoring buildings! We can make a note in our report that the scores have a suspect distribution, but our main focus is on predicting the score.

A major part of EDA is searching for relationships between the features and the target. Variables that are correlated with the target are useful to a model because they can be used to predict the target. One way to examine the effect of a categorical variable (which takes on only a limited set of values) on the target is through a density plot using the seaborn library.

A density plot can be thought of as a smoothed histogram because it shows the distribution of a single variable. We can color a density plot by class to see how a categorical variable changes the distribution.
The following code makes a density plot of the Energy Star Score colored by the type of building (limited to building types with more than 100 data points):

We can see that the building type has a significant impact on the Energy Star Score. Office buildings tend to have a higher score while Hotels have a lower score. This tells us that we should include the building type in our modeling because it does have an impact on the target. As a categorical variable, we will have to one-hot encode the building type.

A similar plot can be used to show the Energy Star Score by borough:

The borough does not seem to have as large of an impact on the score as the building type. Nonetheless, we might want to include it in our model because there are slight differences between the boroughs.

To quantify relationships between variables, we can use the Pearson Correlation Coefficient. This is a measure of the strength and direction of a linear relationship between two variables. A score of +1 is a perfectly linear positive relationship and a score of -1 is a perfectly negative linear relationship. Several values of the correlation coefficient are shown below:

While the correlation coefficient cannot capture non-linear relationships, it is a good way to start figuring out how variables are related. In Pandas, we can easily calculate the correlations between any columns in a dataframe:

# Find all correlations with the score and sort
correlations_data = data.corr()['score'].sort_values()

The most negative (left) and positive (right) correlations with the target:

There are several strong negative correlations between the features and the target with the most negative the different categories of EUI (these measures vary slightly in how they are calculated). The EUI — Energy Use Intensity — is the amount of energy used by a building divided by the square footage of the buildings. It is meant to be a measure of the efficiency of a building with a lower score being better.
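The office-versus-hotel contrast that the density plot shows visually can also be checked numerically with a quick groupby. A sketch on invented scores — the column name and values are made up, and the article's 100-data-point cutoff becomes a toy threshold of 2 here:

```python
import pandas as pd

data = pd.DataFrame({
    'building_type': ['Office', 'Office', 'Office', 'Hotel', 'Hotel', 'Clinic'],
    'score': [85, 75, 80, 40, 30, 90],
})

# Keep only building types with enough observations...
counts = data['building_type'].value_counts()
common = counts[counts >= 2].index
subset = data[data['building_type'].isin(common)]

# ...then compare a typical score per type.
print(subset.groupby('building_type')['score'].median())
```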
Intuitively, these correlations make sense: as the EUI increases, the Energy Star Score tends to decrease. To visualize relationships between two continuous variables, we use scatterplots. We can include additional information, such as a categorical variable, in the color of the points. For example, the following plot shows the Energy Star Score vs. Site EUI colored by the building type: This plot lets us visualize what a correlation coefficient of -0.7 looks like. As the Site EUI decreases, the Energy Star Score increases, a relationship that holds steady across the building types. The final exploratory plot we will make is known as the Pairs Plot. This is a great exploration tool because it lets us see relationships between multiple pairs of variables as well as distributions of single variables. Here we are using the seaborn visualization library and the PairGrid function to create a Pairs Plot with scatterplots on the upper triangle, histograms on the diagonal, and 2D kernel density plots and correlation coefficients on the lower triangle. To see interactions between variables, we look for where a row intersects with a column. For example, to see the correlation of Weather Norm EUI with score, we look in the Weather Norm EUI row and the score column and see a correlation coefficient of -0.67. In addition to looking cool, plots such as these can help us decide which variables to include in modeling. Feature engineering and selection often provide the greatest return on time invested in a machine learning problem. First of all, let’s define what these two tasks are: Feature engineering: The process of taking raw data and extracting or creating new features. This might mean taking transformations of variables, such as a natural log and square root, or one-hot encoding categorical variables so they can be used in a model. Generally, I think of feature engineering as creating additional features from the raw data. 
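The correlation coefficients that annotate the lower triangle of such a pairs plot can be computed directly with pandas. A sketch on synthetic data built to mimic the EUI–score relationship (none of these numbers come from the real dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
eui = rng.normal(100, 20, 200)

df = pd.DataFrame({
    'Site EUI': eui,
    # score fabricated to be strongly anti-correlated with EUI
    'score': -0.7 * eui + rng.normal(0, 5, 200),
    'noise': rng.normal(0, 1, 200),  # unrelated feature
})

# Pairwise Pearson correlations for every pair of columns --
# the numbers a pairs plot would annotate.
print(df.corr().round(2))
```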
Feature selection: The process of choosing the most relevant features in the data. In feature selection, we remove features to help the model generalize better to new data and create a more interpretable model. Generally, I think of feature selection as subtracting features so we are left with only those that are most important. A machine learning model can only learn from the data we provide it, so ensuring that data includes all the relevant information for our task is crucial. If we don’t feed a model the correct data, then we are setting it up to fail and we should not expect it to learn! For this project, we will take the following feature engineering steps: One-hot encode categorical variables (borough and property use type) Add in the natural log transformation of the numerical variables One-hot encoding is necessary to include categorical variables in a model. A machine learning algorithm cannot understand a building type of “office”, so we have to record it as a 1 if the building is an office and a 0 otherwise. Adding transformed features can help our model learn non-linear relationships within the data. Taking the square root, natural log, or various powers of features is common practice in data science and can be based on domain knowledge or what works best in practice. Here we will include the natural log of all numerical features. The following code selects the numeric features, takes log transformations of these features, selects the two categorical features, one-hot encodes these features, and joins the two sets together. This seems like a lot of work, but it is relatively straightforward in Pandas! After this process we have over 11,000 observations (buildings) with 110 columns (features). Not all of these features are likely to be useful for predicting the Energy Star Score, so now we will turn to feature selection to remove some of the variables. 
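In miniature, that pipeline — select the numeric columns, add their natural logs, one-hot encode the categoricals, join — might look like this (toy columns, not the real ones):

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({
    'Site EUI': [100.0, 50.0, 200.0],
    'Borough': ['Manhattan', 'Brooklyn', 'Manhattan'],
})

# Numeric features plus their natural-log transformations.
numeric = data.select_dtypes('number').copy()
for col in list(numeric.columns):
    numeric['log_' + col] = np.log(numeric[col])

# One-hot encode the categorical features.
categorical = pd.get_dummies(data.select_dtypes('object'))

# Join the two sets of features back together.
features = pd.concat([numeric, categorical], axis=1)
print(features.columns.tolist())
```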
Many of the 110 features we have in our data are redundant because they are highly correlated with one another. For example, here is a plot of Site EUI vs Weather Normalized Site EUI which have a correlation coefficient of 0.997.

Features that are strongly correlated with each other are known as collinear and removing one of the variables in these pairs of features can often help a machine learning model generalize and be more interpretable. (I should point out we are talking about correlations of features with other features, not correlations with the target, which help our model!)

There are a number of methods to calculate collinearity between features, with one of the most common the variance inflation factor. In this project, we will use the correlation coefficient to identify and remove collinear features. We will drop one of a pair of features if the correlation coefficient between them is greater than 0.6. For the implementation, take a look at the notebook (and this Stack Overflow answer).

While this value may seem arbitrary, I tried several different thresholds, and this choice yielded the best model. Machine learning is an empirical field and is often about experimenting and finding what performs best! After feature selection, we are left with 64 total features and 1 target.

# Remove any columns with all na values
features = features.dropna(axis=1, how = 'all')
print(features.shape)

(11319, 65)
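A minimal version of that thresholded-correlation pruning (my own sketch, not the notebook's exact implementation — only the 0.6 cutoff is from the text):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
x = rng.normal(size=100)
features = pd.DataFrame({
    'site_eui': x,
    'weather_norm_eui': x + rng.normal(scale=0.01, size=100),  # near-duplicate
    'floor_area': rng.normal(size=100),                        # independent
})

# Absolute correlations, keeping only the upper triangle so that
# each pair of features is considered exactly once.
corr = features.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))

# Drop one feature from every pair correlated above 0.6.
to_drop = [col for col in upper.columns if (upper[col] > 0.6).any()]
features = features.drop(columns=to_drop)
print(to_drop)
```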
For regression problems, a reasonable naive baseline is to guess the median value of the target on the training set for all the examples in the test set. This sets a relatively low bar for any model to surpass.

The metric we will use is mean absolute error (mae) which measures the average absolute error on the predictions. There are many metrics for regression, but I like Andrew Ng’s advice to pick a single metric and then stick to it when evaluating models. The mean absolute error is easy to calculate and is interpretable.

Before calculating the baseline, we need to split our data into a training and a testing set:

The training set of features is what we provide to our model during training along with the answers. The goal is for the model to learn a mapping between the features and the target.
The testing set of features is used to evaluate the trained model. The model is not allowed to see the answers for the testing set and must make predictions using only the features. We know the answers for the test set so we can compare the test predictions to the answers.

We will use 70% of the data for training and 30% for testing:

# Split into 70% training and 30% testing set
X, X_test, y, y_test = train_test_split(features, targets,
                                        test_size = 0.3,
                                        random_state = 42)

Now we can calculate the naive baseline performance:

The baseline guess is a score of 66.00
Baseline Performance on the test set: MAE = 24.5164

The naive estimate is off by about 25 points on the test set.
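The baseline computation itself is only a few lines. A sketch with invented targets — the 66.00 and 24.5 figures quoted above come from the real data, not from this toy example:

```python
import numpy as np

# Invented train/test targets.
y = np.array([60.0, 66.0, 70.0, 80.0, 50.0])   # training targets
y_test = np.array([100.0, 30.0, 66.0])         # test targets

# Naive baseline: always guess the median of the training targets.
baseline_guess = np.median(y)

def mae(y_true, y_pred):
    """Mean absolute error between true values and predictions."""
    return np.mean(np.abs(y_true - y_pred))

print('The baseline guess is a score of %0.2f' % baseline_guess)
print('Baseline Performance on the test set: MAE = %0.4f'
      % mae(y_test, baseline_guess))
```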
The score ranges from 1–100, so this represents an error of 25%, quite a low bar to surpass! In this article we walked through the first three steps of a machine learning problem. After defining the question, we:

Cleaned and formatted the raw data

Performed an exploratory data analysis to learn about the dataset

Developed a set of features that we will use for our models

Finally, we also completed the crucial step of establishing a baseline against which we can judge our machine learning algorithms. The second post (available here) will show how to evaluate machine learning models using Scikit-Learn, select the best model, and perform hyperparameter tuning to optimize the model. The third post, dealing with model interpretation and reporting results, is here. As always, I welcome feedback and constructive criticism and can be reached on Twitter @koehrsen_will.
Detectron2 & Python Wheels Cache. Building a Detectron2 wheel package on... | by Ian Ormesher | Towards Data Science
I’ve been recently doing some work with a custom Detectron2 model. I’ve been training my model on a Linux VM in the cloud, but I wanted to use this trained model in a Windows environment. The official line from Facebook Research about Detectron2 is that it isn’t supported on Windows. But I wasn’t going to let that stop me from trying! The good news is I was able to do it, and this article describes the process I used. In the process of explaining how I did this, I’m going to assume that you are familiar/comfortable with the following:

Python virtual environments

Git command line

The first thing is to create a virtual environment and then activate it:

virtualenv detectron2
detectron2\Scripts\activate

From your virtual environment, install PyTorch and torchvision (CPU version only):

pip install torch==1.5.1+cpu torchvision==0.6.1+cpu -f https://download.pytorch.org/whl/torch_stable.html

It’s also probably worth installing fvcore and the Windows version of pycocotools, which is pycocotools-windows:

pip install fvcore
pip install pycocotools-windows

To download the Detectron2 source we are going to clone a particular tagged release (here the tag is ‘v0.1’) using the Git command line:

git clone --depth 1 --branch v0.1 https://github.com/facebookresearch/detectron2.git

Once you’ve cloned the release, change to the detectron2 folder you’ve just created with this clone from within your virtual environment (not your Git command line). E.g. if you cloned into the folder D:\Source\Python\detectron2, then change into that directory and issue the following:

pip install -U .

Don’t forget the full stop at the end — that’s really important!
This should build the detectron2 package, and if all has gone well you should see something like this at the end: Because we cloned the v0.1 tag, you’ll see that it’s built detectron2-0.1.

You’ll notice that when you install a Python package it checks its local wheel cache to see if it’s already downloaded this in the past, and if it has then it uses the cached version. To create your own wheel cache you need to first install the wheel package:

pip install wheel

Once you’ve done that, we want to list the currently installed packages for our virtual environment and put them into a file (here it’s called requirements.txt):

pip freeze > requirements.txt

To copy/create all the wheels for the currently installed packages and put them in a cache folder (here I’m using D:\wheels_cache), do the following:

pip wheel --wheel-dir=D:\wheels_cache -r requirements.txt

One of the benefits of using a wheels cache is that in the future you can install directly from the cached wheel rather than having to build the package again. This is useful if you want to do this on a different PC where you may not have all the dependencies. You can do that like this:

pip install D:\wheels_cache\detectron2-0.1-cp38-cp38-win_amd64.whl

One of the things to note with this approach is that wheels are dependent on the target platform and Python version. This one is for Windows 64-bit and Python 3.8. You can even install everything from the requirements.txt file, telling it to look for the wheels in your cache folder:

pip install --no-index --find-links=D:\wheels_cache -r requirements.txt

Once I had to set up a PC to be installed at a customer site, and this box was to be isolated, i.e. it would not be allowed to connect to the internet or a local network. I had to be able to install Python and all the packages needed on this machine to run the machine learning software I had implemented. The approach I took was to build a wheels cache for the machine of all the packages that were needed.
That way installation could happen from the cache.
GATE | GATE-CS-2015 (Set 3) | Question 65 - GeeksforGeeks
28 Jun, 2021

Consider a software project with the following information domain characteristics for calculation of the function point metric.

Number of external inputs (I) = 30
Number of external outputs (O) = 60
Number of external inquiries (E) = 23
Number of files (F) = 08
Number of external interfaces (N) = 02

It is given that the complexity weighting factors for I, O, E, F and N are 4, 5, 4, 10 and 7, respectively. It is also given that, out of fourteen value adjustment factors that influence the development effort, four factors are not applicable, each of the other four factors has value 3, and each of the remaining six factors has value 4. The computed value of the function point metric is ____________

(A) 612.06
(B) 212.05
(C) 305.09
(D) 806.9

Answer: (A)

Explanation: Function point metrics provide a standardized method for measuring the various functions of a software application.

The value of the function point metric = UFP * VAF, where UFP is the Unadjusted Function Point count and VAF is the Value Adjustment Factor.

UFP = 4*30 + 60*5 + 23*4 + 8*10 + 7*2 = 606

VAF = (TDI * 0.01) + 0.65, where TDI is the Total Degree of Influence

TDI = 4*0 + 4*3 + 6*4 = 36

VAF = (TDI * 0.01) + 0.65 = 36*0.01 + 0.65 = 0.36 + 0.65 = 1.01

FP = UFP * VAF = 606 * 1.01 = 612.06

Refer https://cs.uwaterloo.ca/~apidduck/CS846/Seminars/abbas.pdf
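The arithmetic in the explanation can be checked with a short script. This mirrors the numbers given in the question; the dictionary names are just for illustration:

```python
# Unadjusted function points: counts times complexity weights
counts = {'inputs': 30, 'outputs': 60, 'inquiries': 23,
          'files': 8, 'interfaces': 2}
weights = {'inputs': 4, 'outputs': 5, 'inquiries': 4,
           'files': 10, 'interfaces': 7}
ufp = sum(counts[k] * weights[k] for k in counts)

# Total degree of influence: 4 factors at 0, 4 at 3, 6 at 4
tdi = 4 * 0 + 4 * 3 + 6 * 4
vaf = tdi * 0.01 + 0.65

fp = round(ufp * vaf, 2)
print(ufp, tdi, fp)  # 606 36 612.06
```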
Matplotlib - 3D Wireframe plot
Wireframe plot takes a grid of values and projects it onto the specified three-dimensional surface, and can make the resulting three-dimensional forms quite easy to visualize. The plot_wireframe() function is used for the purpose −

from mpl_toolkits import mplot3d
import numpy as np
import matplotlib.pyplot as plt

def f(x, y):
    return np.sin(np.sqrt(x ** 2 + y ** 2))

x = np.linspace(-6, 6, 30)
y = np.linspace(-6, 6, 30)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)

fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_wireframe(X, Y, Z, color='black')
ax.set_title('wireframe')
plt.show()

The above code will generate the following output −
Deploying Models to Flask. A walk-through on how to deploy machine... | by Jeremy Chow | Towards Data Science
Code for this project can be found here. You’ve built a model using Pandas, Sci-kit Learn, and Jupyter Notebooks. The results look great in your notebooks, but how do you share what you’ve made with others? To share models, we need to deploy them, ideally to some website, or at minimum using Python files. Today I will walk you through the process of deploying a model to a website using Python and Flask, using my chatroom toxicity classifier models as an example. This article assumes you know how to write Python code, know the basics of HTML, and have Flask installed (pip install flask or conda install flask). We’ll start by going into the file structure! Flask wants things in a specific folder layout in order for it to load properly. I’ve taken a snapshot of my file structure for this project, but there are only a couple important elements, listed below.

The necessary elements are:

Static folder — Exists in the root directory. This contains all your static assets like css files, images, fonts, and compressed models.

Template folder — Exists in the root directory. This is the default location that template HTML files MUST be in for Flask to render them properly. Any page that interacts with your models will be in here.

predictor.html — This is the front-facing HTML file that users can interact with and that your model will output its results to. This is an example of a file that needs to be in the Templates folder.

predictor_api.py — Exists in root directory. This file contains the functions that run your model and data preprocessing.

predictor_app.py — Exists in root directory. This file acts as the link between your API file calling the model and the HTML file to display the results and take in user inputs.

Everything else in the image above is not necessary for Flask to operate properly and can be ignored for the rest of this article. Let’s go into how you set these files up! To start, you need to make an API Python file. This is the file that contains all the methods that preprocess your data, load your model, then run your model on the data inputs given by the user. Note that this is independent of Flask, in the sense that this is just a Python file that runs your model with no Flask functionality.
Here is the skeleton of my predictor_api.py file that contains all the functions to run my model:

# predictor_api.py - contains functions to run model

def clean_word(text):
    # Removes symbols, numbers, some stop words
    return cleaned_text

def raw_chat_to_model_input(raw_input_string):
    # Converts string into cleaned text, converts it to model input
    return word_vectorizer.transform(cleaned_text)

def predict_toxicity(raw_input_string):
    # Takes in a user input string, predicts the toxicity levels
    model_input = raw_chat_to_model_input(raw_input_string)
    results = []
    # I use a dictionary of multiple models in this project
    for key, model in model_dict.items():
        results.append(round(model.predict_proba(model_input)))
    return results

def make_prediction(input_chat):
    '''
    Given string to classify, returns the input argument and the
    dictionary of model classifications in a dict so that it may be
    passed back to the HTML page.
    '''
    # Calls on previous functions to get probabilities of toxicity
    pred_probs = predict_toxicity(input_chat)
    probs = [{'name': list(model_dict.keys())[index],
              'prob': pred_probs[index]}
             for index in np.argsort(pred_probs)[::-1]]
    return (input_chat, probs)

This is personal preference, and your data processing steps will vary with the type of model you’re building as well as the data you’re working with, but I break up the functions in this toxic chat classifier into:

String cleaning

Vectorization of the string to feed it into the model

Model predictions using the output of step 2

A final make_prediction function which calls all the previous steps in a pipeline from raw input to model predictions in one function call.
Side note: you’ll want to pass your predictions in a dictionary format, as this is the format that Flask passes information between its templates and your Python files. Once you’ve set up your functions, you need some way of testing them. This is when we set up a main section to our script:

if __name__ == '__main__':
    from pprint import pprint
    print("Checking to see what empty string predicts")
    print('input string is ')
    chat_in = 'bob'
    pprint(chat_in)

    x_input, probs = make_prediction(chat_in)
    print(f'Input values: {x_input}')
    print('Output probabilities')
    pprint(probs)

The __name__ == '__main__' portion will only run if we launch the script on the command line using python script_name.py. This allows us to debug our functions and add any unit tests in an area of the file that will not run once our app or website is launched. This part of the code is purely for you as the programmer to make sure your functions are working. Now your API file should be working. Cool, how do we get it onto a website? This is where Flask comes in. Flask is a Python framework that uses Jinja2 HTML templates to allow you to easily create webpages using Python. The Flask framework deals with a lot of the backend web stuff so that you can do more with just a couple lines of Python code. To get started, you’re going to need to create your app Python file:

# predictor_app.py

import flask
from flask import request
from predictor_api import make_prediction

# Initialize the app
app = flask.Flask(__name__)

# An example of routing:
# If they go to the page "/" (this means a GET request
# to the page http://127.0.0.1:5000/)
@app.route("/", methods=["GET", "POST"])
def predict():
    # request.args contains all the arguments passed by our form,
    # comes built in with flask. It is a dictionary of the form
    # "form name (as set in template)" (key): "string in the
    # textbox" (value)
    print(request.args)
    if request.args:
        x_input, predictions = \
            make_prediction(request.args['chat_in'])
        print(x_input)
        return flask.render_template('predictor.html',
                                     chat_in=x_input,
                                     prediction=predictions)
    else:
        # For first load, request.args will be an empty ImmutableDict
        # type. If this is the case we need to pass an empty string
        # into make_prediction function so no errors are thrown.
        x_input, predictions = make_prediction('')
        return flask.render_template('predictor.html',
                                     chat_in=x_input,
                                     prediction=predictions)

# Start the server, continuously listen to requests.
if __name__ == "__main__":
    # For local development, set to True:
    app.run(debug=False)
    # For public web serving:
    # app.run(host='0.0.0.0')

There’s a lot going on here, so I’ll try to break it down into digestible sections. First we import flask, and explicitly import request for quality of life purposes. Next, we have:

from predictor_api import make_prediction

This goes to the API file we wrote earlier and imports the make_prediction function, which takes in the user input, runs all data preprocessing, then outputs our predictions. I’m going to skip to the bottom code briefly. As mentioned earlier, if __name__ == "__main__": runs when you run the Python script through the command line. In order to host our web page, we need to run python your_app_name.py. This will call app.run() and run our web page locally, hosted on your computer. After initializing the app, we have to tell Flask what we want to do when the web page loads. The line @app.route("/", methods=["GET", "POST"]) tells Flask what to do when we load the home page of our website. The GET method is the type of request your web browser will send the website when it accesses the URL of the web page.
Don’t worry about the POST method, because it’s a request conventionally used when a user wants to change a website, and it doesn’t have much relevance for us in this deployment process. If you want to add another page to your website, you can add:

@app.route("/page_name", methods=["GET", "POST"])
def do_something():
    return flask.render_template('page_name.html', var_1=v1, var_2=v2)

The function name under the routing doesn’t mean anything, it just contains the code that will run upon the user reaching that page. In the function below the routing, we have request.args. This is a dictionary (JSON) object that contains the information submitted when someone clicks the ‘Submit’ button on our form. I’ll show below how we assign what is in the request.args object. Once we have the args, we pass it into our model using the function we imported from our other file, then render the template with the predicted values that our model spit out by returning:

return flask.render_template('predictor.html', chat_in=x_input, prediction=predictions)

This function takes in the HTML file that our website runs, then passes in our variables outputted from the model (x_input, predictions) and sends them to the HTML template as chat_in, prediction. From here, the job of the model is done, and now we just need to worry about displaying the results to the users! First we need to make a way for the user to pass in their input to our model. Since we’re taking chat input, let’s make a text box and a submit button:

<!-- HTML code -->
<input type="text" name="chat_in" maxlength="500">
<!-- Submit button -->
<input type="submit" value="Submit" method="get">

When the user enters a value and hits the submit button, it will send a GET request to the template and will fill the request.args dictionary, with the key being the name flag in our text box. To access the user’s input, we would use request.args['chat_in'] in our Python app file.
We can pass this into the model, as shown above in this line:

make_prediction(request.args['chat_in'])

So we’ve made our predictions using our Python files, but now it’s time to display them using HTML templates. Templates are just a way to change HTML code so that we can update the user with new values (such as our predictions). My goal isn’t to teach you HTML here, so I’m not going to elaborate on the HTML code (but you can find it in the GitHub link if you’re curious), but basically you will display the passed variables using the following syntax:

<!-- predictor.html file -->
<!DOCTYPE html>
<html lang="en">
<p> Here are my predictions!<br>
{{ chat_in }}
{{ prediction[0]['prob'] }}
</p>
</html>

In this example I am displaying the chat_in variable that was passed in our render_template function above. I also display the first element in the dictionary prediction that I passed to the template, which contains multiple models. From a functional standpoint, we are done! From here, you can focus on making your app look pretty and responsive. That’s it for Flask! With some extra HTML code and possibly some JavaScript, you can have a pretty and interactive website that runs on your computer. From here, you can deploy the website to a platform of your choice, be it Heroku, Amazon Web Services, or Google Cloud. Thanks for reading, and keep an eye out for my next article on deploying to Heroku!
dmesg command in Linux for driver messages - GeeksforGeeks
15 May, 2019

The dmesg command, also called “driver message” or “display message”, is used to examine the kernel ring buffer and print the message buffer of the kernel. The output of this command contains the messages produced by the device drivers.

Usage of dmesg: When the computer boots up, there are a lot of messages (logs) generated during the system start-up, so you can read all these messages by using the dmesg command. The contents of the kernel ring buffer are also stored in the /var/log/dmesg file. The dmesg command can be useful when the system encounters any problem during its start-up, so by reading the output of the dmesg command you can actually find out where the problem occurred (as there are many steps in the system boot-up sequence).

Syntax:

dmesg [options]

Options:

-C, --clear : clear the ring buffer.
-c, --read-clear : clear the ring buffer after printing its contents.
-D, --console-off : disable printing messages to the console.
-E, --console-on : enable printing messages to the console.
-F, --file file : read the messages from the given file.
-h, --help : display help text.
-k, --kernel : print kernel messages.
-t, --notime : do not print kernel timestamps.
-u, --userspace : print userspace messages.

You can check more options here.

Since the output of the dmesg command is very large, for finding specific information in the dmesg output it is better to use the dmesg command with the less or grep command.
dmesg | less

or

dmesg | grep "text_to_search"

For example: this is the output of the dmesg command when I plugged in a USB drive and then unplugged it. This is part of the output of the dmesg command; since the output is very large, you can try it on your Linux terminal.

[ 6982.128179] usb 2-2: New USB device found, idVendor=0930, idProduct=6544
[ 6982.128185] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6982.128188] usb 2-2: Product: DataTraveler 2.0
[ 6982.128190] usb 2-2: Manufacturer: Kingston
[ 6982.128193] usb 2-2: SerialNumber: C86000886407C141DA1401A2
[ 6982.253866] usb-storage 2-2:1.0: USB Mass Storage device detected
[ 6982.254035] scsi host3: usb-storage 2-2:1.0
[ 6982.254716] usbcore: registered new interface driver usb-storage
[ 6982.265103] usbcore: registered new interface driver uas
[ 6983.556572] scsi 3:0:0:0: Direct-Access Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 4
[ 6983.557750] sd 3:0:0:0: Attached scsi generic sg1 type 0
[ 6983.557863] sd 3:0:0:0: [sdb] 30310400 512-byte logical blocks: (15.5 GB/14.5 GiB)
[ 6983.558092] sd 3:0:0:0: [sdb] Write Protect is off
[ 6983.558095] sd 3:0:0:0: [sdb] Mode Sense: 45 00 00 00
[ 6983.558314] sd 3:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 6983.560061] sdb: sdb1
[ 6983.563403] sd 3:0:0:0: [sdb] Attached SCSI removable disk
[ 7045.431954] wlp2s0: disassociated from a0:55:4f:27:bd:01 (Reason: 1)
[ 7049.003277] wlp2s0: authenticate with a0:55:4f:27:bd:01
[ 7049.006680] wlp2s0: send auth to a0:55:4f:27:bd:01 (try 1/3)
[ 7049.015786] wlp2s0: authenticated
[ 7049.021441] wlp2s0: associate with a0:55:4f:27:bd:01 (try 1/3)
[ 7049.038590] wlp2s0: RX AssocResp from a0:55:4f:27:bd:01 (capab=0x431 status=0 aid=140)
[ 7049.043217] wlp2s0: associated
[ 7049.063811] wlp2s0: Limiting TX power to 30 (30 - 0) dBm as advertised by a0:55:4f:27:bd:01
[ 7129.257920] usb 2-2: USB disconnect, device number 3

Since the output is always large, it is advisable to use the dmesg command along with the grep command. For example:
dmesg | grep "usb"

It gives the output:

[ 5944.925979] usb 2-1: new low-speed USB device number 2 using xhci_hcd
[ 5945.085658] usb 2-1: New USB device found, idVendor=04d9, idProduct=1702
[ 5945.085663] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 5945.085666] usb 2-1: Product: USB Keyboard
[ 5945.085669] usb 2-1: Manufacturer:
[ 5945.222536] input: USB Keyboard as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.0/0003:04D9:1702.0003/input/input19
[ 5945.282554] hid-generic 0003:04D9:1702.0003: input,hidraw2: USB HID v1.10 Keyboard [ USB Keyboard] on usb-0000:00:14.0-1/input0
[ 5945.284803] input: USB Keyboard as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.1/0003:04D9:1702.0004/input/input20
[ 5945.342340] hid-generic 0003:04D9:1702.0004: input,hidraw3: USB HID v1.10 Device [ USB Keyboard] on usb-0000:00:14.0-1/input1
[ 6981.985310] usb 2-2: new high-speed USB device number 3 using xhci_hcd
[ 6982.128179] usb 2-2: New USB device found, idVendor=0930, idProduct=6544
[ 6982.128185] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6982.128188] usb 2-2: Product: DataTraveler 2.0
[ 6982.128190] usb 2-2: Manufacturer: Kingston
[ 6982.128193] usb 2-2: SerialNumber: C86000886407C141DA1401A2
[ 6982.253866] usb-storage 2-2:1.0: USB Mass Storage device detected
[ 6982.254035] scsi host3: usb-storage 2-2:1.0
[ 6982.254716] usbcore: registered new interface driver usb-storage
[ 6982.265103] usbcore: registered new interface driver uas
[ 7129.257920] usb 2-2: USB disconnect, device number 3

Output with options. For example:

dmesg -t

-t specifies output without kernel timestamps. Output:

usb 2-2: new high-speed USB device number 3 using xhci_hcd
usb 2-2: New USB device found, idVendor=0930, idProduct=6544
usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-2: Product: DataTraveler 2.0
usb 2-2: Manufacturer: Kingston
usb 2-2: SerialNumber: C86000886407C141DA1401A2
usb-storage 2-2:1.0: USB Mass Storage device detected
scsi host3: usb-storage 2-2:1.0
usbcore: registered new interface driver usb-storage
usbcore: registered new interface driver uas
scsi 3:0:0:0: Direct-Access Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 4
sd 3:0:0:0: Attached scsi generic sg1 type 0
sd 3:0:0:0: [sdb] 30310400 512-byte logical blocks: (15.5 GB/14.5 GiB)
sd 3:0:0:0: [sdb] Write Protect is off
sd 3:0:0:0: [sdb] Mode Sense: 45 00 00 00
sd 3:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sdb: sdb1
sd 3:0:0:0: [sdb] Attached SCSI removable disk
wlp2s0: disassociated from a0:55:4f:27:bd:01 (Reason: 1)
wlp2s0: authenticate with a0:55:4f:27:bd:01
wlp2s0: send auth to a0:55:4f:27:bd:01 (try 1/3)
wlp2s0: authenticated
wlp2s0: associate with a0:55:4f:27:bd:01 (try 1/3)
wlp2s0: RX AssocResp from a0:55:4f:27:bd:01 (capab=0x431 status=0 aid=140)
wlp2s0: associated
wlp2s0: Limiting TX power to 30 (30 - 0) dBm as advertised by a0:55:4f:27:bd:01
usb 2-2: USB disconnect, device number 3

References:
1) http://www.linfo.org/dmesg.html
2) Wikipedia: dmesg

– Mandeep Singh
Curve Navigation Drawer in Android using ArcNavigationView - GeeksforGeeks
14 Oct, 2020

The Navigation Drawer is a layout that can be seen in certain applications and consists of shortcuts to other activities (Intents). This drawer can generally be seen at the left edge of the screen, which is the default. A button to access the Navigation Drawer is provided by default in the action bar. UI changes can be applied to the Navigation Drawer. One such idea for changing the Navigation Drawer UI is making its outer edge an inner or an outer arc. This gives a creative and richer look to the drawer. Since some changes are limited by the Android Studio packages, such implementations require an outside library for making the desired changes. This is done by implementing a dependency (library) that provides the desired behaviour. Similarly, we implemented a dependency to fulfill our requirements. To create an inside arc design for the Navigation Drawer on an Android device, follow these steps:

Step 1: Create a New Project

Create a Navigation Drawer Activity in Android Studio. To create a new project in Android Studio, please refer to How to Create/Start a New Project in Android Studio. As you click Finish, the project builds; this might take a minute or two.

Step 2: Add the Dependency to the build.gradle of the app

Add the following dependency to the build.gradle (app) file:

implementation ‘com.rom4ek:arcnavigationview:2.0.0’

Step 3: Working with the activity_main.xml file

When the setup is ready, go to the activity_main.xml file, which represents the UI of the project. Add the script as shown below between the opening and closing DrawerLayout elements.
XML

<?xml version="1.0" encoding="utf-8"?>
<androidx.drawerlayout.widget.DrawerLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/drawer_layout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fitsSystemWindows="true"
    tools:openDrawer="start">

    <include
        layout="@layout/app_bar_main"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!--ArcNavigationView Element-->
    <com.rom4ek.arcnavigationview.ArcNavigationView
        android:id="@+id/nav_view"
        android:layout_width="wrap_content"
        android:layout_height="match_parent"
        android:layout_gravity="start"
        android:background="@android:color/white"
        android:fitsSystemWindows="true"
        app:itemBackground="@android:color/white"
        app:headerLayout="@layout/nav_header_main"
        app:menu="@menu/activity_main_drawer"
        app:arc_cropDirection="cropInside"
        app:arc_width="96dp" />

</androidx.drawerlayout.widget.DrawerLayout>

Similarly, to create an outside arc, make changes to the layout file.
Changes made to the activity_main.xml file:

XML

<?xml version="1.0" encoding="utf-8"?>
<androidx.drawerlayout.widget.DrawerLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/drawer_layout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fitsSystemWindows="true"
    tools:openDrawer="start">

    <include
        layout="@layout/app_bar_main"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!--ArcNavigationView Element-->
    <com.rom4ek.arcnavigationview.ArcNavigationView
        android:id="@+id/nav_view"
        android:layout_width="wrap_content"
        android:layout_height="match_parent"
        android:layout_gravity="start"
        android:background="@android:color/white"
        android:fitsSystemWindows="true"
        app:itemBackground="@android:color/white"
        app:headerLayout="@layout/nav_header_main"
        app:menu="@menu/activity_main_drawer"
        app:arc_width="96dp"
        app:arc_cropDirection="cropOutside" />

</androidx.drawerlayout.widget.DrawerLayout>
How can we convert subqueries to INNER JOIN?
To understand it, we are using the data from the following tables −

mysql> Select * from customers;
+-------------+----------+
| Customer_Id | Name     |
+-------------+----------+
| 1           | Rahul    |
| 2           | Yashpal  |
| 3           | Gaurav   |
| 4           | Virender |
+-------------+----------+
4 rows in set (0.00 sec)

mysql> Select * from reserve;
+------+------------+
| ID   | Day        |
+------+------------+
| 1    | 2017-12-30 |
| 2    | 2017-12-28 |
| 2    | 2017-12-25 |
| 1    | 2017-12-24 |
| 3    | 2017-12-26 |
+------+------------+
5 rows in set (0.00 sec)

Now, the following is a subquery which will find the names of all the customers who have reserved a car.

mysql> Select Name from customers WHERE customer_id IN (Select id from reserve);
+----------+
| Name     |
+----------+
| Rahul    |
| Yashpal  |
| Gaurav   |
+----------+
3 rows in set (0.00 sec)

Now, with the help of the following steps, we can convert the above subquery into an inner join −

1. Move the ‘Reserve’ table named in the subquery to the FROM clause.
2. The WHERE clause compares the customer_id column to the ids returned from the subquery. Hence convert the expression to an explicit direct comparison between the id columns of the two tables.

mysql> SELECT Name from customers, reserve WHERE customer_id = id;
+----------+
| Name     |
+----------+
| Rahul    |
| Yashpal  |
| Yashpal  |
| Rahul    |
| Gaurav   |
+----------+
5 rows in set (0.00 sec)

As we can see, the above result is not exactly the same as the result of the subquery, so use the DISTINCT keyword to get the same result as follows:

mysql> SELECT DISTINCT name from customers,reserve WHERE customer_id = id;
+----------+
| Name     |
+----------+
| Rahul    |
| Yashpal  |
| Gaurav   |
+----------+
3 rows in set (0.03 sec)
Count N-digit numbers made up of even and prime digits at odd and even positions respectively - GeeksforGeeks
13 Sep, 2021

Given a positive integer N, the task is to find the number of integers of N digits having even digits at odd indices and prime digits at even indices.

Examples:

Input: N = 2
Output: 20
Explanation: Following are the possible 2-digit numbers satisfying the given criteria {20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 50, 52, 54, 56, 58, 70, 72, 74, 76, 78}. Therefore, the count of such numbers is 20.

Input: N = 5
Output: 1600

Approach: The given problem can be solved using the concept of Permutations and Combinations by observing the fact that there are only 4 choices for the even positions as [2, 3, 5, 7] and 5 choices for the odd positions as [0, 2, 4, 6, 8]. Therefore, the count of N-digit numbers satisfying the given criteria is given by:

total count = 4^P * 5^Q, where P and Q are the numbers of even and odd positions respectively.

Below is the implementation of the above approach:

C++

// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

int m = 1000000007;

// Function to find the value of x ^ y
int power(int x, int y)
{
    // Stores the value of x ^ y
    int res = 1;

    // Iterate until y is positive
    while (y > 0) {

        // If y is odd
        if ((y & 1) != 0)
            res = (res * x) % m;

        // Divide y by 2
        y = y >> 1;
        x = (x * x) % m;
    }

    // Return the value of x ^ y
    return res;
}

// Function to find the number of N-digit
// integers satisfying the given criteria
int countNDigitNumber(int N)
{
    // Count of even positions
    int ne = N / 2 + N % 2;

    // Count of odd positions
    int no = floor(N / 2);

    // Return the resultant count
    return power(4, ne) * power(5, no);
}

// Driver Code
int main()
{
    int N = 5;

    cout << countNDigitNumber(N) % m << endl;
}

// This code is contributed by SURENDRA_GANGWAR

Java

// Java program for the above approach
import java.io.*;

class GFG {

    static int m = 1000000007;

    // Function to find the value of x ^ y
    static int power(int x, int y)
    {
        // Stores the value of x ^ y
        int res = 1;

        // Iterate until y is positive
        while (y > 0) {

            // If y is odd
            if ((y & 1) != 0)
                res = (res * x) % m;

            // Divide y by 2
            y = y >> 1;
            x = (x * x) % m;
        }

        // Return the value of x ^ y
        return res;
    }

    // Function to find the number of N-digit
    // integers satisfying the given criteria
    static int countNDigitNumber(int N)
    {
        // Count of even positions
        int ne = N / 2 + N % 2;

        // Count of odd positions
        int no = (int)Math.floor(N / 2);

        // Return the resultant count
        return power(4, ne) * power(5, no);
    }

    // Driver Code
    public static void main(String[] args)
    {
        int N = 5;

        System.out.println(countNDigitNumber(N) % m);
    }
}

// This code is contributed by sanjoy_62.

Python3

# Python program for the above approach
import math

m = 10**9 + 7

# Function to find the value of x ^ y
def power(x, y):

    # Stores the value of x ^ y
    res = 1

    # Iterate until y is positive
    while y > 0:

        # If y is odd
        if (y & 1) != 0:
            res = (res * x) % m

        # Divide y by 2
        y = y >> 1
        x = (x * x) % m

    # Return the value of x ^ y
    return res

# Function to find the number of N-digit
# integers satisfying the given criteria
def countNDigitNumber(N):

    # Count of even positions
    ne = N // 2 + N % 2

    # Count of odd positions
    no = N // 2

    # Return the resultant count
    return power(4, ne) * power(5, no)

# Driver Code
if __name__ == '__main__':

    N = 5
    print(countNDigitNumber(N) % m)

# This code is contributed by ChitraNayal

C#

// C# program for the above approach
using System;

class GFG {

    static int m = 1000000007;

    // Function to find the value of x ^ y
    static int power(int x, int y)
    {
        // Stores the value of x ^ y
        int res = 1;

        // Iterate until y is positive
        while (y > 0) {

            // If y is odd
            if ((y & 1) != 0)
                res = (res * x) % m;

            // Divide y by 2
            y = y >> 1;
            x = (x * x) % m;
        }

        // Return the value of x ^ y
        return res;
    }

    // Function to find the number of N-digit
    // integers satisfying the given criteria
    static int countNDigitNumber(int N)
    {
        // Count of even positions
        int ne = N / 2 + N % 2;

        // Count of odd positions
        int no = (int)Math.Floor((double)N / 2);

        // Return the resultant count
        return power(4, ne) * power(5, no);
    }

    // Driver Code
    public static void Main()
    {
        int N = 5;

        Console.Write(countNDigitNumber(N) % m);
    }
}

// This code is contributed by splevel62.

Javascript

<script>
// JavaScript program for the above approach
var m = 10 ** 9 + 7

// Function to find the value of x ^ y
function power(x, y)
{
    // Stores the value of x ^ y
    var res = 1

    // Iterate until y is positive
    while (y > 0)
    {
        // If y is odd
        if ((y & 1) != 0)
            res = (res * x) % m

        // Divide y by 2
        y = y >> 1
        x = (x * x) % m
    }

    // Return the value of x ^ y
    return res
}

// Function to find the number of N-digit
// integers satisfying the given criteria
function countNDigitNumber(N)
{
    // Count of even positions
    var ne = Math.floor(N / 2) + N % 2

    // Count of odd positions
    var no = Math.floor(N / 2)

    // Return the resultant count
    return power(4, ne) * power(5, no)
}

// Driver Code
let N = 5
document.write(countNDigitNumber(N) % m);

// This code is contributed by Potta Lokesh
</script>

Output:
1600

Time Complexity: O(log N)
Auxiliary Space: O(1)
Python - Log Gamma Distribution in Statistics - GeeksforGeeks
10 Jan, 2020

scipy.stats.loggamma() is a log gamma continuous random variable. It is inherited from the generic methods as an instance of the rv_continuous class. It completes the methods with details specific for this particular distribution.

Parameters :
q : lower and upper tail probability
x : quantiles
loc : [optional] location parameter. Default = 0
scale : [optional] scale parameter. Default = 1
size : [tuple of ints, optional] shape or random variates.
moments : [optional] composed of letters [‘mvsk’]; ‘m’ = mean, ‘v’ = variance, ‘s’ = Fisher’s skew and ‘k’ = Fisher’s kurtosis. (default = ‘mv’).

Results : log gamma continuous random variable

Code #1 : Creating log gamma continuous random variable

# importing library
from scipy.stats import loggamma

numargs = loggamma.numargs
a, b = 4.32, 3.18
rv = loggamma(a, b)

print("RV : \n", rv)

Output :

RV :
scipy.stats._distn_infrastructure.rv_frozen object at 0x000002A9D6AE0588

Code #2 : log gamma continuous variates and probability distribution

import numpy as np

quantile = np.arange(0.01, 1, 0.1)

# Random Variates
R = loggamma.rvs(a, b)
print("Random Variates : \n", R)

# PDF
R = loggamma.pdf(a, b, quantile)
print("\nProbability Distribution : \n", R)

Output :

Random Variates :
3.941580350134656

Probability Distribution :
[1.76757240e-27 1.53388070e-24 6.78322725e-22 1.62994246e-19
 2.25532281e-17 1.89389591e-15 1.01217167e-13 3.59400367e-12
 8.81510518e-11 1.54700389e-09]

Code #3 : Graphical Representation.

import numpy as np
import matplotlib.pyplot as plt

distribution = np.linspace(0, np.minimum(rv.dist.b, 3))
print("Distribution : \n", distribution)

plot = plt.plot(distribution, rv.pdf(distribution))

Output :

Distribution :
[0.         0.06122449 0.12244898 0.18367347 0.24489796 0.30612245
 0.36734694 0.42857143 0.48979592 0.55102041 0.6122449  0.67346939
 0.73469388 0.79591837 0.85714286 0.91836735 0.97959184 1.04081633
 1.10204082 1.16326531 1.2244898  1.28571429 1.34693878 1.40816327
 1.46938776 1.53061224 1.59183673 1.65306122 1.71428571 1.7755102
 1.83673469 1.89795918 1.95918367 2.02040816 2.08163265 2.14285714
 2.20408163 2.26530612 2.32653061 2.3877551  2.44897959 2.51020408
 2.57142857 2.63265306 2.69387755 2.75510204 2.81632653 2.87755102
 2.93877551 3.        ]

Code #4 : Varying Positional Arguments

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 5, 100)

# Varying positional arguments
y1 = loggamma.pdf(x, 1, 3)
y2 = loggamma.pdf(x, 1, 4)
plt.plot(x, y1, "*", x, y2, "r--")

Output : (plot of the two pdf curves)
What is the size of void pointer in C/C++?
The size of a void pointer varies from system to system. If the system is 16-bit, the size of a void pointer is 2 bytes. If the system is 32-bit, the size of a void pointer is 4 bytes. If the system is 64-bit, the size of a void pointer is 8 bytes.

Here is an example to find the size of a void pointer in C language (note that sizeof yields a size_t, so %zu is the correct format specifier):

#include <stdio.h>

int main() {
   void *ptr;
   printf("The size of pointer value : %zu", sizeof(ptr));
   return 0;
}

The size of pointer value : 8

In the above example, a void type pointer variable is created and, by using the sizeof operator, the size of the void pointer is found out.
Array Subset of another array | Practice | GeeksforGeeks
Given two arrays: a1[0..n-1] of size n and a2[0..m-1] of size m. Task is to check whether a2[] is a subset of a1[] or not. Both the arrays can be sorted or unsorted. It may be assumed that elements in both array are distinct. Example 1: Input: a1[] = {11, 1, 13, 21, 3, 7} a2[] = {11, 3, 7, 1} Output: Yes Explanation: a2[] is a subset of a1[] Example 2: Input: a1[] = {1, 2, 3, 4, 5, 6} a2[] = {1, 2, 4} Output: Yes Explanation: a2[] is a subset of a1[] Example 3: Input: a1[] = {10, 5, 2, 23, 19} a2[] = {19, 5, 3} Output: No Explanation: a2[] is not a subset of a1[] Your Task: You don't need to read input or print anything. Your task is to complete the function isSubset() which takes the array a1[], a2[], its size n and m as inputs and return "Yes" if arr2 is subset of arr1 else return "No" if arr2 is not subset of arr1. Expected Time Complexity: O(n) Expected Auxiliary Space: O(n) Constraints: 1 <= n,m <= 105 1 <= a1[i], a2[j] <= 105 0 kuldeepahirwarmba3 hours ago public String isSubset( long a1[], long a2[], long n, long m) { long len = Math.min(n,m); long max = Math.max(n,m); boolean result = false; for(int i=0;i<len;i++) { for(int j=0;j<max;j++) { if(a2.length<=a1.length) { if(a2[i]==a1[j]) { result = true; break; }else{ result = false; } } } if(result == false) { break; } } if(result) { return "Yes"; }else{ return "No"; } 0 pmisrah31051 day ago Solution using Maps: string isSubset(int a1[], int a2[], int n, int m) { string str=""; bool ans=true; unordered_map<int,int> mp; for(int i=0;i<n;i++){ mp[a1[i]]+=1; } for(int i=0;i<m;i++){ if(mp.count(a2[i])>0){ ans=true; } else{ ans=false; break; } } if(ans){ str+="Yes"; } else{ str+="No"; } return str;} 0 ashrithbhgowda2 days ago //for c++ unordered_set<int> s; for(int i=0;i<n;i++){ s.insert(a1[i]); } bool found=false; for(int i=0;i<m;i++){ if(s.find(a2[i])!=s.end()){ found=true; } else{ found=false; break; } } if(found==true){ return "Yes"; } else{ return "No"; } 0 anshitasinha3 days ago PYTHON SOLUTION: def isSubset( 
a1, a2, n, m): count=0 for i in a2: if i in a1: count+=1 if count==len(a2): return "Yes" return "No" 0 anigambe203 days ago string isSubset(int a1[], int a2[], int n, int m) { map<int,int> ma; int cnt=0; for(int i=0; i<n; i++) { ma[a1[i]]++; } for(int i=0; i<m; i++) { if(ma[a2[i]]) { cnt++; } } if(cnt!=m) { return "No"; } else { return "Yes"; } } 0 minipandey3325 days ago string isSubset(int a1[], int a2[], int n, int m) { map <int,int> m1; for(int i=0; i<n; i++) { m1[a1[i]]++; // } for(int j=0; j<m; j++) { m1[a2[j]]++; if(m1[a2[j]] <= 1) return "No"; } return "Yes"; } 0 bhavya0381cse196 days ago class Compute { public String isSubset( long a1[], long a2[], long n, long m) { Set <Long> set=new HashSet<>(); for(int i=0;i<n;i++) { set.add(a1[i]); } int flag=0; for(int i=0;i<m;i++) { if(!set.contains(a2[i])) { flag=1; break; } } if(flag==0) { return "Yes"; } else { return "No"; } }} 0 blaze_061 week ago Just check if the elements of a2 are in a1 if they are then simply delete it from a2 costing constant time. Later on check if the size of a2 is larger than 0, if it is then you know a2 aint a subset of a1. 
python- 0.1sec (i am sure it will be faster in other languages)

0 pawancoding0011 1 week ago

string isSubset(int a1[], int a2[], int n, int m) {
    unordered_set<int> s(a1, a1 + n);
    for (int i = 0; i < m; i++) {
        if (s.find(a2[i]) == s.end())
            return "No";
    }
    return "Yes";
}

0 nitishkamath63590 2 weeks ago

// c++ solution
string isSubset(int a1[], int a2[], int n, int m) {
    // use unordered_map (ump) to improve time complexity
    unordered_map<int, int> ump;

    // accessing each element and putting it into the unordered map
    int i = 0;
    while (i < n) {
        ump[a1[i]] = 1;
        i++;
    }

    // checking condition:
    // if an element is found in ump then its freq must be 1,
    // because it is given that elements are unique
    int j = 0;
    while (j < m) {
        // if freq != 1 -> freq is zero -> element is not present
        // in the master array -> return No
        if (ump[a2[j]] != 1) {
            return "No";
        }
        j++;
    }

    // if all elements are present in ump then we will print Yes
    return "Yes";
}
Save the plots into a PDF in matplotlib
Using the plt.savefig("myImagePDF.pdf", format="pdf", bbox_inches="tight") method, we can save a figure in PDF format.

- Create a dictionary with Column 1 and Column 2 as the keys, with values i and i*i, where i runs from 0 to 9, respectively.
- Create a data frame using pd.DataFrame(d), d created in step 1.
- Plot the data frame with ‘o’ and ‘rx’ style.
- To save the file in PDF format, use the savefig() method, where the image name is myImagePDF.pdf and format = ”pdf”.
- To show the image, use the plt.show() method.

import pandas as pd
from matplotlib import pyplot as plt

d = {'Column 1': [i for i in range(10)],
     'Column 2': [i * i for i in range(10)]}
df = pd.DataFrame(d)
df.plot(style=['o', 'rx'])
plt.savefig("myImagePDF.pdf", format="pdf", bbox_inches="tight")
plt.show()
Check if a string contains two non overlapping sub-strings "geek" and "keeg" - GeeksforGeeks
21 May, 2021

Given a string str, the task is to check whether the string contains two non-overlapping sub-strings s1 = “geek” and s2 = “keeg” such that s2 starts after s1 ends.

Examples:

Input: str = “geekeekeeg”
Output: Yes
“geek” and “keeg” both are present in the given string without overlapping.

Input: str = “geekeeg”
Output: No
“geek” and “keeg” both are present but they overlap.

Approach: Check if the sub-string “geek” occurs before “keeg” in the given string. This problem is simpler when we use a predefined function strstr in order to find the occurrence of a sub-string in the given string.

Below is the implementation of the above approach:

C++

// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Function that returns true
// if s contains two non overlapping
// sub strings "geek" and "keeg"
bool isValid(char s[])
{
    char* p;

    // If "geek" and "keeg" are both present
    // in s without over-lapping and "keeg"
    // starts after "geek" ends
    if ((p = strstr(s, "geek")) && (strstr(p + 4, "keeg")))
        return true;
    return false;
}

// Driver code
int main()
{
    char s[] = "geekeekeeg";

    if (isValid(s))
        cout << "Yes";
    else
        cout << "No";

    return 0;
}

Java

// Java implementation of the approach
class GFG {

    // Function that returns true
    // if s contains two non overlapping
    // sub Strings "geek" and "keeg"
    static boolean isValid(String s)
    {
        // If "geek" and "keeg" are both present
        // in s without over-lapping and "keeg"
        // starts after "geek" ends
        if ((s.indexOf("geek") != -1)
            && (s.indexOf("keeg", s.indexOf("geek") + 4) != -1))
            return true;
        return false;
    }

    // Driver code
    public static void main(String args[])
    {
        String s = "geekeekeeg";

        if (isValid(s))
            System.out.println("Yes");
        else
            System.out.println("No");
    }
}

// This code is contributed by Arnab Kundu

Python3

# Python 3 implementation of the approach

# Function that returns true
# if s contains two non overlapping
# sub strings "geek" and "keeg"
def isValid(s):

    # If "geek" and "keeg" are both present
    # in s without over-lapping and "keeg"
    # starts after "geek" ends
    p = s.find("geek")
    if p != -1 and s.find("keeg", p + 4) != -1:
        return True
    return False

# Driver code
if __name__ == "__main__":

    s = "geekeekeeg"

    if (isValid(s)):
        print("Yes")
    else:
        print("No")

# This code is contributed by ChitraNayal

C#

// C# implementation of the approach
using System;

class GFG {

    // Function that returns true
    // if s contains two non overlapping
    // sub Strings "geek" and "keeg"
    static bool isValid(string s)
    {
        // If "geek" and "keeg" are both present
        // in s without over-lapping and "keeg"
        // starts after "geek" ends
        if ((s.IndexOf("geek") != -1)
            && (s.IndexOf("keeg", s.IndexOf("geek") + 4) != -1))
            return true;
        return false;
    }

    // Driver code
    public static void Main()
    {
        string s = "geekeekeeg";

        if (isValid(s))
            Console.WriteLine("Yes");
        else
            Console.WriteLine("No");
    }
}

// This code is contributed by AnkitRai01

Javascript

<script>
// JavaScript implementation of the approach

// Function that returns true
// if s contains two non overlapping
// sub Strings "geek" and "keeg"
function isValid(s)
{
    // If "geek" and "keeg" are both present
    // in s without over-lapping and "keeg"
    // starts after "geek" ends
    if ((s.indexOf("geek") != -1)
        && (s.indexOf("keeg", s.indexOf("geek") + 4) != -1))
        return true;
    return false;
}

// Driver Code
var s = "geekeekeeg";

if (isValid(s))
    document.write("Yes");
else
    document.write("No");

// This code is contributed by Khushboogoyal499
</script>

Output:
Yes
numpy.ndarray.size() method | Python - GeeksforGeeks
26 Mar, 2020

numpy.ndarray.size is an attribute that returns the number of elements in the array. It is equal to np.prod(a.shape), i.e., the product of the array’s dimensions.

Syntax : ndarray.size
Parameters :
arr : [array_like] Input array.
Return : [int] The number of elements in the array.

Code #1 :

# Python program explaining
# numpy.ndarray.size

# importing numpy as geek
import numpy as geek

arr = geek.zeros((3, 4, 2), dtype = geek.complex128)

gfg = arr.size

print(gfg)

Output :
24

Code #2 :

# Python program explaining
# numpy.ndarray.size

# importing numpy as geek
import numpy as geek

arr = geek.zeros((3, 4, 2), dtype = geek.complex128)

gfg = geek.prod(arr.shape)

print(gfg)

Output :
24
How to create a rectangle with rounded corners in HTML5 SVG?
SVG stands for Scalable Vector Graphics and is a language for describing 2D graphics and graphical applications in XML; the XML is then rendered by an SVG viewer. Most web browsers can display SVG just like they can display PNG, GIF, and JPG.

To draw a rectangle in HTML SVG, use the SVG <rect> element. For rounded corners, set the rx and ry attributes, which round the corners of the rectangle. You can try to run the following code to learn how to draw a rectangle with rounded corners in HTML5 SVG.

<!DOCTYPE html>
<html>
   <head>
      <style>
         #svgelem {
            position: relative;
            left: 10%;
            -webkit-transform: translateX(-20%);
            -ms-transform: translateX(-20%);
            transform: translateX(-20%);
         }
      </style>
      <title>SVG</title>
   </head>
   <body>
      <h2>HTML5 SVG Rectangle</h2>
      <svg id="svgelem" width="300" height="200" xmlns="http://www.w3.org/2000/svg">
         <rect x="50" y="20" rx="30" ry="30" width="150" height="150" style="fill:blue;" />
      </svg>
   </body>
</html>
Auto adjust font size in Seaborn heatmap using Matplotlib
To adjust the font size in a Seaborn heatmap, we can take the following steps −

- Create a dictionary with some mathematical expressions.
- Create a data frame using the Pandas DataFrame.
- Create a heatmap using the heatmap() method.
- To adjust the font size in the Seaborn heatmap, change the fontsize value.
- To display the figure, use the show() method.

import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
import pandas as pd

plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True

d = {
   'y=1/x': [1 / i for i in range(1, 10)],
   'y=x': [i for i in range(1, 10)],
   'y=x^2': [i * i for i in range(1, 10)],
   'y=x^3': [i * i * i for i in range(1, 10)]
}

df = pd.DataFrame(d)
ax = sns.heatmap(df, vmax=1)
plt.xlabel('Mathematical Expression', fontsize=16)
plt.show()
Is the Point Inside the Polygon? | by Anirudh Topiwala | Towards Data Science
“In computational geometry, the point-in-polygon (PIP) problem asks whether a given point in the plane lies inside, outside, or on the boundary of a polygon.” Wikipedia.

A quick and simple algorithm to find whether a point lies inside, on or outside a polygon is very useful in various applications like computer graphics, geographical information systems (GIS), motion planning, CAD, computer vision, etc. Being a computer vision engineer, some direct applications I see are:

1. Lane Detection: as a lane can be represented as a trapezoid, PIP can be used to determine if a pixel lies inside the trapezoid (lane).
2. Calculate area of Object using Edge Detection: any object’s area can be calculated by performing edge detection on it, and the area can then be calculated by checking if the pixels lie inside the polygon formed by the edges of the object.

Some methods to solve the PIP problem are the Ray Casting algorithm and the Winding Number algorithm. A point to note is that the winding number algorithm is more accurate than ray casting for points really close to the polygon. Also, with the newer implementations it is faster than the ray casting algorithm. The PIP problem is further simplified for convex polygons and we will discuss one such method to solve it.

In this article I am going to explain the Winding Number Algorithm, which solves PIP for any polygon. I will then cover a simplified method for solving PIP for convex polygons. C++ code for both these methods can be found here.

Winding number is defined by the number of times a curve travels counterclockwise around a point.
The algorithm states that for any point inside the polygon the winding number will be non-zero. Therefore it is also known as the nonzero-rule algorithm.

One way to calculate the winding number is to calculate the angle subtended by each side of the polygon with the query point. This is indicated by angles θ1, θ2, θ3 and θ4 for sides AB, BC, CD and DA respectively. If these angles add up to 2π, the point lies inside the polygon, and if the sum is 0, the point lies outside.

sum_of_angles = θ1 + θ2 + θ3 + θ4 = 2π -> Point is inside
sum_of_angles = θ1 + θ2 + θ3 + θ4 = 0  -> Point is outside

The time complexity of this algorithm is O(n), similar to the ray casting algorithm, but it involves repeated calculation of inverse trigonometric functions like atan2 to get the angle subtended by the sides of the polygon with the query point. One way to reduce the complexity, as described by W. Randolph Franklin, is by simply observing the edges that actually wind around the query point, whilst all the other edges can be ignored.
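As a quick illustration of this angle-summation method, here is a minimal sketch (in Python rather than the article's C++; the function names below are my own and not from the linked repository):

```python
import math

def winding_angle_sum(polygon, point):
    """Sum of the signed angles subtended at `point` by each polygon edge.

    polygon: list of (x, y) vertices in order (clockwise or anticlockwise).
    For a simple polygon the sum is close to +/-2*pi when the point is
    inside and close to 0 when it is outside.
    """
    px, py = point
    total = 0.0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Vectors from the query point to the two endpoints of the edge.
        ax, ay = x1 - px, y1 - py
        bx, by = x2 - px, y2 - py
        # Signed angle between the two vectors, via atan2(cross, dot).
        total += math.atan2(ax * by - ay * bx, ax * bx + ay * by)
    return total

def is_inside_by_angles(polygon, point):
    # Magnitude ~2*pi -> inside, ~0 -> outside.
    return abs(winding_angle_sum(polygon, point)) > math.pi
```

For the unit square [(0, 0), (1, 0), (1, 1), (0, 1)], the point (0.5, 0.5) gives a sum of 2π while (2, 2) gives 0. The repeated atan2 calls are exactly the cost that Franklin's observation avoids.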
Example:

For part a) in the figure below:
Initially wn = 0. Only edges CD and AB cut the line passing through P1 and parallel to the y-axis.
1) wn++ for CD, as it is an upward crossing for which P1 is left of CD.
2) wn won't be changed, as P1 is on the left of AB (a downward crossing).
As the final winding number wn = 1, which is not equal to zero, point P1 lies inside the polygon.
A similar case can be made for part b) of the figure, although it should be noted that we are avoiding the added complexity of the polygon and hence making the algorithm more efficient.

For part c) in the figure below:
Initially wn = 0. Only edges DE and BC cut the line passing through P1 and parallel to the y-axis.
1) wn++ for DE, as it is an upward crossing for which P1 is left of DE.
2) wn-- for BC, as it is a downward crossing for which P1 is right of BC.
As the final winding number wn = 0, point P1 lies outside the polygon.

Cpp code: (GitHub Handle)

substitute_point_in_line(): This function calculates on which side of the line the point lies.
is_point_inside_polygon(): This is the complete algorithm; it calculates whether a point is inside, outside or on any polygon, given the vertices of the polygon in anticlockwise direction. Fixing the direction of the edges by listing the vertices in anticlockwise direction helps lock in the left and the right side of each line segment (polygon edge).
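The crossing-based winding number just described can be sketched as follows (a Python adaptation of the standard nonzero-rule test; the names are illustrative and not the repository's actual API):

```python
def is_left(x1, y1, x2, y2, xp, yp):
    # > 0 if (xp, yp) is left of the directed edge (x1, y1) -> (x2, y2),
    # = 0 if it is on the edge, < 0 if it is to the right.
    return (x2 - x1) * (yp - y1) - (xp - x1) * (y2 - y1)

def winding_number(polygon, point):
    """Winding number of `polygon` (vertices listed anticlockwise) around `point`."""
    xp, yp = point
    wn = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if y1 <= yp:
            # Upward crossing with the query point strictly to its left.
            if y2 > yp and is_left(x1, y1, x2, y2, xp, yp) > 0:
                wn += 1
        else:
            # Downward crossing with the query point strictly to its right.
            if y2 <= yp and is_left(x1, y1, x2, y2, xp, yp) < 0:
                wn -= 1
    return wn

def is_point_inside_polygon(polygon, point):
    # Non-zero winding number -> inside (the nonzero rule).
    return winding_number(polygon, point) != 0
```

Only the edges that cross the horizontal line through the query point contribute, and no trigonometric functions are needed.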
We discuss this case separately because the most common types of polygons encountered in computer vision are convex polygons. These include all triangles, squares, parallelograms, trapezoids, etc.

Algorithm: For a convex polygon, the sides can be treated as a path starting from any one vertex. A query point is said to be inside the polygon if it lies on the same side of all the line segments making up the path. This can be seen in the diagram below. To find on which side of a line segment the point lies, we can simply substitute the point into the equation of the line segment. For example, for the line formed by (x1, y1) and (x2, y2), the query point (xp, yp) can be substituted like:

result = (yp - y1) * (x2 - x1) - (xp - x1) * (y2 - y1)

Looking at the segment in anticlockwise direction:

result > 0: Query point lies on the left of the line.
result = 0: Query point lies on the line.
result < 0: Query point lies on the right of the line.

Cpp code: (GitHub Handle)

substitute_point_in_line(): This function calculates on which side of the line the point lies.

is_point_inside_convex_polygon(): This is the complete algorithm; it calculates whether a point is inside, outside or on a convex polygon, given the vertices of the polygon in either clockwise or anticlockwise direction.
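The same-side test above can be sketched directly from the substitution formula. A minimal Python illustration (again my own names, not the linked C++ code); accepting either all-non-negative or all-non-positive signs is what makes the test work for vertices listed in either direction:

```python
def substitute_point_in_line(p1, p2, q):
    # Sign of (yq - y1)*(x2 - x1) - (xq - x1)*(y2 - y1):
    # > 0 left of p1 -> p2, 0 on the line, < 0 right of it
    return (q[1] - p1[1]) * (p2[0] - p1[0]) - (q[0] - p1[0]) * (p2[1] - p1[1])

def is_point_inside_convex_polygon(q, vertices):
    # vertices listed consistently (clockwise or anticlockwise)
    n = len(vertices)
    sides = [substitute_point_in_line(vertices[i], vertices[(i + 1) % n], q)
             for i in range(n)]
    # inside (or on an edge) iff q is on the same side of every edge
    return all(s >= 0 for s in sides) or all(s <= 0 for s in sides)

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(is_point_inside_convex_polygon((1, 1), square))   # True
print(is_point_inside_convex_polygon((3, 1), square))   # False
```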
As discussed previously, point in convex polygon also covers cases like:

Point in Trapezoid
Point in Parallelogram
Point in Rectangle

To sum up, we covered how to find the winding number for a polygon and use it to determine whether a point is inside, on or outside the polygon. We also saw a simpler solution that can be applied to convex polygons to solve PIP. Please reach out to me if you have any questions, and I hope you enjoyed the math.

https://en.wikipedia.org/wiki/Point_in_polygon#cite_note-5
http://www.eecs.umich.edu/courses/eecs380/HANDOUTS/PROJ2/InsidePoly.html
http://geomalgorithms.com/a03-_inclusion.html
Wm. Randolph Franklin, “PNPOLY — Point Inclusion in Polygon Test”
PowerShell - Alias
A PowerShell alias is another name for a cmdlet or for any command element. Use the New-Alias cmdlet to create an alias. In the example below, we've created an alias help for the Get-Help cmdlet.

New-Alias -Name help -Value Get-Help

Now invoke the alias.

help Get-WmiObject -Detailed

You will see the following output.

NAME
   Get-WmiObject

SYNOPSIS
   Gets instances of Windows Management Instrumentation (WMI) classes or
   information about the available classes.

SYNTAX
   Get-WmiObject [ ...

Use the Get-Alias cmdlet to list all the aliases present in the current PowerShell session.

Get-Alias

You will see the following output.

CommandType     Name       Definition
-----------     ----       ----------
Alias           %          ForEach-Object
Alias           ?          Where-Object
Alias           ac         Add-Content
Alias           asnp       Add-PSSnapIn
...
strrchr() function in C/C++ - GeeksforGeeks
12 Oct, 2021

strrchr() function: In C++, strrchr() is a predefined function used for string handling; cstring is the header file required for string functions. This function returns a pointer to the last occurrence of a character in a string. The character whose last occurrence we want to find is passed as the second argument to the function, and the string in which we have to find the character is passed as the first argument.

Syntax

char *strrchr(const char *str, int c)

Here, str is the string and c is the character to be located. It is passed as its int promotion, but it is internally converted back to char.

Application

Given a string in C++, we need to find the last occurrence of a character, let's say 'a'. Examples:

Input : string = 'This is a string'
Output : 9

Input : string = 'My name is Ayush'
Output : 12

Algorithm
1. Pass the given string to the strrchr() function and mention the character you need to locate.
2. The function returns a pointer; print the resulting value.

CPP

// C++ program to demonstrate working of strrchr()
#include <iostream>
#include <cstring>
using namespace std;

int main()
{
    char str[] = "This is a string";
    char* ch = strrchr(str, 'a');
    cout << ch - str + 1;
    return 0;
}

Output:

9

C Examples:

C

// C code to demonstrate the working of
// strrchr()
#include <stdio.h>
#include <string.h>

// Driver function
int main()
{
    // initializing variables
    char st[] = "GeeksforGeeks";
    char ch = 'e';
    char* val;

    // Use of strrchr()
    // returns "eks"
    val = strrchr(st, ch);
    printf("String after last %c is : %s \n", ch, val);

    char ch2 = 'm';

    // Use of strrchr()
    // returns null
    // test for null
    val = strrchr(st, ch2);
    printf("String after last %c is : %s ", ch2, val);

    return (0);
}

Output:

String after last e is : eks
String after last m is : (null)

Practical Application: Since it returns the entire string after the last occurrence of a particular character, it can be used to extract the suffix of a string.
For example, to recover the full numeric value of a denomination when we know its first digit. This example is demonstrated below.

C

// C code to demonstrate the application of
// strrchr()
#include <stdio.h>
#include <string.h>

// Driver function
int main()
{
    // initializing the denomination
    char denom[] = "Rs 10000000";

    // Printing original string
    printf("The original string is : %s", denom);

    // initializing the initial number
    char first = '1';
    char* entire;

    // Use of strrchr()
    // returns entire number
    entire = strrchr(denom, first);
    printf("\nThe denomination value is : %s ", entire);

    return (0);
}

Output:

The original string is : Rs 10000000
The denomination value is : 10000000

This article is contributed by Ayush Saxena and Vaishnavi Tripathi.
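For comparison only (this is my addition, not part of the original article): the same last-occurrence lookup in Python is str.rfind, which returns the index of the last match, or -1 when the character is absent, rather than a pointer or NULL.

```python
denom = "Rs 10000000"
idx = denom.rfind('1')                      # index of the last '1'
print(idx)                                  # 3
print(denom[idx:])                          # 10000000, like strrchr's returned suffix
print("This is a string".rfind('a') + 1)    # 9, the 1-based position used above
print("GeeksforGeeks".rfind('m'))           # -1 when the character is absent
```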
Building a Logistic Regression in Python | by Animesh Agarwal | Towards Data Science
Suppose you are given the scores of two exams for various applicants, and the objective is to classify the applicants into two categories based on their scores: Class-1 if the applicant can be admitted to the university, or Class-0 if the candidate can't be given admission. Can this problem be solved using Linear Regression? Let's check.

Note: I suggest you read Linear Regression before going ahead with this blog.

What is Logistic Regression?
Dataset Visualization
Hypothesis and Cost Function
Training the model from scratch
Model evaluation
Scikit-learn implementation

If you recall Linear Regression, it is used to determine the value of a continuous dependent variable. Logistic Regression is generally used for classification purposes. Unlike Linear Regression, the dependent variable can take only a limited number of values, i.e., the dependent variable is categorical. When the number of possible outcomes is only two, it is called Binary Logistic Regression.

Let's look at how logistic regression can be used for classification tasks. In Linear Regression, the output is the weighted sum of inputs. Logistic Regression is a generalized Linear Regression in the sense that we don't output the weighted sum of inputs directly; instead, we pass it through a function that can map any real value to a value between 0 and 1. If we took the weighted sum of inputs as the output, as we do in Linear Regression, the value could be more than 1, but we want a value between 0 and 1. That's why Linear Regression can't be used for classification tasks.

We can see from the figure below that the output of the linear regression is passed through an activation function that can map any real value between 0 and 1. The activation function that is used is known as the sigmoid function. The plot of the sigmoid function is an S-shaped curve: its value always lies between 0 and 1, and it is exactly 0.5 at X=0.
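These properties of the sigmoid are easy to verify numerically. A quick sanity check (my addition, not from the original post):

```python
import math

def sigmoid(z):
    # maps any real value into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))     # 0.5 exactly
print(sigmoid(10))    # ~0.99995, saturating towards 1
print(sigmoid(-10))   # ~0.000045, saturating towards 0
```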
We can use 0.5 as the probability threshold to determine the classes: if the probability is greater than 0.5, we classify the observation as Class-1 (Y=1), otherwise as Class-0 (Y=0).

Before we build our model, let's look at the assumptions made by Logistic Regression:

The dependent variable must be categorical
The independent variables (features) must be independent (to avoid multicollinearity)

The data used in this blog has been taken from Andrew Ng's Machine Learning course on Coursera. The data can be downloaded from here. The data consists of marks of two exams for 100 applicants. The target value takes on binary values 1, 0: 1 means the applicant was admitted to the university, whereas 0 means the applicant didn't get an admission. The objective is to build a classifier that can predict whether an applicant will be admitted to the university or not.

Let's load the data into a pandas DataFrame using the read_csv function. We will also split the data into admitted and non-admitted to visualize it. Now that we have a clear understanding of the problem and the data, let's go ahead and build our model.

Till now we have understood how Logistic Regression can be used to classify instances into different classes. In this section, we will define the hypothesis and the cost function. A Linear Regression model can be represented by the equation

h(x) = θᵀx

We then apply the sigmoid function to the output of the linear regression, where the sigmoid function is represented by

sigmoid(z) = 1 / (1 + e^(-z))

The hypothesis for logistic regression then becomes

h(x) = 1 / (1 + e^(-θᵀx))

If the weighted sum of inputs is greater than zero, the predicted class is 1, and vice-versa. So the decision boundary separating both classes can be found by setting the weighted sum of inputs to 0. Like Linear Regression, we will define a cost function for our model, and the objective will be to minimize the cost.
The cost function for a single training example can be given by

cost = -log(h(x))      if y = 1
cost = -log(1 - h(x))  if y = 0

Cost function intuition: If the actual class is 1 and the model predicts 0, we should penalize it heavily, and vice-versa. As you can see from the picture below, for the plot of -log(h(x)), as h(x) approaches 1 the cost is 0, and as h(x) nears 0 the cost tends to infinity (that is, we penalize the model heavily). Similarly for the plot of -log(1-h(x)): when the actual value is 0 and the model predicts 0 the cost is 0, and the cost becomes infinity as h(x) approaches 1. We can combine both equations into:

cost = -[y * log(h(x)) + (1 - y) * log(1 - h(x))]

The cost over all the training examples, denoted by J(θ), can be computed by taking the average over the cost of all m training samples:

J(θ) = -(1/m) * Σ [y * log(h(x)) + (1 - y) * log(1 - h(x))]

We will use gradient descent to minimize the cost function. The gradient with respect to any parameter θj can be given by

∂J(θ)/∂θj = (1/m) * Σ (h(x) - y) * xj

The equation is similar to what we obtained in Linear Regression; only h(x) differs between the two cases. Now we have everything in place we need to build our model. Let's implement this in code. Let's first prepare the data for our model.

X = np.c_[np.ones((X.shape[0], 1)), X]
y = y[:, np.newaxis]
theta = np.zeros((X.shape[1], 1))

We will define some functions that will be used to compute the cost.

def sigmoid(x):
    # Activation function used to map any real value between 0 and 1
    return 1 / (1 + np.exp(-x))

def net_input(theta, x):
    # Computes the weighted sum of inputs
    return np.dot(x, theta)

def probability(theta, x):
    # Returns the probability after passing through sigmoid
    return sigmoid(net_input(theta, x))

Next, we define the cost and the gradient function.
def cost_function(theta, x, y):
    # Computes the cost function for all the training samples
    m = x.shape[0]
    total_cost = -(1 / m) * np.sum(
        y * np.log(probability(theta, x)) + (1 - y) * np.log(
            1 - probability(theta, x)))
    return total_cost

def gradient(theta, x, y):
    # Computes the gradient of the cost function at the point theta
    m = x.shape[0]
    return (1 / m) * np.dot(x.T, sigmoid(net_input(theta, x)) - y)

Let's also define the fit function, which will be used to find the model parameters that minimize the cost function. In the Linear Regression blog, we coded the gradient descent approach to compute the model parameters. Here, we will use the fmin_tnc function from the scipy library. It can be used to compute the minimum for any function. It takes as arguments:

func: the function to minimize
x0: initial values for the parameters that we want to find
fprime: gradient of the function defined by 'func'
args: arguments that need to be passed to the functions

def fit(x, y, theta):
    opt_weights = fmin_tnc(func=cost_function, x0=theta,
                           fprime=gradient, args=(x, y.flatten()))
    return opt_weights[0]

parameters = fit(X, y, theta)

The model parameters are [-25.16131856 0.20623159 0.20147149]. To see how well our model performed, we will plot the decision boundary. As there are two features in our dataset, the linear equation can be represented by

θ0 + θ1·x1 + θ2·x2

As discussed earlier, the decision boundary can be found by setting this weighted sum of inputs to 0, which gives us

x2 = -(θ0 + θ1·x1) / θ2

We will plot our decision boundary on top of the plot we used for visualizing our dataset.

x_values = [np.min(X[:, 1] - 5), np.max(X[:, 2] + 5)]
y_values = -(parameters[0] + np.dot(parameters[1], x_values)) / parameters[2]
plt.plot(x_values, y_values, label='Decision Boundary')
plt.xlabel('Marks in 1st Exam')
plt.ylabel('Marks in 2nd Exam')
plt.legend()
plt.show()

It looks like our model has done a decent job of predicting the classes. But how accurate is it? Let's find out.
def predict(x):
    theta = parameters[:, np.newaxis]
    return probability(theta, x)

def accuracy(x, actual_classes, probab_threshold=0.5):
    predicted_classes = (predict(x) >= probab_threshold).astype(int)
    predicted_classes = predicted_classes.flatten()
    accuracy = np.mean(predicted_classes == actual_classes)
    return accuracy * 100

accuracy(X, y.flatten())

The accuracy of the model is 89%. Let's implement our classifier using scikit-learn and compare it with the model we built from scratch.

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

model = LogisticRegression()
model.fit(X, y)
predicted_classes = model.predict(X)
accuracy = accuracy_score(y.flatten(), predicted_classes)
parameters = model.coef_

The model parameters are [[-2.85831439, 0.05214733, 0.04531467]] and the accuracy is 91%. Why are the model parameters significantly different from those of the model we implemented from scratch? If you look at the documentation of scikit-learn's Logistic Regression implementation, it takes regularization into account. Basically, regularization is used to prevent the model from overfitting the data. I won't be getting into the details of regularization in this blog. But for now, that's it. Thanks for reading!!

The complete code used in this blog can be found in this GitHub repo. Please drop me a message if you are stuck anywhere or if you have any feedback.

In the next blog, we will use the concepts learned in this blog to build a classifier on the Adult data from the UCI Machine Learning Repository. The dataset contains close to 49K samples and includes categorical, numerical and missing values. This will be an interesting dataset to explore. Do watch this space for more.
Rexx - left String
This method returns a certain number of characters from the left of the string.

left(str, count)

str − The source string.
count − The number of characters to return from the left of the string.

/* Main program */
a = "Hello World"
say left(a,5)

When we run the above program, we will get the following result.

Hello
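Purely as a cross-language comparison (my addition, not from the tutorial): in Python the same operation is a prefix slice, and, assuming Rexx's usual blank-padding when count exceeds the string length, ljust reproduces that padding behaviour.

```python
a = "Hello World"
print(a[:5])                      # Hello, same as left(a,5) in Rexx
# When count exceeds the length, Rexx's left() pads with blanks;
# ljust gives the equivalent fixed-width result:
print(repr("Hi"[:5].ljust(5)))    # 'Hi   '
```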
Prettifying pandas DataFrames. Enhance your DataFrames by... | by Zolzaya Luvsandorj | Towards Data Science
Did you know that we can prettify pandas DataFrames by accessing the .style attribute? Here's an example where we styled a DataFrame such that it resembles a heatmap: After styling, it looks more obvious and intuitive to see positive and negative correlations as well as the strength of correlations. By colour-coding, we can make it easier to interpret and analyse the DataFrame. In this post, I will show 4 useful ways to prettify your DataFrame.

We will use the penguins dataset for this post. Let's import libraries and the data:

import numpy as np
import pandas as pd
pd.options.display.precision = 2
from seaborn import load_dataset

# Load sample data
columns = {'culmen_length_mm': 'length',
           'culmen_depth_mm': 'depth',
           'flipper_length_mm': 'flipper',
           'body_mass_g': 'mass'}
df = load_dataset('penguins').rename(columns=columns)
df.head()

When loading the data, column names are renamed for brevity. In order to style DataFrames, we need to access the .style attribute, which returns a Styler object:

type(df.style)

This Styler object creates an HTML table which can be further styled using CSS. In the sections to come, we will be using the Styler object's built-in methods as well as a little bit of CSS syntax to customise the formatting. We don't need to know CSS to style DataFrames as we will be making only a few CSS references. For that, cheatsheets like this can help us get the basics.

In the following sections, we will be chaining multiple methods one after another. This makes the code very long. To format the code in a more readable way, we will break the long code over a few lines and use () to wrap the code.

Let's start this section by looking at how the previous heatmap was created. We will use the .background_gradient() method to create a heatmap for the correlation matrix.

correlation_matrix = df.corr()
(correlation_matrix.style
 .background_gradient(cmap='seismic_r', axis=None))

Adding a background gradient takes only an extra line of code.
By passing axis=None, the colour gradients are applied along the entire table rather than within a specific axis. The name of the desired colour palette is passed to the cmap parameter. For this parameter, we can use any Matplotlib colourmap. Here's a useful tip for colourmaps: if you ever need to flip the colour scale, adding the _r suffix to the colourmap name will do the trick. For instance, if we used 'seismic' instead of 'seismic_r', negative correlations would have been blue and positive correlations would have been red.

The previous example doesn't look identical to the example shown at the beginning of this post. It needs a few more customisations to look the same:

(correlation_matrix.style
 .background_gradient(cmap='seismic_r', axis=None)
 .set_properties(**{'text-align': 'center',
                    'padding': '12px'})
 .set_caption('CORRELATION MATRIX'))

We center-aligned the values ({'text-align': 'center'}) and increased the row height ({'padding': '12px'}) with .set_properties(). Then, we added a caption above the table with .set_caption().

In this example, we have applied colour gradients to the background. We can also apply colour gradients to the text with .text_gradient():

(correlation_matrix.style
 .text_gradient(cmap='seismic_r', axis=None))

If useful, we can chain both types of gradients as well:

(correlation_matrix.style
 .background_gradient(cmap='YlGn', axis=None)
 .text_gradient(cmap='YlGn_r', axis=None))

Before we wrap up this section, I want to show one more useful example.
Let's imagine we had a simple confusion matrix:

# Create made-up predictions
df['predicted'] = df['species']
df.loc[140:160, 'predicted'] = 'Gentoo'
df.loc[210:250, 'predicted'] = 'Adelie'

# Create confusion matrix
confusion_matrix = pd.crosstab(df['species'], df['predicted'])
confusion_matrix

We can do a bit of a makeover to make it more useful and pretty:

(confusion_matrix.style
 .background_gradient('Greys')
 .set_caption('CONFUSION MATRIX')
 .set_properties(**{'text-align': 'center',
                    'padding': '12px',
                    'width': '80px'})
 .set_table_styles([{'selector': 'th.col_heading',
                     'props': 'text-align: center'},
                    {'selector': 'caption',
                     'props': [('text-align', 'center'),
                               ('font-size', '11pt'),
                               ('font-weight', 'bold')]}]))

This looks pretty, useful and minimalistic. Don't you love the look of this confusion matrix? Since we familiarised ourselves with the first 5 lines of the code in the previous examples, let's understand what the remaining code is doing:
◼️ .set_properties(**{'width': '80px'}): to increase column width
◼️ .set_table_styles([{'selector': 'th.col_heading', 'props': 'text-align: center'}]): to align column headers in the center
◼️ .set_table_styles([{'selector': 'caption', 'props': [('text-align', 'center'), ('font-size', '11pt'), ('font-weight', 'bold')]}]): to center-align the caption, increase its font size and bold it

Now, let's see how to add data bars to the DataFrame. We will first create a pivot table, then use .bar() to create data bars:

# Create a pivot table with missing data
pivot = df.pivot_table('mass', ['species', 'island'], 'sex')
pivot.iloc[(-2, 0)] = np.nan

# Style
pivot.style.bar(color='aquamarine')

This can be styled further just like in the previous examples:

(pivot.style
 .bar(color='aquamarine')
 .set_properties(padding='8px', width='50'))

Previously we got familiar with this format: .set_properties(**{'padding': '8px', 'width': '50'}). The code above shows an alternative way to pass your arguments to .set_properties().
If you have positive and negative values, you can format the data as follows by passing two colours (color=['salmon', 'lightgreen']) and aligning the bars in the middle (align='mid'):

# Style on toy data
(pd.DataFrame({'feature': ['a', 'b', 'c', 'd', 'e', 'f'],
               'coefficient': [30, 10, 1, -5, -10, -20]}).style
 .bar(color=['salmon', 'lightgreen'], align='mid')
 .set_properties(**{'text-align': 'center'})
 .set_table_styles([{'selector': 'th.col_heading',
                     'props': 'text-align: center'}]))

Here, we also made sure to center-align the column headers and the values.

There are times when highlighting values based on conditions can be useful. In this section, we will learn about a few functions to highlight special values. Firstly, we can highlight minimum values from each column like this:

pivot.style.highlight_min(color='pink')

There's an equivalent function for maximum values:

pivot.style.highlight_max(color='lightgreen')

We can chain these highlight functions together like this:

(pivot.style
 .highlight_min(color='pink')
 .highlight_max(color='lightgreen'))

There is also a function for highlighting missing values. Let's add it to the previous code snippet:

(pivot.style
 .highlight_min(color='pink')
 .highlight_max(color='lightgreen')
 .highlight_null(null_color='grey'))

These built-in functions are quite easy to use, aren't they? Let's look at two more functions before wrapping up this section. We can highlight values within a range like below:

pivot.style.highlight_between(left=3500, right=4500, color='gold')

We can also highlight quantiles:

pivot.style.highlight_quantile(q_left=0.7, axis=None, color='#4ADBC8')

Here, we've highlighted the top 30%.

We have used a few different colours so far. If you are wondering what other colour names you could use, check out this resource for colour names. As shown in the example above, you can also use hexadecimal colours, which will give you access to a wider range of options (over 16 million colours!).
Here's my favourite resource to explore hexadecimal colour codes.

In this last section, we will look at a few other ways to colour-code DataFrames using custom functions. We will use the following two methods to apply our custom styling functions:
◼️ .applymap(): elementwise
◼️ .apply(): column/row/tablewise

Let's create a small numerical dataset by slicing the top 8 rows from the numerical columns. We will use a lambda function to colour values above 190 blue and the rest grey:

df_num = df.select_dtypes('number').head(8)
(df_num.style
 .applymap(lambda x: f"color: {'blue' if x>190 else 'grey'}"))

Let's look at another example:

green = 'background-color: lightgreen'
pink = 'background-color: pink; color: white'
(df_num.style
 .applymap(lambda value: green if value>190 else pink))

We can convert the lambda function into a regular function and pass it to .applymap():

def highlight_190(value):
    green = 'background-color: lightgreen'
    pink = 'background-color: pink; color: white'
    return green if value > 190 else pink

df_num.style.applymap(highlight_190)

Let's see how we could do the same formatting using .apply():

def highlight_190(series):
    green = 'background-color: lightgreen'
    pink = 'background-color: pink; color: white'
    return [green if value > 190 else pink for value in series]

df_num.style.apply(highlight_190)

We can also chain them just like the previous functions:

(df_num.style
 .apply(highlight_190)
 .applymap(lambda value: 'opacity: 40%' if value<30 else None))

It's useful to know how to use both .apply() and .applymap(). Here's an example where we can use .apply() but not .applymap():

def highlight_above_median(series):
    is_above = series > series.median()
    above = 'background-color: lightgreen'
    below = 'background-color: grey; color: white'
    return [above if value else below for value in is_above]

df_num.style.apply(highlight_above_median)

We find the median value of each column and highlight values higher than the median in green and the rest in grey.
We can also style an entire row based on a condition with .apply():

def highlight(data):
    n = len(data)
    if data['sex'] == 'Male':
        return n * ['background-color: lightblue']
    if data['sex'] == 'Female':
        return n * ['background-color: lightpink']
    else:
        return n * ['']

df.head(6).style.apply(highlight, axis=1).hide_index()

Here, we have hidden the DataFrame's index with .hide_index() for a cleaner look. If needed, you can hide columns with .hide_columns() as well.

Lastly, most of the functions we looked at in this post take optional arguments to customise styling. The following two arguments are common and quite useful to know:
◼️ axis: along which axis to operate — columns, rows or the entire table
◼️ subset: to select a subset of columns to style

Hope you enjoyed learning about useful ways to prettify your DataFrames by colour-coding them. Styled DataFrames can help explore and analyse the data more easily and make your analysis more interpretable and attractive. If you are keen to learn more about styling, check out this useful documentation by pandas.

Would you like to access more content like this? Medium members get unlimited access to any articles on Medium. If you become a member using my referral link, a portion of your membership fee will directly go to support me.

Thank you for reading this article. If you are interested, here are links to some of my other posts on pandas:
◼️️ From pandas to PySpark
◼️️ Writing 5 common SQL queries in pandas
◼️️ Writing advanced SQL queries in pandas
◼️️ 5 tips for pandas users
◼️️ 5 tips for data aggregation in pandas
◼️️ How to transform variables in a pandas DataFrame
◼️️ 3 easy ways to reshape pandas DataFrame
◼️ 3 easy ways to crosstab in pandas

Bye for now 🏃 💨
Clustering the US population: observation-weighted k-means | by Carl Anderson | Towards Data Science
Thanks to Chris Decker, who provided the following info: "For anyone discovering this post in recent years: scikit-learn implemented a 'sample_weight' parameter in KMeans as of 0.20.0 in 2018. No need to roll your own anymore."

In this post, I detail a form of k-means clustering in which weights are associated with individual observations.

k-means clustering has been a workhorse of machine learning for almost 60 years. It is simple, effective, and intuitive. One of the reasons that I love it is that you can plot cluster assignments over time and see it learning. However, it doesn't scale particularly well. On a laptop, can you cluster 10,000 points? Sure, no problem. A million? Maybe, but it will be slow. A hundred million points? Fuhgeddaboutit!

Imagine that we wanted to cluster the U.S. population. Why? We could set k=48 and determine what the lower 48 states might look like based on current centers of population. Perhaps we want to set up k=1000 distribution centers across the country. Maybe we want to set k=10 million Starbucks locations. The U.S. population has more than 300 million people. How would you cluster them using k-means? Ignoring parallel k-means, let's constrain it to run on a single laptop in, say, less than 3 minutes. Ideas?

One thing that you could do is sample the data; that is, run with a reduced, hopefully representative, subset of the data. Another approach would be to aggregate the data, drastically reducing the number of points, but associate each point with the original sample size that it represents. In other words, weight each point. Where might we get such a dataset? The U.S. Census Bureau. They provide a very convenient dataset consisting of the number of inhabitants for each Zip code. There are about 43,000 Zip codes in the U.S., a number we can comfortably cluster on a laptop.
Imagine then, we have a data file consisting of Zip code, a latitude-longitude pair (which are the x-y coordinates that k-means works on), and the number of inhabitants in that Zip (the weight):

"zip","state","latitude","longitude","population"
"00601","PR",18.180103,-66.74947,19143
"00602","PR",18.363285,-67.18024,42042
...
"99929","AK",56.409507,-132.33822,2424
"99950","AK",55.875767,-131.46633,47

Surely we can go to scikit-learn or R or another major machine learning library and run some weighted k-means algorithm. Unfortunately, not. There are weighted k-means implementations in a few of those libraries, but they are not the sort that we want. They provide weights not for the observations but for the features. That is, with feature weights we could specify that latitude should influence the centroids more than longitude. However, we want to specify observation weights such that Zip code 10012 (a dense Manhattan Zip) has far greater draw on a centroid than 59011 (a vast, low-density region in Montana).

The core idea is very intuitive. Take four equally-weighted points in the (x,y) plane, and the centroid is (mean(x), mean(y)). If we apply weights, say w=(13,7,4,4), then the point with weight 13 has far greater gravity and should draw the cluster center much closer to it. The weighted centroid is now:

(weighted.mean(x,w), weighted.mean(y,w))

The distance metric can be our usual Euclidean distance. However, as we are dealing with latitude-longitude pairs, the correct as-the-crow-flies distance metric is something called the Haversine distance, so we'll use that. I was not able to find an implementation in a major library. Even stackoverflow failed me. One suggestion from there was to replicate the data points. That is, if the weighted data were as shown (top), one would create the equivalent unweighted dataset (below): Obviously, this has the problem of scaling up the data — the exact thing we wanted to fix — and this only works with small integer weights.
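The weighted-centroid idea can be checked directly with NumPy's np.average. A quick sketch using the w=(13, 7, 4, 4) example above, with made-up coordinates (the unit-square corners):

```python
import numpy as np

# four points in the (x, y) plane and their observation weights
x = np.array([0.0, 1.0, 0.0, 1.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = np.array([13, 7, 4, 4])

# (weighted.mean(x, w), weighted.mean(y, w))
centroid = (np.average(x, weights=w), np.average(y, weights=w))

# The unweighted centroid would be (0.5, 0.5); the weight-13
# point at (0, 0) pulls the weighted centroid towards itself.
print(round(float(centroid[0]), 3), round(float(centroid[1]), 3))   # 0.393 0.286
```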
Instead, I found a nice “regular” k-means example from stackexchange (thanks Gareth Rees!) which I stripped down and modified to be observation-weighted:

import random
import numpy as np
import pandas as pd
import scipy.spatial
from haversine import haversine

def distance(p1, p2):
    return haversine(p1[1:], p2[1:])

def cluster_centroids(data, clusters, k):
    results = []
    for i in range(k):
        results.append(
            np.average(data[clusters == i],
                       weights=np.squeeze(np.asarray(data[clusters == i][:, 0:1])),
                       axis=0))
    return results

def observation_weighted_kmeans(data, k=None, centroids=None, steps=20):
    # Forgy initialization method: choose k data points randomly.
    centroids = data[np.random.choice(np.arange(len(data)), k, False)]
    for _ in range(max(steps, 1)):
        sqdists = scipy.spatial.distance.cdist(centroids, data,
                                               lambda u, v: distance(u, v)**2)
        # Index of the closest centroid to each data point.
        clusters = np.argmin(sqdists, axis=0)
        new_centroids = cluster_centroids(data, clusters, k)
        if np.array_equal(new_centroids, centroids):
            break
        centroids = new_centroids
    return clusters, centroids

# setup
data = pd.read_csv("us_census.csv")
data = data[~data['state'].isin(['AK', 'HI', 'PR'])]
vals = data[['population', 'latitude', 'longitude']].values
k = 3
random.seed(42)

# run it
clusters, centroids = observation_weighted_kmeans(vals, k)

# output
data['c'] = [int(c) for c in clusters]
lats = [centroids[i][1] for i in range(k)]
data['clat'] = data['c'].map(lambda x: lats[x])
longs = [centroids[i][2] for i in range(k)]
data['clong'] = data['c'].map(lambda x: longs[x])
data.to_csv("clustered_us_census.csv", index=False)

On an old MacBook Air, run time ranged from 2 seconds (k=1) to 160 seconds (k=48). Where is the center of mass of the U.S. population (i.e., k=1)? That appears to be in southeast Missouri. And for k=10? (The original post shows the resulting maps.) Finally, if we were to “redistrict” our states based on population density then we see a very different picture. At least 5 states would be divvied out to neighbors and CA and TX split into four or five regions.
Update (2017–05–09): here is the full source code: https://github.com/leapingllamas/medium_posts/tree/master/observation_weighted_kmeans Update (2018–06–10): there are some bugs in the medium post code shown above. However, the github code works great. To be clear, make sure that you are running: https://github.com/leapingllamas/medium_posts/blob/master/observation_weighted_kmeans/medium.py P.S. if you are going to be running clustering on some subset of the data, say for districts within a single state, you will need to make sure that you adjust the centroids initialization. That is, ensure that the initial centroids are randomly distributed across the area of interest (not across the broader US as in the example code) otherwise they will end up in a single cluster.
AWS SageMaker. Build, Train, Tune, and Deploy a ML... | by Vysakh Nair | Towards Data Science
Let’s start with a short and simple introduction to SageMaker to understand what we are working with, and later we’ll dive into the ML tutorial!

Amazon SageMaker is a cloud machine-learning platform (just like your Jupyter notebook environment, but in the cloud) that helps users build, train, tune, and deploy machine learning models in a production-ready hosted environment. Its main advantages are:

Highly scalable
Fast training
Maintains uptime — the process keeps running without any stoppage
High data security

SageMaker comes with a lot of built-in, optimized ML algorithms which are widely used for training purposes. Now, to build a model we need data. We can either collect and prepare training data ourselves or choose from Amazon S3 buckets, the storage service (kind of like the hard drives in your system) used with SageMaker. Let’s see how we can make use of this service to build an end-to-end ML project. The main focus of this tutorial will be on working with SageMaker and the libraries used. There won’t be any explanation of ML concepts.

NOTE: You should have an AWS account to perform these tasks.

Just like you create a Jupyter notebook on your system, we will be creating one on the platform. Below are the steps for doing the same:

Sign into the AWS SageMaker Console.
Click on Notebook Instances and then choose create notebook instance.
On the next page, name your notebook, keep the instance type and elastic inference as default, and select the IAM role for your instance.

IAM (Identity and Access Management) Role: In short, SageMaker and S3 buckets are services provided by AWS. Our notebook instance needs the data that we store in the S3 bucket to build the model. A service can’t directly access another service in AWS, so a role must be provided so that the notebook instance can access data from the S3 bucket. You can give your instance access to specific S3 buckets or to all of them.
After creating the role, click on create notebook instance. It takes a couple of minutes for the instance to get created. After that, click on Jupyter and select the notebook environment that you want to work with. There you have it: your notebook has been created.

In this session, we will look into all the libraries required to perform the task. As I mentioned before, AWS contains a lot of inbuilt ML algorithms which we can use. To use those algorithms we need the sagemaker library. All these built-in algorithms come in the form of image containers, so get_image_uri helps us access those containers. If you are using SageMaker, you need the boto3 library. Just like you use pandas to read data from your local system, boto3 helps us access data from the S3 buckets, if access to those buckets has been granted (remember the IAM role?). Now, if we want to use a SageMaker instance, we have to create sessions to do so. The Session library is used for creating sessions.

S3 buckets can be created manually or from our notebook instance using boto3. In this tutorial we will be using boto3 to create one. In AWS there are multiple regions and each user works in their own region. By default, the bucket is created in the US East (N. Virginia) Region, therefore if your region is other than us-east-1, you have to explicitly specify your region while creating the bucket.

my_region = boto3.session.Session().region_name # this gives you your region
s3.create_bucket(Bucket=bucket_name,
                 CreateBucketConfiguration={ 'LocationConstraint': my_region })
# this is how you explicitly add the location constraint

Bucket names are GLOBALLY unique! AWS will give you the ‘IllegalLocationConstraintException’ error if you collide with an already existing bucket and you’ve specified a region different than the region of the already existing bucket. If you happen to guess the correct region of the existing bucket, it will give you the BucketAlreadyExists exception.
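The region caveat above can be wrapped in a small helper that builds the arguments for the create_bucket call, adding the LocationConstraint only outside us-east-1. A sketch (the helper name is ours, not part of boto3):

```python
# Sketch: build keyword arguments for s3.create_bucket() so that
# CreateBucketConfiguration is only added when the region is not
# the default us-east-1, as described above.
def create_bucket_kwargs(bucket_name, region):
    kwargs = {"Bucket": bucket_name}
    if region != "us-east-1":  # the default region needs no constraint
        kwargs["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return kwargs

print(create_bucket_kwargs("my-sagemaker-bucket", "eu-west-1"))
# usage would then be: s3.create_bucket(**create_bucket_kwargs(bucket_name, my_region))
```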
Along with that, there are some naming conventions which have to be kept in mind while naming buckets:

Bucket names must be between 3 and 63 characters long.
Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-).
Bucket names must begin and end with a letter or number.
Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
Bucket names can’t begin with xn-- (for buckets created after February 2020).
Bucket names must be unique within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud [US] Regions).
Buckets used with Amazon S3 Transfer Acceleration can’t have dots (.) in their names. For more information about transfer acceleration, see Amazon S3 Transfer Acceleration.

We will first divide our data into train and test, then load it into S3. An important step to keep in mind while using SageMaker is that its in-built algorithms expect the dependent feature to be the first column of the dataset. So if your dataset’s first column is not the dependent feature, make sure that you change it. s3_input_train and s3_input_test contain the paths of the uploaded train and test data in the S3 bucket, which will be used later while training.

The container retrieves the inbuilt XGB model by specifying the region name. The Estimator handles the end-to-end Amazon SageMaker training and deployment tasks when we specify the algorithm that we want to use under image_uri. The s3_input_train and s3_input_test specify the locations of the train and test data in the S3 bucket. We identified these paths in step 4.

xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

The trained model can then be deployed using the above line of code. The initial_instance_count specifies the number of instances that should be used while predicting.
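Returning to the bucket naming rules listed earlier in this section, a few of them can be checked programmatically before calling AWS. A sketch (not exhaustive — it covers length, allowed characters, the begin/end rule, the xn-- rule, and the no-IP-address rule):

```python
# Sketch: partial validation of S3 bucket names against the
# naming rules listed above. Uniqueness and the Transfer
# Acceleration rule cannot be checked locally.
import re
import ipaddress

def looks_like_valid_bucket_name(name):
    if not (3 <= len(name) <= 63):
        return False
    # lowercase letters, numbers, dots, hyphens; starts/ends alphanumeric
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    if name.startswith("xn--"):
        return False
    try:
        ipaddress.ip_address(name)  # e.g. "192.168.5.4" is not allowed
        return False
    except ValueError:
        return True

print(looks_like_valid_bucket_name("my-ml-bucket"))  # True
print(looks_like_valid_bucket_name("192.168.5.4"))   # False
```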
The more instances you use, the faster the prediction. The above code can be used for predicting the result.

In this step, you terminate all the resources you used. Terminating resources that are not actively being used reduces costs and is a best practice. Not terminating your resources will result in charges to your account.

# Delete your deployed end points
xgb_predictor.delete_endpoint()
xgb_predictor.delete_model()

# Delete your S3 bucket
bucket_to_delete = boto3.resource('s3').Bucket(bucket_name)
bucket_to_delete.objects.all().delete()

Finally, stop and delete your SageMaker notebook:

Open the SageMaker Console.
Under Notebooks, choose Notebook instances.
Choose the notebook instance that you created for this tutorial, then choose Actions, Stop. The notebook instance takes up to several minutes to stop. When Status changes to Stopped, move on to the next step.
Choose Actions, then Delete.
Choose Delete.

There you have it! This is how you can build an end-to-end ML model using AWS SageMaker. The entire code for this tutorial can be accessed from my GitHub. Feel free to connect with me on LinkedIn. Hope you enjoyed this tutorial. Thanks for reading :)
How to Update multiple documents in a MongoDB collection using Java?
Using the updateMany() method you can update all the documents of a collection.

db.COLLECTION_NAME.updateMany(<filter>, <update>)

In Java, the com.mongodb.client.MongoCollection interface provides you a method with the same name. Using this method you can update multiple documents in a collection at once; to this method you need to pass the filter and the values for the update.

import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.bson.Document;
import org.bson.conversions.Bson;
import com.mongodb.MongoClient;

public class UpdatingMultipleDocuments {
   public static void main( String args[] ) {
      // Creating a Mongo client
      MongoClient mongo = new MongoClient( "localhost" , 27017 );
      // Connecting to the database
      MongoDatabase database = mongo.getDatabase("myDatabase");
      // Creating a collection object
      MongoCollection<Document> collection = database.getCollection("myCollection");
      // Preparing documents
      Document document1 = new Document("name", "Ram").append("age", 26).append("city", "Hyderabad");
      Document document2 = new Document("name", "Robert").append("age", 27).append("city", "Delhi");
      Document document3 = new Document("name", "Rahim").append("age", 30).append("city", "Delhi");
      // Inserting the created documents
      List<Document> list = new ArrayList<Document>();
      list.add(document1);
      list.add(document2);
      list.add(document3);
      collection.insertMany(list);
      System.out.println("List of the documents: ");
      FindIterable<Document> iterDoc = collection.find();
      Iterator it = iterDoc.iterator();
      while (it.hasNext()) {
         System.out.println(it.next());
      }
      // Updating multiple documents
      Bson filter = new Document("city", "Delhi");
      Bson newValue = new Document("city", "Vijayawada");
      Bson updateOperationDocument = new Document("$set", newValue);
      collection.updateMany(filter, updateOperationDocument);
      System.out.println("Document update successfully...");
      System.out.println("List of the documents after update");
      iterDoc = collection.find();
      it = iterDoc.iterator();
      while (it.hasNext()) {
         System.out.println(it.next());
      }
   }
}

List of the documents:
Document{{_id=5e88a61fe7a0124a4fc51b2c, name=Ram, age=26, city=Hyderabad}}
Document{{_id=5e88a61fe7a0124a4fc51b2d, name=Robert, age=27, city=Delhi}}
Document{{_id=5e88a61fe7a0124a4fc51b2e, name=Rahim, age=30, city=Delhi}}
Document update successfully...
List of the documents after update
Document{{_id=5e88a61fe7a0124a4fc51b2c, name=Ram, age=26, city=Hyderabad}}
Document{{_id=5e88a61fe7a0124a4fc51b2d, name=Robert, age=27, city=Vijayawada}}
Document{{_id=5e88a61fe7a0124a4fc51b2e, name=Rahim, age=30, city=Vijayawada}}
React Native - ListView
In this chapter, we will show you how to create a list in React Native. We will import List in our Home component and show it on screen.

App.js

import React from 'react'
import List from './List.js'

const App = () => {
   return (
      <List />
   )
}
export default App

To create a list, we will use the map() method. This will iterate over an array of items, and render each one.

List.js

import React, { Component } from 'react'
import { Text, View, TouchableOpacity, StyleSheet } from 'react-native'

class List extends Component {
   state = {
      names: [
         { id: 0, name: 'Ben' },
         { id: 1, name: 'Susan' },
         { id: 2, name: 'Robert' },
         { id: 3, name: 'Mary' }
      ]
   }
   alertItemName = (item) => {
      alert(item.name)
   }
   render() {
      return (
         <View>
            {
               this.state.names.map((item, index) => (
                  <TouchableOpacity
                     key = {item.id}
                     style = {styles.container}
                     onPress = {() => this.alertItemName(item)}>
                     <Text style = {styles.text}>
                        {item.name}
                     </Text>
                  </TouchableOpacity>
               ))
            }
         </View>
      )
   }
}
export default List

const styles = StyleSheet.create ({
   container: {
      padding: 10,
      marginTop: 3,
      backgroundColor: '#d9f9b1',
      alignItems: 'center',
   },
   text: {
      color: '#4f603c'
   }
})

When we run the app, we will see the list of names. You can click on each item in the list to trigger an alert with the name.
Predicting Battery Lifetime with CNNs | by Hannes Knobloch | Towards Data Science
This article was written by Hannes Knobloch, Adem Frenk, and Wendy Chang. You can find the source code of this project on GitHub: github.com

Lithium-ion batteries power almost every electronic device in our lives, including phones and laptops. They’re at the heart of renewable energy and e-mobility. For years companies have tried to predict how many charging cycles a battery will last before it dies. Better predictions would enable more accurate quality assessment and improve long-term planning. But that’s difficult, because every battery ages differently, depending on its usage and conditions during manufacturing. A recent paper, called Data-driven prediction of battery cycle life before capacity degradation, by Kristen A. Severson et al., claims to have found the key to solve this problem by “combining comprehensive experimental data and artificial intelligence”. Even though their results are ahead of traditional methods, the team focused more on their domain knowledge in electrical engineering than on the machine learning part. The good news: the researchers have made the dataset, the largest of its kind, publicly available! Although the dataset is limited to measurements from new batteries in a lab setting, it is still the most comprehensive of its kind. We used a more sophisticated machine learning approach to build a more versatile and accurate model for predicting battery lifetime (under these circumstances). Here you can see it in action: www.ion-age.org/example. To understand fully what the model does, read on! This article describes our research process from start to finish. Additionally, we describe our workflow and tools to give beginner data scientists inspiration for their own projects.

The authors of the paper focused on brand-new batteries and predicted their lifetime. That’s impressive, but if you want to diagnose a battery that’s already been in use, you’re out of luck.
Instead we want to predict the remaining cycle life of any battery, used and new. Another caveat is the amount of data needed for a prediction. The researchers used data from the first and the hundredth charging cycle for the prediction. Our goal is to get accurate results with measurements from only 20 consecutive charging cycles, making the model much more applicable in the real world. On top of that, it would be useful to know the current age of a battery. This leads us to the following problem definition: Given measurements during a limited amount of charging cycles, how many cycles has a battery cell lived through and how many cycles will it last before it breaks? Let’s walk through the steps to build a model and predict this! The authors of the original paper assembled 124 lithium-ion battery cells to measure data from. Each cell is charged and discharged according to one of many predetermined policies, until the battery reaches 80 percent of its original capacity (meaning the battery has become too unreliable for normal use and is considered “broken”). The number of cycles (charging fully, then discharging fully) until this state is reached is called the battery cycle life, one of our targets. Within our dataset this number varies wildly from 150 to 2,300. The data for each cell is presented in a nested structure, with some features only measured once per cycle and others multiple times. Over a full cycle, we have more than a thousand measurements for capacity, temperature, voltage, and current, but only one scalar measurement for other metrics such as internal resistance of the cell or the total cycle time. Since a big chunk of the measurements is taken during the experimentally controlled charging policy (which varies from cell to cell), we crop the data to the discharging period (which is comparable across all cells). This brings us to the next step... Raw measurement data can be extremely noisy. 
Distances between measurements are not always equal, data that’s supposed to decrease monotonically increases unexpectedly, and sometimes the hardware just shuts off and continues measuring at a random point in time. So we took special care that the data was clean and in the correct format before feeding it to a model. We removed cycles that had time gaps, small outliers, or other inconsistencies. One particularly useful tool we found for smoothing out noise is the Savitzky-Golay filter. This helped us recover some data samples that had measuring issues during the experiments.

Another problem in the data was time. The different charging policies meant that some cycles were finished quicker than others, and the time measurements of charge and temperature couldn’t be compared as they were. Instead we resampled the data in a similar fashion as the original paper did:

1. Take the voltage range during discharging as the reference instead of time! For this cell model, 3.6V and 2.0V always correspond to fully charged and discharged. This range stays constant, even when time doesn’t.
2. Interpolate charge and temperature over voltage.
3. Resample charge and temperature at 1000 equidistant voltage steps.

Done! All measurements now had the same length for every cell and cycle, but we still had some features with 1000 steps and others only as scalars. How do we avoid a shape mismatch when feeding array features and scalar features into our model at the same time? One solution is to feed the data into the model at different entry points and bring everything together later. This trick will become clearer when we talk about the model in detail.
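The interpolate-and-resample steps above can be sketched with NumPy. The measurement arrays here are synthetic stand-ins, not the real battery data; np.interp expects increasing x values, so the (decreasing) discharge voltage is flipped before interpolating:

```python
# Sketch of the voltage-based resampling described above, on
# synthetic data: interpolate charge and temperature over voltage,
# then resample at 1000 equidistant voltage steps.
import numpy as np

# fake raw measurements: voltage falls from 3.6 V to 2.0 V during
# discharge; charge and temperature are sampled alongside it
voltage = np.linspace(3.6, 2.0, 873)
charge = np.linspace(1.1, 0.0, 873)
temperature = 30 + 2 * np.random.rand(873)

# 1000 equidistant voltage steps, shared by every cell and cycle
v_grid = np.linspace(3.6, 2.0, 1000)

# np.interp needs increasing x, so reverse, interpolate, reverse back
charge_rs = np.interp(v_grid[::-1], voltage[::-1], charge[::-1])[::-1]
temp_rs = np.interp(v_grid[::-1], voltage[::-1], temperature[::-1])[::-1]

print(charge_rs.shape, temp_rs.shape)  # (1000,) (1000,)
```

After this step every cycle yields fixed-length arrays, regardless of how long the discharge actually took.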
There was just one more thing we needed to do. To be able to detect a trend, we took multiple consecutive charging cycles as input. These groups of cycles we call windows. There should always be just one target for the whole window, but every cycle has a “current cycle” and “remaining cycles” value. Which is why we defined the values from the last cycle to be the targets for the whole window. Let’s finally move to the (more) fun part!

Before we could dive into the data and create cool models, we needed to think about our setup. We wanted to use TensorFlow 2.0 from start to finish to profit from integrated functionality like TensorBoard, the dataset API, and hyperparameter tuning. After choosing the framework, we decided what platform we should run our training jobs on. Instead of overheating our own laptops, we went with AI Platform from Google Cloud. AI Platform allowed us to run several training jobs at the same time, label them easily, and monitor the process. This requires some setup, which can take quite some time to get right the first time, so we won’t go into all the details in this article. Here’s a summary:

1. Create an account and install the google cloud sdk on your machine
2. Upload your data to a google cloud bucket
3. Write one central python script that runs a job (load the data, load the model, train the model, save the results)
4. Make sure your project and folder structure are properly set up for AI Platform

Now we were able to start a training job from the command line with the option to modify almost everything on the fly.
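Coming back to the windows defined above, the scheme can be sketched in a few lines of Python (the cycle counts are illustrative, not taken from the dataset):

```python
# Sketch of the windowing described earlier: group consecutive
# cycles into fixed-size windows and take the targets ("current
# cycle" and "remaining cycles") from the last cycle of each window.
def make_windows(cycles, window_size):
    """cycles: list of dicts with 'current' and 'remaining' keys."""
    windows = []
    for start in range(len(cycles) - window_size + 1):
        window = cycles[start:start + window_size]
        # the last cycle's values become the target for the whole window
        target = (window[-1]["current"], window[-1]["remaining"])
        windows.append((window, target))
    return windows

# a toy cell that lives 100 cycles, windows of 20 consecutive cycles
cell = [{"current": i, "remaining": 100 - i} for i in range(1, 101)]
windows = make_windows(cell, window_size=20)
print(len(windows), windows[0][1])  # 81 (20, 80)
```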
We could adjust things like number of epochs, batch size, shuffling, checkpoint saving and even switch between model architectures easily, by adding a flag after the command. This allowed us to iterate fast, test different theories, and burn through a lot of (free) credits. We built our model with tf.Keras using the functional API. We feed the array and scalar features into the model at separate entry points, so we can do different things to them before bringing them back together. The array features in each window we concatenate along their short side to make them a 3D matrix with shape (window size, length, number of features). We can then pass this matrix through three Conv2D layers with MaxPooling to extract relevant information from it while keeping the sequential nature of the window. The Conv2D acts on the “number of features” dimension as if these were the number of color channels in an image. This works because the array features all share the same voltage range and thus are highly correlated (just like RGB channels in an image). After the convolutions we flatten the data to a 1D array. We concatenate the scalar features in a similar fashion in the window direction to produce an array with shape (window size, number of features), before passing it through two Conv1D layers with one MaxPooling and flattening it in the end. We now have two flat arrays with feature mappings which we can easily put together and feed into a fully connected dense network to produce our result. After we had built our model, it was time to train it.
We wrote a script to call the GCP API in a simple command-line interface, so while you’re in the project’s main directory, starting a training job in the cloud becomes super easy:

./train.sh

If, for example, we wanted to modify the number of training epochs and the number of samples per window, our script would allow us to do this with simple flags:

./train.sh -e 70 -w 10

During training we tracked three metrics across both train (orange) and validation (blue) set: the loss, and the mean absolute error (MAE) of current cycles as well as remaining cycles. After a few minutes we can look at the results in TensorBoard. Let’s see what the loss looks like: it’s going in the right direction, but we were not happy with the gap between train and validation loss. To decrease that gap, dropout is a popular tool, so we added it to the model. We also needed to tune our hyperparameters, which is why we used grid search over different setups. To keep track of these setups, we used the hparams module, which from TensorFlow 2.0 on you can find in tensorboard.plugins. Now we were able to compare different runs and pick the parameters that work best. Since the correct predictions for “current” and “remaining cycles” should always be bigger than zero, we tried ReLU as an activation function for the output layer to reduce the search space for our model during the training process. Additionally, our model heavily relies on CNNs, so we also tried different kernel sizes. Finally, we tested two different learning rates, and we measured the MAE of current cycles and remaining cycles of the setups. With the best model setup we obtained from hyperparameter tuning, and by setting the number of training epochs to 1000, we ended up with a model that’s at 90 MAE for the current and 115 MAE for the remaining cycles. It’s still not perfect, but we were pretty happy with the results for the application we had in mind (and we finally got some food to celebrate!).
When we looked at the training curve for our best setup, we could see that the lowest loss wasn’t at the end of training, but somewhere around three quarters through training. How do we use the model as it was at that point for our predictions? We had to implement checkpoints that allow us to restore a saved model at certain times during training. Once we have a model, we can serve it with TensorFlow Serving or a web-framework such as Flask. At the time Google Cloud Platform didn’t support TF2 Serving, so we decided to build the app completely in Flask and host it on an AWS EC2 instance. You can see the results again here: www.ion-age.org/example After loading a random sample file, you can preview the data that we’re going to predict on and you’ll find our two linear and three scalar features. Hitting the predict button produces a graph that shows you our two targets: current and remaining cycles. That’s it! That’s all you need for an algorithm that can accurately predict the age and expected lifetime of any lithium-ion battery, given the proper measurement data. If you have any suggestions or questions or you found a bug, feel free to leave a comment. We are happy to answer questions! We created this project during Data Science Retreat, a three-month Data Science bootcamp in Berlin.
Count of only repeated element in a sorted array of consecutive elements in C++
We are given an array of consecutive numbers of length n. The array has only one number which is repeated more than once. The goal is to get the number of times that element is repeated in the array; in other words, to find the length of the repeated run in the array. We will traverse the array from i=0 to i<n-1. Whenever arr[i]==arr[i+1], increment count. At the end, increment count by 1 for the last element of the run. Count will then hold the length of the repeated element. Let’s understand with examples.

Input − arr[] = { 0,1,2,3,3,3 }, N=6
Output − Count of only repeated element − 3
Explanation − 3 is repeated thrice here.

Input − arr[] = { 1,2,3,4,4,4,4,4,5,6 }, N=10
Output − Count of only repeated element − 5
Explanation − 4 is repeated 5 times here.

We take an integer array arr[] initialized with consecutive numbers where one number is repeated.
Variable len stores the length of the array.
Function findRepeat(int arr[], int n) takes an array and its length as input and displays the repeated element value and the length of the repeated run.
Take the initial count as 0.
Starting from index i=0 to i<n-1: if arr[i]==arr[i+1], increment count and store the element in variable value.
At the end of the loop, increment count by 1 for the last element.
Display the element which is repeated as value.
Display the number of repetitions as count.
#include <bits/stdc++.h>
using namespace std;
void findRepeat(int arr[], int n){
   int count = 0;   // count of repeated element
   int value = 0;   // to store repeated element
   for(int i = 0; i < n - 1; i++){   // i < n-1 avoids reading arr[i+1] out of bounds
      if(arr[i] == arr[i+1]){
         count++;
         value = arr[i];
      }
   }
   count++;   // for the last element of the run
   cout << "Repeated Element: " << value;
   cout << endl << "Number of occurrences: " << count;
}
int main(){
   int Arr[] = { 2,3,4,5,5,5,6,7,8 };
   int len = sizeof(Arr)/sizeof(Arr[0]);
   findRepeat(Arr, len);
   return 0;
}

If we run the above code it will generate the following output −

Repeated Element: 5
Number of occurrences: 3
Scaffold in Android using Jetpack Compose
28 Jul, 2021

There are a lot of apps that contain a TopAppBar, Drawer, Floating Action Button, BottomAppBar (in the form of bottom navigation), and Snackbar. While you can individually set up all of these in an app, that takes a lot of setup. Jetpack Compose provides the Scaffold composable, which can save a lot of time. It’s like a prebuilt template. In this article, we will see how to set up Scaffold in Android with Jetpack Compose. We will be building a basic app that will demonstrate the Scaffold composable; here is a video showing the app.

Prerequisites:

Knowledge of Kotlin.
Knowledge of Jetpack Compose.

Step 1: Creating TopAppBar

Open MainActivity.kt and create a TopBar composable function. It will be a wrapper for our TopAppBar in Scaffold.

Kotlin

// A function which will receive a
// callback to trigger opening the drawer
@Composable
fun TopBar(onMenuClicked: () -> Unit) {
    // TopAppBar Composable
    TopAppBar(
        // Provide Title
        title = {
            Text(text = "Scaffold||GFG", color = Color.White)
        },
        // Provide the navigation Icon (Icon on the left to toggle drawer)
        navigationIcon = {
            Icon(
                imageVector = Icons.Default.Menu,
                contentDescription = "Menu",
                // When clicked trigger onClick
                // Callback to trigger drawer open
                modifier = Modifier.clickable(onClick = onMenuClicked),
                tint = Color.White
            )
        },
        // background color of topAppBar
        backgroundColor = Color(0xFF0F9D58)
    )
}

Step 2: Create BottomAppBar

Open MainActivity.kt and create a BottomBar composable. It will be simple and straightforward in our app.

Kotlin

@Composable
fun BottomBar() {
    // BottomAppBar Composable
    BottomAppBar(
        backgroundColor = Color(0xFF0F9D58)
    ) {
        Text(text = "Bottom App Bar", color = Color.White)
    }
}

Step 3: Create Drawer content

Open MainActivity.kt and create a composable Drawer. It will be the drawer in our Scaffold.
Kotlin

@Composable
fun Drawer() {
    // Column Composable
    Column(
        Modifier
            .background(Color.White)
            .fillMaxSize()
    ) {
        // repeat is a loop which
        // takes count as argument
        repeat(5) { item ->
            Text(
                text = "Item number $item",
                modifier = Modifier.padding(8.dp),
                color = Color.Black
            )
        }
    }
}

Step 4: Creating Body part of Scaffold

Create another composable function Body. It will be a simple Text composable in our app. Be sure to customize it when implementing it in other apps.

Kotlin

@Composable
fun Body() {
    Column(
        verticalArrangement = Arrangement.Center,
        horizontalAlignment = Alignment.CenterHorizontally,
        modifier = Modifier
            .fillMaxSize()
            .background(Color.White)
    ) {
        Text(text = "Body Content", color = Color(0xFF0F9D58))
    }
}

Since all the components we need are done, let's work on the Scaffold part.

Step 5: Working with Scaffold

Since we have already created all the components, the Scaffold code will be pretty simple and self-explanatory.

Kotlin

@Composable
fun ScaffoldExample() {

    // create a scaffold state, set it to closed by default
    val scaffoldState = rememberScaffoldState(rememberDrawerState(DrawerValue.Closed))

    // Create a coroutine scope. Opening of
    // the drawer and snackbar should happen in a
    // background thread without blocking the main thread
    val coroutineScope = rememberCoroutineScope()

    // Scaffold Composable
    Scaffold(
        // pass the scaffold state
        scaffoldState = scaffoldState,

        // pass the topBar we created
        topBar = {
            TopBar(
                // When menu is clicked open the
                // drawer in coroutine scope
                onMenuClicked = {
                    coroutineScope.launch {
                        // to close use -> scaffoldState.drawerState.close()
                        scaffoldState.drawerState.open()
                    }
                })
        },

        // pass the bottomBar we created
        bottomBar = { BottomBar() },

        // Pass the body in
        // content parameter
        content = { Body() },

        // pass the drawer
        drawerContent = { Drawer() },

        floatingActionButton = {
            // Create a floating action button in the
            // floatingActionButton parameter of scaffold
            FloatingActionButton(
                onClick = {
                    // When clicked open Snackbar
                    coroutineScope.launch {
                        when (scaffoldState.snackbarHostState.showSnackbar(
                            // Message in the snackbar
                            message = "Snack Bar",
                            actionLabel = "Dismiss"
                        )) {
                            SnackbarResult.Dismissed -> {
                                // do something when
                                // snack bar is dismissed
                            }

                            SnackbarResult.ActionPerformed -> {
                                // when it appears
                            }
                        }
                    }
                }) {
                // Simple Text inside FAB
                Text(text = "X")
            }
        }
    )
}

Now call this composable from setContent in the MainActivity class.

Kotlin

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            Surface(color = Color.White) {
                // Scaffold we created
                ScaffoldExample()
            }
        }
    }
}

Now run the app and see it working.
Complete code:

Kotlin

package com.gfg.scaffoldjetpackcompose

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.foundation.background
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.Arrangement
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.material.*
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Menu
import androidx.compose.runtime.Composable
import androidx.compose.runtime.rememberCoroutineScope
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp
import kotlinx.coroutines.launch

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            Surface(color = Color.White) {
                // Scaffold we created
                ScaffoldExample()
            }
        }
    }
}

@Composable
fun ScaffoldExample() {

    // create a scaffold state, set it to closed by default
    val scaffoldState = rememberScaffoldState(rememberDrawerState(DrawerValue.Closed))

    // Create a coroutine scope. Opening of the drawer
    // and snackbar should happen in a background
    // thread without blocking the main thread
    val coroutineScope = rememberCoroutineScope()

    // Scaffold Composable
    Scaffold(
        // pass the scaffold state
        scaffoldState = scaffoldState,

        // pass the topBar we created
        topBar = {
            TopBar(
                // When menu is clicked open the
                // drawer in coroutine scope
                onMenuClicked = {
                    coroutineScope.launch {
                        // to close use -> scaffoldState.drawerState.close()
                        scaffoldState.drawerState.open()
                    }
                })
        },

        // pass the bottomBar we created
        bottomBar = { BottomBar() },

        // Pass the body in
        // content parameter
        content = { Body() },

        // pass the drawer
        drawerContent = { Drawer() },

        floatingActionButton = {
            // Create a floating action button in the
            // floatingActionButton parameter of scaffold
            FloatingActionButton(
                onClick = {
                    // When clicked open Snackbar
                    coroutineScope.launch {
                        when (scaffoldState.snackbarHostState.showSnackbar(
                            // Message in the snackbar
                            message = "Snack Bar",
                            actionLabel = "Dismiss"
                        )) {
                            SnackbarResult.Dismissed -> {
                                // do something when
                                // snack bar is dismissed
                            }

                            SnackbarResult.ActionPerformed -> {
                                // when it appears
                            }
                        }
                    }
                }) {
                // Simple Text inside FAB
                Text(text = "X")
            }
        }
    )
}

// A function which will receive a
// callback to trigger opening the drawer
@Composable
fun TopBar(onMenuClicked: () -> Unit) {

    // TopAppBar Composable
    TopAppBar(
        // Provide Title
        title = { Text(text = "Scaffold||GFG", color = Color.White) },

        // Provide the navigation Icon (Icon on the left to toggle drawer)
        navigationIcon = {
            Icon(
                imageVector = Icons.Default.Menu,
                contentDescription = "Menu",
                // When clicked trigger onClick
                // callback to trigger drawer open
                modifier = Modifier.clickable(onClick = onMenuClicked),
                tint = Color.White
            )
        },

        // background color of topAppBar
        backgroundColor = Color(0xFF0F9D58)
    )
}

@Composable
fun BottomBar() {
    // BottomAppBar Composable
    BottomAppBar(
        backgroundColor = Color(0xFF0F9D58)
    ) {
        Text(text = "Bottom App Bar", color = Color.White)
    }
}

@Composable
fun Body() {
    Column(
        verticalArrangement = Arrangement.Center,
        horizontalAlignment = Alignment.CenterHorizontally,
        modifier = Modifier
            .fillMaxSize()
            .background(Color.White)
    ) {
        Text(text = "Body Content", color = Color(0xFF0F9D58))
    }
}

@Composable
fun Drawer() {
    // Column Composable
    Column(
        Modifier
            .background(Color.White)
            .fillMaxSize()
    ) {
        // repeat is a loop which
        // takes count as argument
        repeat(5) { item ->
            Text(
                text = "Item number $item",
                modifier = Modifier.padding(8.dp),
                color = Color.Black
            )
        }
    }
}

Output:

Get the complete project from GitHub.
[ { "code": null, "e": 53, "s": 25, "text": "\n28 Jul, 2021" }, { "code": null, "e": 579, "s": 53, "text": "There are a lot of apps that contain TopAppBar, Drawer, Floating Action Button, BottomAppBar (in the form of bottom navigation), Snackbar. While you can individually set up a...
Convert Date to XMLGregorianCalendar in Java
30 Jun, 2021

XML Gregorian Calendar: The rules for specifying dates in XML format are defined in the XML Schema standard. The Java XMLGregorianCalendar class, introduced in Java 1.5, is a representation of the W3C XML Schema 1.0 date/time datatypes and is required to use the XML format. In this approach, we first change the standard date to the Gregorian Calendar date format and then change it to an XML Gregorian date using the DatatypeFactory.newInstance() method, which creates new javax.xml.datatype objects that map XML to/from Java objects.

Code:

Java

// Java program to Convert Date to XMLGregorianCalendar

// importing necessary packages
import java.util.Date;
import java.util.GregorianCalendar;
import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.XMLGregorianCalendar;

public class DateToXMLGregorianCalendar {
    public static void main(String[] args)
    {
        // Create Date Object
        Date current_date = new Date();

        // current date time in standard format
        System.out.println("Standard Format :- "
                           + current_date);

        XMLGregorianCalendar xmlDate = null;

        // Gregorian Calendar object creation
        GregorianCalendar gc = new GregorianCalendar();

        // giving current date and time to gc
        gc.setTime(current_date);

        try {
            xmlDate = DatatypeFactory.newInstance()
                          .newXMLGregorianCalendar(gc);
        }
        catch (Exception e) {
            e.printStackTrace();
        }

        // current date time in XMLGregorianCalendar format
        System.out.println("XMLGregorianCalendar Format :- "
                           + xmlDate);
    }
}

Output:

Standard Format :- Tue Feb 16 17:44:25 UTC 2021
XMLGregorianCalendar Format :- 2021-02-16T17:44:25.164Z
[ { "code": null, "e": 28, "s": 0, "text": "\n30 Jun, 2021" }, { "code": null, "e": 304, "s": 28, "text": "XML Gregorian Calendar: The rules for specifying dates in XML format are defined in the XML Schema standard. The Java XMLGregorianCalendar class, introduced in Java 1.5, is a ...
Python | Empty String to None Conversion
30 Jan, 2020

Sometimes, while working with Machine Learning, we can encounter empty strings and we wish to convert them to None for data consistency. This and many other utilities can require a solution to this problem. Let's discuss certain ways in which this problem can be solved.

Method #1 : Using lambda

This task can be performed using a lambda function. In this we check the string for None or the empty string using the or operator and replace the empty string with None.

# Python3 code to demonstrate working of
# Empty String to None Conversion
# Using lambda

# initializing list of strings
test_list = ["Geeks", '', "CS", '', '']

# printing original list
print("The original list is : " + str(test_list))

# using lambda
# Empty String to None Conversion
conv = lambda i : i or None
res = [conv(i) for i in test_list]

# printing result
print("The list after conversion of Empty Strings : " + str(res))

Output:

The original list is : ['Geeks', '', 'CS', '', '']
The list after conversion of Empty Strings : ['Geeks', None, 'CS', None, None]

Method #2 : Using str()

The str() function can also be wrapped around the same or expression. Note, however, that str(None) produces the string 'None' rather than the None object, so drop the str() call if the actual None object is required.

# Python3 code to demonstrate working of
# Empty String to None Conversion
# Using str()

# initializing list of strings
test_list = ["Geeks", '', "CS", '', '']

# printing original list
print("The original list is : " + str(test_list))

# using str()
# Empty String to None Conversion
res = [str(i or None) for i in test_list]

# printing result
print("The list after conversion of Empty Strings : " + str(res))

Output:

The original list is : ['Geeks', '', 'CS', '', '']
The list after conversion of Empty Strings : ['Geeks', 'None', 'CS', 'None', 'None']
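Both methods above rely on Python treating the empty string as falsy, so `x or None` yields None exactly for empty strings. A quick sketch of that underlying behaviour (not part of the original article):

```python
# `or` short-circuits: it returns the first operand if it is
# truthy, otherwise the second. '' is falsy, so `s or None`
# maps the empty string to None and keeps other strings as-is.
values = ['Geeks', '', 'CS']
converted = [s or None for s in values]
print(converted)  # ['Geeks', None, 'CS']
```

Be aware that this also converts any other falsy value (such as 0 or an empty list) to None if the input list is not purely strings.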
[ { "code": null, "e": 28, "s": 0, "text": "\n30 Jan, 2020" }, { "code": null, "e": 300, "s": 28, "text": "Sometimes, while working with Machine Learning, we can encounter empty strings and we wish to convert to the None for data consistency. This and many other utilities can requi...
How to select an element by its class name in AngularJS ?
14 Oct, 2020

Given an HTML document, the task is to select an element by its className using AngularJS.

Approach: The approach is to use the document.querySelector() method to get the element of a particular className. In the first example the element of className class1 is selected and its background color is changed to green. In the second example, 2 elements of the same class are selected and some of the CSS is changed.

Example 1:

<!DOCTYPE HTML>
<html>

<head>
    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.13/angular.min.js">
    </script>

    <script>
        var myApp = angular.module("app", []);

        myApp.controller("controller", function ($scope) {
            $scope.getClass = function () {
                var el = angular.element(
                    document.querySelector(".class1"));
                el.css('background', 'green');
            };
        });
    </script>
</head>

<body style="text-align:center;">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>

    <p>
        How to select an element
        by its class in AngularJS
    </p>

    <div ng-app="app">
        <div ng-controller="controller">
            <p class="class1">
                This is GeeksForGeeks
            </p>

            <input type="button" value="click here"
                   ng-click="getClass()">
        </div>
    </div>
</body>

</html>

Output:

Example 2:

<!DOCTYPE HTML>
<html>

<head>
    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.13/angular.min.js">
    </script>

    <script>
        var myApp = angular.module("app", []);

        myApp.controller("controller", function ($scope) {
            $scope.getClass = function () {
                var el = angular.element(
                    document.querySelectorAll(".class1"));
                el.css({
                    'background': 'green',
                    'color': 'white'
                });
            };
        });
    </script>
</head>

<body style="text-align:center;">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>

    <p>
        How to select an element
        by its class in AngularJS
    </p>

    <div ng-app="app">
        <div ng-controller="controller">
            <p class="class1">
                This is GeeksForGeeks
            </p>

            <p class="class1">
                A computer science portal
            </p>

            <input type="button" value="click here"
                   ng-click="getClass()">
        </div>
    </div>
</body>

</html>

Output:
[ { "code": null, "e": 28, "s": 0, "text": "\n14 Oct, 2020" }, { "code": null, "e": 122, "s": 28, "text": "Given an HTML document and the task is to select an element by its className using AngularJS." }, { "code": null, "e": 441, "s": 122, "text": "Approach: Th...
PyTorch – How to compute the inverse of a square matrix?
To compute the inverse of a square matrix, we can apply the torch.linalg.inv() method. It returns a new tensor holding the inverse of the given matrix. It accepts a square matrix, a batch of square matrices, and also batches of square matrices. A matrix is a 2D torch tensor. It supports input of float, double, cfloat, and cdouble data types. The inverse matrix exists if and only if the square matrix is invertible.

Syntax:

torch.linalg.inv(M)

Where M is a square matrix or a batch of square matrices. It returns the inverse matrix.

We could use the following steps to compute the inverse of a square matrix −

Import the required library. In all the following examples, the required Python library is torch. Make sure you have already installed it.

import torch

Define a square matrix. Here, we define a square matrix (a 2D tensor of size 3×3).

M = torch.tensor([[1., 2., 3.], [1.5, 2., 2.3], [.1, .2, .5]])

Compute the inverse of the square matrix using torch.linalg.inv(M), where M is the square matrix or a batch of square matrices. Optionally assign this value to a new variable.

M_inv = torch.linalg.inv(M)

Print the above computed inverse matrix.

print("Inverse:", M_inv)

Let's take a couple of examples to demonstrate how to compute the inverse of a square matrix.
Example 1

# Python program to compute the inverse of a square matrix

# import required library
import torch

# define a 3x3 square matrix
M = torch.tensor([[1., 2., 3.], [1.5, 2., 2.3], [.1, .2, .5]])
print("Matrix M:\n", M)

# compute the inverse of above defined matrix
Minv = torch.linalg.inv(M)
print("Inverse Matrix:\n", Minv)

It will produce the following output −

Matrix M:
tensor([[1.0000, 2.0000, 3.0000],
        [1.5000, 2.0000, 2.3000],
        [0.1000, 0.2000, 0.5000]])
Inverse Matrix:
tensor([[ -2.7000,   2.0000,   7.0000],
        [  2.6000,  -1.0000, -11.0000],
        [ -0.5000,   0.0000,   5.0000]])

Example 2

# Python program to compute the inverse of a square matrix

# import required library
import torch

# define a 3x3 square matrix of random complex numbers
M = torch.randn(3, 3, dtype = torch.complex128)
print("Matrix M:\n", M)

# compute the inverse of above defined matrix
Minv = torch.linalg.inv(M)
print("Inverse Matrix:\n", Minv)

It will produce the following output −

Matrix M:
tensor([[ 0.4425-1.4046j, -0.2492+0.7280j, -0.4746-0.4261j],
        [-0.0246-0.4826j, -0.0250-0.3656j,  1.1983-0.4130j],
        [ 0.1904+0.7817j,  0.5823-0.2140j,  0.6129+0.0590j]],
       dtype=torch.complex128)
Inverse Matrix:
tensor([[ 0.3491+0.2565j, -0.2743+0.2843j,  0.4041-0.3382j],
        [ 0.4856-0.6789j, -0.2541+0.0598j,  1.2471-0.5962j],
        [ 0.0221+0.2874j,  0.6732+0.0512j,  0.1537+0.5768j]],
       dtype=torch.complex128)

Example 3

# Python program to compute the inverse of a batch of matrices

# import required library
import torch

# define a batch of two 3x3 square matrices
B = torch.randn(2, 3, 3)
print("Batch of Matrices :\n", B)

# compute the inverse of above defined batch matrices
Binv = torch.linalg.inv(B)
print("Inverse Matrices:\n", Binv)

It will produce the following output −

Batch of Matrices :
tensor([[[ 1.0002,  0.4318, -0.9800],
         [-1.7990,  0.0913,  0.9440],
         [-0.1339,  0.0824, -0.5501]],

        [[ 0.5289, -0.0909,  0.0354],
         [-0.2159, -0.5417,  0.3659],
         [-0.7216, -0.0669, -0.6662]]])
Inverse Matrices:
tensor([[[ 0.2685, -0.3290, -1.0427],
         [ 2.3415,  1.4297, -1.7177],
         [ 0.2852,  0.2941, -1.8211]],

        [[ 1.6932, -0.2766, -0.0620],
         [-1.7919, -1.4360, -0.8838],
         [-1.6543,  0.4438, -1.3452]]])
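The inverse returned by torch.linalg.inv() always satisfies M · M⁻¹ = I. As a PyTorch-free illustration of that property (a sketch using the closed-form 2×2 inverse, not code from the article), we can verify the identity by hand:

```python
# For an invertible 2x2 matrix [[a, b], [c, d]], the inverse is
# (1/det) * [[d, -b], [-c, a]] with det = a*d - b*c.
def inv2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Plain 2x2 matrix multiplication, no external libraries.
def matmul2x2(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

M = [[4.0, 7.0], [2.0, 6.0]]
I = matmul2x2(M, inv2x2(M))
print(I)  # approximately [[1.0, 0.0], [0.0, 1.0]]
```

In PyTorch the same check can be done with torch.allclose(M @ Minv, torch.eye(3)), up to floating-point tolerance.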
[ { "code": null, "e": 1423, "s": 1187, "text": "To compute the inverse of a square matrix, we could apply torch.linalg.inv() method. It returns a new tensor with inverse of the given matrix. It accepts a square matrix, a batch of square matrices, and also batches of square matrices." }, { "co...
Moment.js isMoment() Function
29 Jul, 2020

The isMoment() function is used to check whether a variable is a moment object in Moment.js. To check if a variable is a moment object, use moment.isMoment().

Syntax:

moment.isMoment(obj);

Parameter: Object

Returns: True or False

Installation of moment module:

You can visit the link to Install moment module. You can install this package by using this command.

npm install moment

After installing the moment module, you can check your moment version in command prompt using the command.

npm version moment

After that, you can just create a folder and add a file, for example, index.js. To run this file you need to run the following command.

node index.js

Example 1: Filename: index.js

// Requiring module
const moment = require('moment');

var bool2 = moment.isMoment(new Date()); // false
console.log(bool2);

var bool3 = moment.isMoment(moment()); // true
console.log(bool3);

Steps to run the program:

The project structure will look like this:

Make sure you have installed moment module using the following command:

npm install moment

Run index.js file using below command:

node index.js

Output:

false
true

Example 2: Filename: index.js

// Requiring module
const moment = require('moment');

function checkMoment(data){
    return moment.isMoment(data);
}

var bool = checkMoment(moment());
console.log(bool);

Run index.js file using below command:

node index.js

Output:

true

Reference: https://momentjs.com/docs/#/query/is-a-moment/
[ { "code": null, "e": 28, "s": 0, "text": "\n29 Jul, 2020" }, { "code": null, "e": 209, "s": 28, "text": "It is used to check whether a variable is a particular moment or not in Moment.js using the isMoment() function that checks if a variable is a moment object, use moment.isMome...
How to switch between multiple CSS stylesheets using JavaScript ?
21 Jul, 2020

Many websites on the internet are available in multiple themes. One can obtain this feature by adding multiple stylesheets to the HTML code and enabling one at a time. A CSS file is included in the HTML in the <head> tag using the <link> tag.

<link id="theme" rel="stylesheet" type="text/css" href="light.css" />

The "href" attribute specifies the file location of the CSS file. By altering this tag, we can add new CSS to the website. The implementation can be done using any of the following methods.

Method 1: When you want to make a switch or toggle button, to toggle the CSS. It switches between the values depending upon the currently active value.

HTML

<!DOCTYPE html>
<html>

<head>
    <!-- Add the style sheet. -->
    <link id="theme" rel="stylesheet"
          type="text/css" href="light.css" />

    <script>
        function toggleTheme() {

            // Obtains an array of all <link>
            // elements.
            // Select your element using indexing.
            var theme = document.getElementsByTagName('link')[0];

            // Change the value of href attribute
            // to change the css sheet.
            if (theme.getAttribute('href') == 'light.css') {
                theme.setAttribute('href', 'dark.css');
            }
            else {
                theme.setAttribute('href', 'light.css');
            }
        }
    </script>
</head>

<body>
    <h2>Changing Style Sheets</h2>
    <br />
    Click below button to switch
    between light and dark themes.<br />
    <button onclick="toggleTheme()">Switch</button>
</body>

</html>

Output:

Light Theme:
On clicking the switch button:
Dark Theme:

Method 2: When you want to select from multiple style sheets. The value for the "href" attribute is passed to the function call itself.

Prerequisite: Prepare all the style sheets in a folder.

HTML

<!DOCTYPE html>
<html>

<head>
    <!-- Add the style sheet. -->
    <link id="theme" rel="stylesheet"
          type="text/css" href="light.css" />

    <script>
        function toggleTheme(value) {

            // Obtain the name of stylesheet
            // as a parameter and set it
            // using href attribute.
            var sheets = document.getElementsByTagName('link');
            sheets[0].href = value;
        }
    </script>
</head>

<body>
    <h2>Changing Style Sheets</h2>
    <br />
    Switch between multiple themes
    using the buttons below.<br />

    <button onclick="toggleTheme('light.css')">
        Light
    </button>

    <button onclick="toggleTheme('dark.css')">
        Dark
    </button>

    <button onclick="toggleTheme('geeky.css')">
        Geeky
    </button>

    <button onclick="toggleTheme('aquatic.css')">
        Aquatic
    </button>
</body>

</html>

Output:

Light Theme:
Dark Theme:
Geeky Theme:
Aquatic Theme:

Note: The corresponding CSS files with required names should be available and the path to them should be passed using the function. The files specified here are placed in the same folder as the HTML file so that the path resembles 'light.css'.
[ { "code": null, "e": 52, "s": 24, "text": "\n21 Jul, 2020" }, { "code": null, "e": 298, "s": 52, "text": "Many websites on the internet are available in multiple themes. One can obtain any feature by making multiple stylesheets into the HTML code and enabling one at a time. A CSS...
Program to check Strong Number
12 Jun, 2022

Strong Numbers are numbers whose sum of the factorials of their digits equals the original number. Given a number, check if it is a Strong Number or not.

Examples:

Input : n = 145
Output : Yes
Sum of digit factorials = 1! + 4! + 5!
                        = 1 + 24 + 120
                        = 145

Input : n = 534
Output : No

Algorithm:

1) Initialize sum of factorials as 0.
2) For every digit d, do the following:
   a) Add d! to the sum of factorials.
3) If the sum of factorials is the same as the given number, return true.
4) Else return false.

An optimization is to precompute factorials of all digits from 0 to 9.

C++ Java Python3 C# PHP Javascript

// C++ program to check if a number is
// strong or not.
#include <bits/stdc++.h>
using namespace std;

int f[10];

// Fills factorials of digits from 0 to 9.
void preCompute()
{
    f[0] = f[1] = 1;
    for (int i = 2; i < 10; ++i)
        f[i] = f[i - 1] * i;
}

// Returns true if x is Strong
bool isStrong(int x)
{
    int factSum = 0;

    // Traverse through all digits of x.
    int temp = x;
    while (temp) {
        factSum += f[temp % 10];
        temp /= 10;
    }

    return (factSum == x);
}

// Driver code
int main()
{
    preCompute();

    int x = 145;
    isStrong(x) ? cout << "Yes\n" : cout << "No\n";

    x = 534;
    isStrong(x) ? cout << "Yes\n" : cout << "No\n";

    return 0;
}

// Java program to check if
// a number is Strong or not

class CheckStrong {

    static int f[] = new int[10];

    // Fills factorials of digits from 0 to 9.
    static void preCompute()
    {
        f[0] = f[1] = 1;
        for (int i = 2; i < 10; ++i)
            f[i] = f[i - 1] * i;
    }

    // Returns true if x is Strong
    static boolean isStrong(int x)
    {
        int factSum = 0;

        // Traverse through all digits of x.
        int temp = x;
        while (temp > 0) {
            factSum += f[temp % 10];
            temp /= 10;
        }

        return (factSum == x);
    }

    // main function
    public static void main(String[] args)
    {
        // calling preCompute
        preCompute();

        // first pass
        int x = 145;
        if (isStrong(x)) {
            System.out.println("Yes");
        }
        else
            System.out.println("No");

        // second pass
        x = 534;
        if (isStrong(x)) {
            System.out.println("Yes");
        }
        else
            System.out.println("No");
    }
}

# Python program to check if a number is
# strong or not.

f = [None] * 10

# Fills factorials of digits from 0 to 9.
def preCompute() :
    f[0] = f[1] = 1
    for i in range(2, 10) :
        f[i] = f[i - 1] * i

# Returns true if x is Strong
def isStrong(x) :
    factSum = 0

    # Traverse through all digits of x.
    temp = x
    while (temp) :
        factSum = factSum + f[temp % 10]
        temp = temp // 10

    return (factSum == x)

# Driver code
preCompute()
x = 145
if (isStrong(x)) :
    print("Yes")
else :
    print("No")

x = 534
if (isStrong(x)) :
    print("Yes")
else :
    print("No")

# This code is contributed by Nikita Tiwari.

// C# program to check if
// a number is Strong or not
using System;

class CheckStrong {

    static int[] f = new int[10];

    // Fills factorials of digits from 0 to 9.
    static void preCompute()
    {
        f[0] = f[1] = 1;
        for (int i = 2; i < 10; ++i)
            f[i] = f[i - 1] * i;
    }

    // Returns true if x is Strong
    static bool isStrong(int x)
    {
        int factSum = 0;

        // Traverse through all digits of x.
        int temp = x;
        while (temp > 0) {
            factSum += f[temp % 10];
            temp /= 10;
        }

        return (factSum == x);
    }

    // Driver Code
    public static void Main()
    {
        // calling preCompute
        preCompute();

        // first pass
        int x = 145;
        if (isStrong(x)) {
            Console.WriteLine("Yes");
        }
        else
            Console.WriteLine("No");

        // second pass
        x = 534;
        if (isStrong(x)) {
            Console.WriteLine("Yes");
        }
        else
            Console.WriteLine("No");
    }
}

// This code is contributed by Nitin Mittal.

<?php
// PHP program to check if a number
// is strong or not.

$f[10] = array();

// Fills factorials of digits
// from 0 to 9.
function preCompute()
{
    global $f;
    $f[0] = $f[1] = 1;
    for ($i = 2; $i < 10; ++$i)
        $f[$i] = $f[$i - 1] * $i;
}

// Returns true if x is Strong
function isStrong($x)
{
    global $f;
    $factSum = 0;

    // Traverse through all digits of x.
    $temp = $x;
    while ($temp)
    {
        $factSum += $f[$temp % 10];
        $temp = (int)($temp / 10);
    }

    return ($factSum == $x);
}

// Driver code
preCompute();
$x = 145;
if (isStrong($x))
    echo "Yes\n";
else
    echo "No\n";

$x = 534;
if (isStrong($x))
    echo "Yes\n";
else
    echo "No\n";

// This code is contributed by jit_t
?>

<script>

// Javascript program to check if a number is
// strong or not.

let f = new Array(10);

// Fills factorials of digits from 0 to 9.
function preCompute()
{
    f[0] = f[1] = 1;
    for (let i = 2; i < 10; ++i)
        f[i] = f[i - 1] * i;
}

// Returns true if x is Strong
function isStrong(x)
{
    let factSum = 0;

    // Traverse through all digits of x.
    let temp = x;
    while (temp)
    {
        factSum += f[temp % 10];
        temp = Math.floor(temp / 10);
    }

    return (factSum == x);
}

// Driver code
preCompute();

let x = 145;
isStrong(x) ? document.write("Yes" + "<br>")
            : document.write("No" + "<br>");

x = 534;
isStrong(x) ? document.write("Yes" + "<br>")
            : document.write("No" + "<br>");

// This code is contributed by Mayank Tyagi

</script>

Output:

Yes
No

Time Complexity: O(log n)
Auxiliary Space: O(n)

This article is contributed by Pramod Kumar. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
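The precomputed factorial table can also be replaced by the standard library's math.factorial for a more compact check. A sketch of that variant (not from the original article) that doubles as a quick search for Strong Numbers:

```python
from math import factorial

# A number is Strong when the sum of the factorials
# of its digits equals the number itself.
def is_strong(n):
    return n == sum(factorial(int(d)) for d in str(n))

print(is_strong(145))  # True
print(is_strong(534))  # False

# All Strong Numbers below 1000
print([n for n in range(1, 1000) if is_strong(n)])  # [1, 2, 145]
```

This trades the O(1) table lookup per digit for a factorial call, which is fine for small inputs but the precomputed table in the article's code is faster for bulk checks.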
[ { "code": null, "e": 52, "s": 24, "text": "\n12 Jun, 2022" }, { "code": null, "e": 215, "s": 52, "text": "Strong Numbers are the numbers whose sum of factorial of digits is equal to the original number. Given a number, check if it is a Strong Number or not.Examples: " }, { ...
Stack setSize() method in Java with Example
24 Dec, 2018

The setSize() method of the java.util.Stack class changes the size of this Stack instance to the size passed as the parameter.

Syntax:

public void setSize(int size)

Parameters: This method takes the new size as a parameter.

Exception: This method throws ArrayIndexOutOfBoundsException if the new size is negative.

Below are the examples to illustrate the setSize() method.

Example 1:

// Java program to demonstrate
// setSize() method for Integer value

import java.util.*;

public class GFG1 {
    public static void main(String[] argv)
    {
        try {
            // Creating object of Stack<Integer>
            Stack<Integer> stack = new Stack<Integer>();

            // adding element to stack
            stack.add(10);
            stack.add(20);
            stack.add(30);
            stack.add(40);

            // Print the Stack
            System.out.println("Stack: " + stack);

            // Print the current size of Stack
            System.out.println("Current size of Stack: "
                               + stack.size());

            // Change the size to 10
            stack.setSize(10);

            // Print the current size of Stack
            System.out.println("New size of Stack: "
                               + stack.size());
        }
        catch (Exception e) {
            System.out.println("Exception thrown : " + e);
        }
    }
}

Output:

Stack: [10, 20, 30, 40]
Current size of Stack: 4
New size of Stack: 10

Example 2:

// Java program to demonstrate
// setSize() method for String value

import java.util.*;

public class GFG1 {
    public static void main(String[] argv)
    {
        try {
            // Creating object of Stack<String>
            Stack<String> stack = new Stack<String>();

            // adding element to stack
            stack.add("A");
            stack.add("B");
            stack.add("C");
            stack.add("D");

            // Print the Stack
            System.out.println("Stack: " + stack);

            // Print the current size of Stack
            System.out.println("Current size of Stack: "
                               + stack.size());

            // Change the size to -1
            stack.setSize(-1);

            // Print the current size of Stack
            System.out.println("New size of Stack: "
                               + stack.size());
        }
        catch (Exception e) {
            System.out.println("Exception thrown : " + e);
        }
    }
}

Output:

Stack: [A, B, C, D]
Current size of Stack: 4
Exception thrown : java.lang.ArrayIndexOutOfBoundsException: -1
Python program to find the sum of Characters ascii values in String List
11 Dec, 2020

Given a list of strings, the task is to write a Python program that computes, for each string, the sum of its characters' alphabet-position values (derived from their ASCII codes).

Examples:

Input : test_list = ["geeksforgeeks", "teaches", "discipline"]
Output : [133, 61, 100]
Explanation : Positional character values are summed to get the required values.

Input : test_list = ["geeksforgeeks", "discipline"]
Output : [133, 100]
Explanation : Positional character values are summed to get the required values.

Method 1: Using ord() + loop

Here we iterate over each character of each string and keep adding its positional value to a running sum. The summed value is appended to the result list.

Python3

# Python3 code to demonstrate working of
# Characters Positions Summation in String List
# Using ord() + loop

# initializing list
test_list = ["geeksforgeeks", "teaches", "us", "discipline"]

# printing original list
print("The original list is : " + str(test_list))

res = []
for sub in test_list:
    ascii_sum = 0

    # getting positional value sum
    for ele in sub:
        ascii_sum += (ord(ele) - 96)
    res.append(ascii_sum)

# printing result
print("Position Summation List : " + str(res))

Output:

The original list is : ['geeksforgeeks', 'teaches', 'us', 'discipline']
Position Summation List : [133, 61, 40, 100]

Method 2: Using list comprehension + sum() + ord()

Here sum() computes the summation, ord() gives the ASCII value from which the position is derived, and a list comprehension offers a one-liner solution to the problem.
Python3

# Python3 code to demonstrate working of
# Characters Positional Summation in String List
# Using list comprehension + sum() + ord()

# initializing list
test_list = ["geeksforgeeks", "teaches", "us", "discipline"]

# printing original list
print("The original list is : " + str(test_list))

# sum() gets summation, list comprehension
# used to perform task in one line
res = [sum([ord(ele) - 96 for ele in sub]) for sub in test_list]

# printing result
print("Positional Summation List : " + str(res))

Output:

The original list is : ['geeksforgeeks', 'teaches', 'us', 'discipline']
Positional Summation List : [133, 61, 40, 100]
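Note that subtracting 96 yields alphabet positions (a = 1) rather than true ASCII sums. If raw ASCII code-point totals are wanted, as the title suggests, simply drop the offset. A minimal sketch of that variant:

```python
# Summing raw ASCII code points (no -96 offset, so 'u' counts
# as 117 rather than its alphabet position 21)
test_list = ["geeksforgeeks", "teaches", "us", "discipline"]

res = [sum(map(ord, sub)) for sub in test_list]
print(res)  # [1381, 733, 232, 1060]
```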
Java.util.Random class in Java
07 May, 2019

The Random class is used to generate pseudorandom numbers in Java. An instance of this class is thread-safe; it is, however, cryptographically insecure. The class provides various methods to generate different random data types such as float, double, and int.

Constructors:

Random(): Creates a new random number generator
Random(long seed): Creates a new random number generator using a single long seed

Declaration:

public class Random extends Object implements Serializable

Methods:

java.util.Random.doubles(): Returns an effectively unlimited stream of pseudorandom double values, each between zero (inclusive) and one (exclusive).
Syntax: public DoubleStream doubles()
Returns: a stream of pseudorandom double values

java.util.Random.ints(): Returns an effectively unlimited stream of pseudorandom int values.
Syntax: public IntStream ints()
Returns: a stream of pseudorandom int values

java.util.Random.longs(): Returns an effectively unlimited stream of pseudorandom long values.
Syntax: public LongStream longs()
Returns: a stream of pseudorandom long values

java.util.Random.next(int bits): Generates the next pseudorandom number.
Syntax: protected int next(int bits)
Parameters: bits - random bits
Returns: the next pseudorandom value from this random number generator's sequence

java.util.Random.nextBoolean(): Returns the next pseudorandom, uniformly distributed boolean value from this random number generator's sequence.
Syntax: public boolean nextBoolean()
Returns: the next pseudorandom, uniformly distributed boolean value from this random number generator's sequence

java.util.Random.nextBytes(byte[] bytes): Generates random bytes and places them into a user-supplied byte array.
Syntax: public void nextBytes(byte[] bytes)
Parameters: bytes - the byte array to fill with random bytes
Throws: NullPointerException - if the byte array is null

java.util.Random.nextDouble(): Returns the next pseudorandom, uniformly distributed double value between 0.0 and 1.0 from this random number generator's sequence.
Syntax: public double nextDouble()
Returns: the next pseudorandom, uniformly distributed double value between 0.0 and 1.0 from this random number generator's sequence

java.util.Random.nextFloat(): Returns the next pseudorandom, uniformly distributed float value between 0.0 and 1.0 from this random number generator's sequence.
Syntax: public float nextFloat()
Returns: the next pseudorandom, uniformly distributed float value between 0.0 and 1.0 from this random number generator's sequence

java.util.Random.nextGaussian(): Returns the next pseudorandom, Gaussian ("normally") distributed double value with mean 0.0 and standard deviation 1.0 from this random number generator's sequence.
Syntax: public double nextGaussian()
Returns: the next pseudorandom, Gaussian ("normally") distributed double value with mean 0.0 and standard deviation 1.0 from this random number generator's sequence

java.util.Random.nextInt(): Returns the next pseudorandom, uniformly distributed int value from this random number generator's sequence.
Syntax: public int nextInt()
Returns: the next pseudorandom, uniformly distributed int value from this random number generator's sequence

java.util.Random.nextInt(int bound): Returns a pseudorandom, uniformly distributed int value between 0 (inclusive) and the specified value (exclusive), drawn from this random number generator's sequence.
Syntax: public int nextInt(int bound)
Parameters: bound - the upper bound (exclusive). Must be positive.
Returns: the next pseudorandom, uniformly distributed int value between zero (inclusive) and bound (exclusive) from this random number generator's sequence
Throws: IllegalArgumentException - if bound is not positive

java.util.Random.nextLong(): Returns the next pseudorandom, uniformly distributed long value from this random number generator's sequence.
Syntax: public long nextLong()
Returns: the next pseudorandom, uniformly distributed long value from this random number generator's sequence

java.util.Random.setSeed(long seed): Sets the seed of this random number generator using a single long seed.
Syntax: public void setSeed(long seed)
Parameters: seed - the initial seed

Methods inherited from class java.lang.Object:

clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait

Java program to demonstrate usage of Random class:

// Java program to demonstrate
// method calls of Random class
import java.util.Random;

public class Test {
    public static void main(String[] args)
    {
        Random random = new Random();

        System.out.println(random.nextInt(10));
        System.out.println(random.nextBoolean());
        System.out.println(random.nextDouble());
        System.out.println(random.nextFloat());
        System.out.println(random.nextGaussian());

        byte[] bytes = new byte[10];
        random.nextBytes(bytes);
        System.out.printf("[");
        for (int i = 0; i < bytes.length; i++) {
            System.out.printf("%d ", bytes[i]);
        }
        System.out.printf("]\n");

        System.out.println(random.nextLong());
        System.out.println(random.nextInt());

        long seed = 95;
        random.setSeed(seed);

        // Note: Running any of the code lines below
        // will keep the program running, as each of the
        // methods below produces an unlimited stream of random
        // values of the corresponding type
        /*
        System.out.println("Count of all the elements in the IntStream returned = "
                           + random.ints().count());
        System.out.println("Count of all the elements in the DoubleStream returned = "
                           + random.doubles().count());
        System.out.println("Count of all the elements in the LongStream returned = "
                           + random.longs().count());
        */
    }
}

Output:

4
true
0.19674934340402916
0.7372021
1.4877581394085997
[-44 75 68 89 81 -72 -1 -66 -64 117 ]
158739962004803677
-1344764816

Reference: Oracle

This article is contributed by Mayank Kumar.
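Because the no-argument constructor seeds the generator from a time-based value, the output above differs on every run. One property worth illustrating (a small added sketch, not from the original article): two Random instances constructed with the same seed produce identical sequences, which is useful for reproducible tests.

```java
import java.util.Random;

public class SeededRandomDemo {
    public static void main(String[] args) {
        // Same seed -> same sequence of pseudorandom values
        Random r1 = new Random(42L);
        Random r2 = new Random(42L);

        for (int i = 0; i < 5; i++) {
            int a = r1.nextInt(100);
            int b = r2.nextInt(100);
            // Each pair of draws is identical
            System.out.println(a + " == " + b);
        }
    }
}
```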
How to get the powers of an array values element-wise in Python-Pandas?
18 Aug, 2020

Let's see how to get the powers of array values element-wise. DataFrame/Series.pow() is used to raise elements to a power, either using the elements themselves or values from another Series. This function is applicable to real numbers only and does not give results for complex numbers. So let's see the programs.

Example 1: A one-dimensional array is mapped to a pandas Series with either default numeric indices or custom indexes, and then each element is raised to its own power.

Python3

# import required modules
import numpy as np
import pandas as pd

# create an array
sample_array = np.array([1, 2, 3])

# uni-dimensional arrays can be
# mapped to a pandas Series
sr = pd.Series(sample_array)

print("Original Array :")
print(sr)

# calculating element-wise power
power_array = sr.pow(sr)

print("Element-wise power array")
print(power_array)

Output:

Example 2: Powers can also be computed for floating-point decimal numbers.

Python3

# module to work with arrays in python
import array

# module required to compute power
import pandas as pd

# creating a 1-dimensional floating
# point array containing three elements
sample_array = array.array('d', [1.1, 2.0, 3.5])

# uni-dimensional arrays can
# be mapped to a pandas Series
sr = pd.Series(sample_array)

print("Original Array :")
print(sr)

# computing power of each
# element with itself
power_array = sr.pow(sr)

print("Element-wise power array")
print(power_array)

Output:

Example 3: Multi-dimensional arrays can be mapped to pandas data frames. The data frame then contains cells of numeric values (integers or floating-point numbers), each of which can be raised to its own power.
Python3

# module required to compute power
import pandas as pd

# 2-d matrix containing
# 2 rows and 3 columns
df = pd.DataFrame({'X': [1, 2],
                   'Y': [3, 4],
                   'Z': [5, 6]})

print("Original Array :")
print(df)

# power function to calculate
# power of data frame elements
# with themselves
power_array = df.pow(df)

print("Element-wise power array")
print(power_array)

Output:
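pow() is not limited to raising a frame by itself. Its first argument (named other in pandas) also accepts a scalar or another frame of matching shape. A brief sketch:

```python
import pandas as pd

df = pd.DataFrame({'X': [1, 2], 'Y': [3, 4]})

# raise every element to a fixed power
print(df.pow(2))       # X becomes 1, 4 and Y becomes 9, 16

# element-wise power using another frame of the same shape
print(df.pow(df + 1))  # e.g. 2 ** 3 = 8 and 4 ** 5 = 1024
```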
Sqoop - Export
This chapter describes how to export data back from HDFS to an RDBMS database. The target table must exist in the target database. The files given as input to Sqoop contain records, which are called rows in a table. These are read, parsed into a set of records, and delimited with a user-specified delimiter.

The default operation is to insert all the records from the input files into the database table using the INSERT statement. In update mode, Sqoop generates UPDATE statements that replace existing records in the database.

The following is the syntax for the export command.

$ sqoop export (generic-args) (export-args)
$ sqoop-export (generic-args) (export-args)

Let us take the example of employee data in a file in HDFS. The employee data is available in the emp_data file in the 'emp/' directory in HDFS. The emp_data is as follows.

1201, gopal,    manager, 50000, TP
1202, manisha,  preader, 50000, TP
1203, kalil,    php dev, 30000, AC
1204, prasanth, php dev, 30000, AC
1205, kranthi,  admin,   20000, TP
1206, satish p, grp des, 20000, GR

It is mandatory that the table to be exported is created manually and is present in the database to which the data has to be exported. The following query is used to create the table 'employee' in the mysql command line.

$ mysql
mysql> USE db;
mysql> CREATE TABLE employee (
   id INT NOT NULL PRIMARY KEY,
   name VARCHAR(20),
   deg VARCHAR(20),
   salary INT,
   dept VARCHAR(10));

The following command is used to export the table data (which is in the emp_data file on HDFS) to the employee table in the db database of the MySQL database server.

$ sqoop export \
--connect jdbc:mysql://localhost/db \
--username root \
--table employee \
--export-dir /emp/emp_data

The following command is used to verify the table in the mysql command line.

mysql> select * from employee;

If the given data is stored successfully, then you can find the following table of given employee data.
+------+--------------+-------------+-------------------+--------+
| Id   | Name         | Designation | Salary            | Dept   |
+------+--------------+-------------+-------------------+--------+
| 1201 | gopal        | manager     | 50000             | TP     |
| 1202 | manisha      | preader     | 50000             | TP     |
| 1203 | kalil        | php dev     | 30000             | AC     |
| 1204 | prasanth     | php dev     | 30000             | AC     |
| 1205 | kranthi      | admin       | 20000             | TP     |
| 1206 | satish p     | grp des     | 20000             | GR     |
+------+--------------+-------------+-------------------+--------+
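For the update mode mentioned earlier, Sqoop provides the --update-key argument, which names the column(s) used to match existing rows. A hedged sketch, assuming the same employee table with id as its primary key (this command needs a running Hadoop/MySQL setup, so it is shown for illustration only):

```
# Re-export in update mode: rows in /emp/emp_data whose id matches an
# existing row generate an UPDATE statement instead of an INSERT.
$ sqoop export \
--connect jdbc:mysql://localhost/db \
--username root \
--table employee \
--update-key id \
--export-dir /emp/emp_data
```

Adding --update-mode allowinsert on top of this makes Sqoop insert rows that do not match any existing key, giving upsert-like behavior.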