Build a user-based collaborative filtering recommendation engine for Anime | by GreekDataGuy | Towards Data Science

Today we’ll build a recommendation engine for anime, powered by user-based collaborative filtering. This is only one of several different approaches to recommender systems.

In user-based collaborative filtering:
- users are deemed similar if they like similar items
- we first discover which users are similar
- then we recommend items that other similar users like
Take a look at the picture I’ve (painstakingly) drawn above.
Sunny likes paintings by Monet, Picasso and Dali. Ted likes paintings by Monet and Picasso.

Sunny and Ted are similar because they like some of the same artists. Sunny likes Dali but Ted has never seen a Dali painting. So let's recommend Dali to Ted.
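That intuition fits in a few lines of Python — a toy sketch where the artists and the like/hasn't-seen vectors are invented purely for illustration:

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity: dot product divided by the product of the vector norms
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# columns: Monet, Picasso, Dali (1 = likes, 0 = hasn't seen)
sunny = np.array([1, 1, 1])
ted = np.array([1, 1, 0])

similarity = cosine_sim(sunny, ted)
print(similarity)  # ~0.816 -- Sunny and Ted are quite similar

# recommend artists Sunny likes that Ted hasn't seen
artists = ["Monet", "Picasso", "Dali"]
recommendations = [a for a, s, t in zip(artists, sunny, ted) if s == 1 and t == 0]
print(recommendations)  # ['Dali']
```

The real recommender below does exactly this, just with thousands of users and anime instead of two people and three painters.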
Clear as mud? Now that we understand how it works high level, let's build a recommender.
Download the data from Kaggle and load it into 2 dataframes.
anime.csv — details about the anime in our database
rating.csv — ratings by a specific user for a specific anime

DIR = 'anime-recommendations-database/'

import pandas as pd

animes = pd.read_csv(DIR + 'anime.csv')
ratings = pd.read_csv(DIR + 'rating.csv')
From reading the docs I know that a rating of -1 means a user has watched the anime but hasn’t rated it. I’m assuming these records give us no useful information, so I’ll remove them.

ratings = ratings[ratings.rating != -1]
ratings.head()
animes.head()
No one dislikes spending 75% of an article on data exploration more than me, so let's just understand the size and distribution of our data.

# number of ratings
len(ratings)
#=> 6337241

# number of users
len(ratings['user_id'].unique())
#=> 69600

# number of unique animes (in anime list, not ratings)
len(animes['anime_id'].unique())
#=> 11200

# avg number of anime rated per user
import statistics
ratings_per_user = ratings.groupby('user_id')['rating'].count()
statistics.mean(ratings_per_user.tolist())
#=> 91.05231321839081

# distribution of ratings per user
# (we may want to exclude users without many data points)
import matplotlib.pyplot as plt
%matplotlib inline
ratings_per_user.hist(bins=20, range=(0,500))

# avg number of ratings given per anime
ratings_per_anime = ratings.groupby('anime_id')['rating'].count()
statistics.mean(ratings_per_anime.tolist())
#=> 638.3843054296364

# distribution of ratings per anime
ratings_per_anime.hist(bins=20, range=(0,2500))
In user-based collaborative filtering, vectors representing users are essentially lists of the ratings they’ve given. So the more anime in our universe, the more dimensionality per user.
Let's reduce the amount of data to crunch by removing anime that hasn’t been rated by many users. Make a list of anime to keep based on id.
# counts of ratings per anime as a df
ratings_per_anime_df = pd.DataFrame(ratings_per_anime)

# remove if < 1000 ratings
filtered_ratings_per_anime_df = ratings_per_anime_df[ratings_per_anime_df.rating >= 1000]

# build a list of anime_ids to keep
popular_anime = filtered_ratings_per_anime_df.index.tolist()
And users who haven’t rated many anime.
# counts of ratings per user as a df
ratings_per_user_df = pd.DataFrame(ratings_per_user)

# remove if < 500 ratings
filtered_ratings_per_user_df = ratings_per_user_df[ratings_per_user_df.rating >= 500]

# build a list of user_ids to keep
prolific_users = filtered_ratings_per_user_df.index.tolist()
Now filter out anime and users not in those lists.
filtered_ratings = ratings[ratings.anime_id.isin(popular_anime)]
# chain on filtered_ratings -- the original snippet re-filtered `ratings` here,
# which silently threw away the anime filter above
filtered_ratings = filtered_ratings[filtered_ratings.user_id.isin(prolific_users)]
len(filtered_ratings)
#=> ~1 million (the original, unchained run reported 1005314)
We’re down from 6M to 1M rating data points. Nice.
Let's build a rating matrix between users and animes.
rating_matrix = filtered_ratings.pivot_table(index='user_id', columns='anime_id', values='rating')

# replace NaN values with 0
rating_matrix = rating_matrix.fillna(0)

# display the top few rows
rating_matrix.head()
Write a function that finds the users most similar to the current user, using cosine similarity. We’ve arbitrarily decided to find the 3 most similar users, and picked user 226 as our current user — but we could have picked anyone.
from sklearn.metrics.pairwise import cosine_similarity
import operator

def similar_users(user_id, matrix, k=3):
    # create a df of just the current user
    user = matrix[matrix.index == user_id]

    # and a df of all other users
    other_users = matrix[matrix.index != user_id]

    # calc cosine similarity between user and each other user
    similarities = cosine_similarity(user, other_users)[0].tolist()

    # create list of indices of these users
    indices = other_users.index.tolist()

    # create key/value pairs of user index and their similarity
    index_similarity = dict(zip(indices, similarities))

    # sort by similarity, highest first
    index_similarity_sorted = sorted(index_similarity.items(), key=operator.itemgetter(1))
    index_similarity_sorted.reverse()

    # grab k users off the top
    top_users_similarities = index_similarity_sorted[:k]
    users = [u[0] for u in top_users_similarities]

    return users

# try it out
current_user = 226
similar_user_indices = similar_users(current_user, rating_matrix)
print(similar_user_indices)
#=> [30773, 39021, 45603]
Now write a function to make the recommendation. We’ve set the function to return the top 5 recommended anime.

def recommend_item(user_index, similar_user_indices, matrix, items=5):
    # load vectors for similar users
    similar_users = matrix[matrix.index.isin(similar_user_indices)]

    # calc avg ratings across the 3 similar users
    similar_users = similar_users.mean(axis=0)

    # convert to dataframe so it's easy to sort and filter
    similar_users_df = pd.DataFrame(similar_users, columns=['mean'])

    # load vector for the current user
    user_df = matrix[matrix.index == user_index]

    # transpose it so it's easier to filter
    user_df_transposed = user_df.transpose()

    # rename the column as 'rating'
    user_df_transposed.columns = ['rating']

    # keep only rows with a 0 value -- anime not watched yet
    user_df_transposed = user_df_transposed[user_df_transposed['rating'] == 0]

    # generate a list of animes the user has not seen
    animes_unseen = user_df_transposed.index.tolist()

    # filter avg ratings of similar users for only anime the current user has not seen
    similar_users_df_filtered = similar_users_df[similar_users_df.index.isin(animes_unseen)]

    # order the dataframe (note: sort the *filtered* frame -- the original
    # sorted the unfiltered one, which could recommend anime already seen)
    similar_users_df_ordered = similar_users_df_filtered.sort_values(by=['mean'], ascending=False)

    # grab the top n anime
    top_n_anime = similar_users_df_ordered.head(items)
    top_n_anime_indices = top_n_anime.index.tolist()

    # look these anime up in the animes dataframe to find their names
    anime_information = animes[animes['anime_id'].isin(top_n_anime_indices)]

    return anime_information

# try it out
recommend_item(226, similar_user_indices, rating_matrix)
There we have it! The 5 highest-rated anime from the most similar users that our current user has not yet watched.
In reality, we’d want to experiment with different similarity algorithms and different numbers of similar users. But I’d like you to treat this as a rough framework for user-based collaborative filtering.
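As one example of such an experiment, swapping cosine similarity for Pearson correlation (which mean-centers each user's ratings, correcting for users who rate everything high or low) is a small change. The sketch below is mine, not from the original post — the toy matrix and function name are invented, but the layout matches the users × anime rating_matrix built above:

```python
import numpy as np
import pandas as pd

# a toy rating matrix in the same layout as rating_matrix (users x anime, 0 = unrated)
toy_matrix = pd.DataFrame(
    [[10, 8, 0, 6],
     [9, 7, 0, 5],
     [2, 0, 10, 1]],
    index=[1, 2, 3],                 # user_id
    columns=[101, 102, 103, 104],    # anime_id
)

def similar_users_pearson(user_id, matrix, k=1):
    user = matrix.loc[user_id]
    others = matrix.drop(index=user_id)
    # Pearson correlation between this user's ratings and every other user's
    sims = others.apply(lambda row: np.corrcoef(user, row)[0, 1], axis=1)
    return sims.sort_values(ascending=False).head(k).index.tolist()

print(similar_users_pearson(1, toy_matrix))  # [2] -- user 2 rates almost identically to user 1
```

The rest of the pipeline (averaging the neighbors' ratings and filtering out seen anime) stays the same; only the similarity function changes.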
Until next time.
How to export millions of records from Mysql to AWS S3? | by Harshit Rastogi | Towards Data Science

At Twilio, we handle millions of calls happening across the world daily. Once a call is over, it is logged into a MySQL DB. Customers have the ability to query the details of their calls via an API. (Yes, Twilio is an API-driven company.)
One of the tasks I recently worked on was to build a system allowing customers to export their historical call data — all of their historic call logs, up to the most recent ones.
At first glance this seems trivial, but if we go deeper and think about how such a system would scale for some of our biggest customers — who have been with us since the inception — this is really a problem of building a system to scale. A typical customer can make anywhere from a few hundred calls a day to millions.

This suddenly becomes a big data problem too: when a large customer making 1 million calls a day requests the last 5 years’ worth of data, the total number of calls is on the order of 1,000,000 × 5 × 365 ≈ 1.8 billion.
I had to think about how to optimize reading from MySQL and efficiently write to S3 so that the files would be available to download.

1. Write a cron job that queries the MySQL DB for a particular account and writes the data to S3. This works well for fetching smaller sets of records, but to make the job handle a large number of records I would need to build a mechanism to retry in the event of failure, parallelize the reads and writes for efficient download, and add monitoring to measure the success of the job. I would also have to write connectors, or use libraries, to connect with MySQL and S3.
2. Using Spark Streaming
I decided to use Apache Spark for handling this problem since on my team (Voice Insights) we already use it heavily for more real-time data processing and building analytics. Apache Spark is a popular framework for building scalable real-time processing applications and is extensively used in the industry to solve big data and Machine learning problems. One of the key features of Spark is its ability to produce/consume data from various sources such as Kafka, Kinesis, S3, Mysql, files, etc.
Apache Spark is also fault-tolerant and provides a framework for handling failures and retries elegantly. It uses a checkpointing mechanism to store the intermediate offsets of the tasks executed, so that in the event of a task failure the task can be restarted from the last saved position. It is also easy to configure the job for horizontal scaling.

Assuming the reader has a basic understanding of Spark (the official Spark documentation is a good place to start), I am going to dive straight into the code.

Let's look at how to read from a MySQL DB.
val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:port/db")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("dbtable", "schema.tablename")
  .option("user", "username")
  .option("password", "password")
  .load()
The other way to connect over JDBC is by providing the config as a Map.

val dbConfig = Map(
  "user" -> "admin",
  "password" -> "pwd",
  "url" -> "jdbc:mysql://localhost:3306/db")

val query = "SELECT * FROM CallLog WHERE CustomerId = 1"

val jdbcDF = spark.read
  .format("jdbc")
  .options(dbConfig)
  .option("dbtable", s"(${query}) AS tmp")
  .load()

// OR

val jdbcDF = spark.read
  .format("jdbc")
  .options(dbConfig)
  .option("query", query)
  .load()
There is a subtle difference between using query and dbtable. Both options will create a subquery in the FROM clause. The above query will be converted to
SELECT * FROM (SELECT * FROM CallLog where CustomerId=1) tmp WHERE 1=0
The big takeaway here is that query does not support partitionColumn, while dbtable does support partitioning, which allows for better throughput through parallelism.

Let’s say we have to export the CallLog for one of our biggest customers — we would need a divide-and-conquer approach and a better way to parallelize the effort.

What this means is that Spark can execute multiple queries against the same table concurrently, with each query covering a different range of values of a column (the partition column). This can be done in Spark by setting a few more parameters. Let's look at them:
numPartitions option defines the maximum number of partitions that can be used for parallelism in a table for reading. This also determines the maximum number of concurrent JDBC connections.
partitionColumn must be a numeric, date, or a timestamp column from the table in question. The parameters describe how to partition the table when reading in parallel with multiple workers.
lowerBound and upperBound are just used to decide the partition stride, not for filtering the rows in the table. So all rows in the table will be partitioned and returned. This option applies only to reading.
fetchSize The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to low fetch size (eg. Oracle with 10 rows). This option applies only to reading.
In the above example partitionColumn can be CallId. Let’s try to connect all the parameters.
val dbConfig = Map(
  "user" -> "admin",
  "password" -> "pwd",
  "url" -> "jdbc:mysql://localhost:3306/db",
  "numPartitions" -> "10",
  "partitionColumn" -> "CallId",
  "lowerBound" -> "0",
  "upperBound" -> "10000000")

val query = "SELECT * FROM CallLog WHERE CustomerId = 1"

val jdbcDF = spark.read
  .format("jdbc")
  .options(dbConfig)
  .option("dbtable", s"(${query}) AS tmp")
  .load()
The above configuration will result in running the following parallel queries:
SELECT * FROM CallLog WHERE CustomerId = 1 AND CallId >= 0 AND CallId < 1,000,000
SELECT * FROM CallLog WHERE CustomerId = 1 AND CallId >= 1,000,000 AND CallId < 2,000,000
SELECT * FROM CallLog WHERE CustomerId = 1 AND CallId >= 2,000,000 AND CallId < 3,000,000
...
SELECT * FROM CallLog WHERE CustomerId = 1 AND CallId >= 9,000,000
The above parallelization of queries will help to read the results from the table faster.
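The stride arithmetic behind those queries can be illustrated in a few lines of plain Python. This is a simplified sketch that mimics (but is not) Spark's internal logic — Spark additionally routes NULLs into the first partition and leaves the first and last partitions open-ended so no rows outside [lowerBound, upperBound) are dropped:

```python
def partition_predicates(column, lower, upper, num_partitions):
    # split [lower, upper) into equal strides, one WHERE clause per partition;
    # the last partition is left open-ended so rows >= upper are not lost
    stride = (upper - lower) // num_partitions
    predicates = []
    for i in range(num_partitions):
        start = lower + i * stride
        if i == num_partitions - 1:
            predicates.append(f"{column} >= {start}")
        else:
            predicates.append(f"{column} >= {start} AND {column} < {start + stride}")
    return predicates

for p in partition_predicates("CallId", 0, 10_000_000, 10):
    print(f"SELECT * FROM CallLog WHERE CustomerId = 1 AND {p}")
```

With numPartitions = 10 and bounds 0..10,000,000 this reproduces the stride of 1,000,000 shown above.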
Once Spark is able to read the data from MySQL, it is trivial to dump it into S3.

jdbcDF.write
  .format("json")
  .mode("append")
  .save(s3Path)  // e.g. "s3a://bucket/prefix"
Conclusion:
The above approach gave us the opportunity to use Spark to solve a classic batch-job problem. We are doing a lot more with Apache Spark, and this is a demonstration of just one of many use cases. I would love to hear what you are building with Spark.
Java Examples - Downloading a Webpage

How to read and download a webpage?

The following example shows how to read and download a webpage using the URL() constructor of the java.net.URL class.
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.InputStreamReader;
import java.net.URL;
public class Main {
public static void main(String[] args) throws Exception {
URL url = new URL("http://www.google.com");
BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream()));
BufferedWriter writer = new BufferedWriter(new FileWriter("data.html"));
String line;
while ((line = reader.readLine()) != null) {
System.out.println(line);
writer.write(line);
writer.newLine();
}
reader.close();
writer.close();
}
}
The above code sample prints each line of the fetched page to the console and writes the same lines to data.html; the output is the raw HTML of http://www.google.com.
The following is another example to read and download a webpage.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
public class NewClass {
public static void main(String[] args) {
URL url;
InputStream is = null;
BufferedReader br;
String line;
try {
url = new URL("https://www.tutorialspoint.com/javaexamples/net_singleuser.htm");
is = url.openStream(); // throws an IOException
br = new BufferedReader(new InputStreamReader(is));
while ((line = br.readLine()) != null) {
System.out.println(line);
}
} catch (MalformedURLException mue) {
mue.printStackTrace();
} catch (IOException ioe) {
ioe.printStackTrace();
}
finally {
try {
if (is != null) is.close();
} catch (IOException ioe) {}
}
}
}
The above code sample will produce the following result.
<!DOCTYPE html>
<!--[if IE 8]><html class="ie ie8"> <![endif]-->
<!--[if IE 9]><html class="ie ie9"> <![endif]-->
<!--[if gt IE 9]><!--> <html> <!--<![endif]-->
<head>
<!-- Basic -->
<meta charset="utf-8">
<title>Java Examples - Socket to a single client</title>
<meta name="description" content="Java Examples Socket to a single client : A beginner's tutorial containing complete knowledge of Java Syntax Object Oriented Language, Methods, Overriding, Inheritance, Polymorphism, Interfaces, Packages, Collections, Networking, Multithreading, Generics, Multimedia, Serialization, GUI." />
<meta name="keywords" content="Java, Tutorials, Learning, Beginners, Basics, Object Oriented Language, Methods, Overriding, Inheritance, Polymorphism, Interfaces, Packages, Collections, Networking, Multithreading, Generics, Multimedia, Serialization, GUI." />
<base href="https://www.tutorialspoint.com/" />
<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon" />
<meta name="viewport" content="width=device-width,initial-scale=1.0,user-scalable=yes">
<meta property="og:locale" content="en_US" />
<meta property="og:type" content="website" />
<meta property="fb:app_id" content="471319149685276" />
<meta property="og:site_name" content="www.tutorialspoint.com" />
<meta name="robots" content="index, follow"/>
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<meta name="author" content="tutorialspoint.com">
<script type="text/javascript" src="/theme/js/script-min-v4.js"></script>
What is the difference between an int and a long in C++?

The datatype int is used to store integer values. It can be signed or unsigned. On most modern platforms an int is 32 bits (4 bytes), although the C++ standard only guarantees at least 16 bits. It requires no more memory than a long to store a value. The keyword “int” is used to declare an integer variable.
The following is the syntax of int datatype.
int variable_name;
Here,
variable_name − The name of variable given by user.
The following is an example of int datatype.
#include <iostream>
using namespace std;
int main() {
int a = 8;
int b = 10;
int c = a+b;
cout << "The value of c : " << c;
return 0;
}
The value of c : 18
The datatype long is used to store long integer values. It can be signed or unsigned. The C++ standard guarantees a long at least 32 bits; it is 64 bits (8 bytes) on most 64-bit Linux and macOS systems (LP64), but only 32 bits on 64-bit Windows (LLP64). It requires at least as much memory as an int to store a value. The keyword “long” is used to declare a long integer variable.
The following is the syntax of long datatype.
long variable_name;
Here,
variable_name − The name of variable given by user.
The following is an example of long datatype.
#include <iostream>
using namespace std;
int main() {
int a = 8;
long b = 28;
long c = long(a+b);
cout << "The value of c : " << c;
return 0;
}
The value of c : 36
Deep Clustering for Sparse Data. A rather “shallow” and simple approach... | by Alon Agmon | Towards Data Science

We usually cluster data in order to find or learn about relationships and structures that exist within it, especially where the data is too complex or too high-dimensional for simple descriptive statistics. When it comes to highly dimensional data, it is sometimes striking how informative and useful a simple clustering algorithm can be. But clustering, or unsupervised learning more generally, is often much more challenging than choosing the right algorithm or approach (and that is not simple at all), because a good clustering result usually depends on how the data is organized and represented. Indeed, I have seen more clustering problems solved with a creative way of representing or organizing the same dataset than with some innovative algorithm.
One of the most dominant features of neural networks (if not *the* most dominant one) is their ability to efficiently learn ways to represent highly dimensional data in a manner that reveals exactly those complex relationships and structures that characterize it. This is why neural networks are used so extensively for dimensionality reduction and pattern recognition tasks. Unsurprisingly, over the past few years it has been also suggested and shown that neural networks can assist clustering tasks by providing a better (or more informative) representation of data before it is clustered (See here for some nice overview of DEC and DCN — two of the most dominant approaches).
Many of the proposed architectures that follow this approach are based on some form of preliminary encoding-decoding task (or an autoencoder) which is aimed at creating a more refined representation of the data before it is clustered. Very briefly and simply put, an autoencoder is a network that learns to reconstruct the input that it receives by creating lower-dimensional representations of the data, which (if done right) preserves the most important information about it, and then uses these lower dimensional representations to reconstruct it. The reason that autoencoders are suitable for this task is that if they do a good job in reconstructing their input, then this means that they have learned a lower dimensional representation of the data that probably well captures the structures and relationships within it, and, therefore, using these representations in order to cluster the data will yield a better result.
Unfortunately, many of these proposed architectures are not at all obvious or simple to implement, certainly not for something like a practical POC, and some require relatively high resources compared to their value. However, a couple of months ago I stumbled upon the paper “N2D: (not too) deep clustering via clustering the local manifold of an autoencoded embedding”. The paper, which didn’t seem to gain much traction, essentially suggests a much simpler and rather easy-to-implement approach to deep clustering. After experimenting with it for a while and getting some very nice results, I thought it might be a good idea to create this brief and practical intro that will allow you to quickly test it and put it to work.
The idea, in short, is based on 3 stages:
(1) Create an autoencoder that will learn lower dimensional representation of the data, which will hopefully capture the most important information and structures within it;
(2) Apply a manifold learning method (such as UMAP or TSNE) to further reduce the dimensions of the data and create finer representations that will improve the performance of the clustering algorithm.
(3) Cluster the data.
I will leave the theory and explanation to the paper (which actually has a lot of practical examples) and delve straight to the slightly modified and shorter implementation, which is mostly based on the code published by authors.
The data I will be using is a rather sparse matrix of users and the number of times they have visited different UI areas of a certain website. My goal is to cluster users into groups, using their activities on the website, in order to better understand their preferences and behavior as a group. The data is normalized on the vector level, has 50 dimensions/features and about 20K users. You can see an excerpt below.
The first thing we’ll do is create the architecture of the autoencoder. Following the architecture presented in the paper, the autoencoder will first expand the number of dimensions and then create a bottleneck which reduces the dimensions to 10 (a common practice with autoencoders; see here).

The autoencoder can be easily constructed using the following helper function (also given in the paper). The function simply takes the encoding network structure that you want to create as a list (in our case that will be [50, 500, 500, 2000, 10], but this needs to be experimented with, and you can certainly use a smaller one) and returns a model.
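The helper function’s code seems to have been lost in formatting here, so the sketch below is my reconstruction from the description — a symmetric stack of dense ReLU layers with a linear bottleneck, with layer names following the `encoder_<i>` convention that the next snippet depends on. It is not necessarily the paper’s exact code:

```python
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

def get_autoencoder(dims, act='relu'):
    # dims, e.g. [50, 500, 500, 2000, 10]: input dim, hidden layers, bottleneck
    n_stacks = len(dims) - 1
    x = Input(shape=(dims[0],), name='input')
    h = x
    # encoder: progressively map the input down to the bottleneck
    for i in range(n_stacks - 1):
        h = Dense(dims[i + 1], activation=act, name=f'encoder_{i}')(h)
    h = Dense(dims[-1], name=f'encoder_{n_stacks - 1}')(h)  # linear bottleneck
    # decoder: mirror the encoder back up to the input dimension
    for i in range(n_stacks - 1, 0, -1):
        h = Dense(dims[i], activation=act, name=f'decoder_{i}')(h)
    h = Dense(dims[0], name='decoder_0')(h)
    return Model(inputs=x, outputs=h, name='autoencoder')
```

With shape [50, 500, 500, 2000, 10] this yields a bottleneck layer named 'encoder_3', which is exactly what the `encoder_{len(shape) - 2}` lookup later in the post expects.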
So the following piece of code gives us a full autoencoder that will encode a 50 dim dataset to 10 dimensions.
# X = matrix shown above
encoded_dimensions = 10
shape = [X.shape[-1], 500, 500, 2000, encoded_dimensions]
autoencoder = get_autoencoder(shape)
Recall that the dense layer we are interested in is the “bottleneck” layer which captures the lower (10D) representation of our data (highlighted above). More accurately, what we need here is just the encoder layers which we shall use to create the lower dimensional embedding of our data. So the plan is to train the autoencoder and then just use the encoder layers (with the trained weights) to refine our data. (It sounds complex but if you check the notebook on my repo you’ll see that it's more or less straightforward)
# this will just give us the label we gave the encoder layer: 'encoder_3'
encoded_layer = f'encoder_{(len(shape) - 2)}'

# take the layer that we are interested in
hidden_encoder_layer = autoencoder.get_layer(name=encoded_layer).output

# create just the encoder model, which we can use after the autoencoder is trained
encoder = Model(inputs=autoencoder.input, outputs=hidden_encoder_layer)
So, now all that is left is to fit() the autoencoder and then use the encoder that already references its weights to encode our data to a lower dimensional representation.
autoencoder.compile(loss='mse', optimizer='adam')
autoencoder.fit(
    X_train, X_train,
    batch_size=batch_size,
    epochs=pretrain_epochs,
    verbose=1,
    validation_data=(X_test, X_test))

# use the weights learned by the encoder to encode the data to a representation (embedding)
X_encoded = encoder.predict(X)
The second stage is to use manifold learning to refine the clusters and further reduce the dimensions. The reason to add a manifold learning task on top of the encoded embedding, the authors write, is that an autoencoder usually does not take local structures into account, and that “by augmenting the autoencoder with a manifold learning technique which explicitly takes local structure into account, we can increase the quality of the representation learned in terms of clusterability.” Although there are good reasons in favor of and against using manifold learning as a preprocessing step before clustering, I believe that the authors are correct (and they show it) that it will probably improve the performance of density-based algorithms and help create denser “neighborhoods.” But this is really something that you need to experiment with on a case-by-case basis.

In this case, I will follow the authors of the paper and use UMAP for this task, via the (slightly modified) helper function below.
X_reduced = learn_manifold(X_encoded, umap_neighbors=30, umap_dim=int(encoded_dimensions/2))
And now we should have our dataset ready to be clustered.

The next step is pretty much straightforward: just pick your favorite density-based clustering method and let it run over the data. Just note that when the data is to be visualized in 2D, it is better to reduce the autoencoder embedding to two dimensions for plotting, rather than the post-manifold-learning embedding. The data should now be lumped together in sufficiently dense form to make it easier to cluster.

# this is the data that we need to cluster
labels = hdbscan.HDBSCAN(
    min_samples=100,
    min_cluster_size=1000).fit_predict(X_reduced)

# reduce the (pre-manifold) embedding to 2d so we can plot it
reducer = umap.UMAP(n_components=2)
embedding = reducer.fit_transform(X_encoded)
As a comparison, this is the result of running HDBSCAN on the data before it was further refined using manifold learning. As you can see, with exactly the same hyperparameters, HDBSCAN only picked up 2 main clusters, probably due to less refined neighborhoods compared to the ones produced by UMAP.

It sounds intuitive, but researchers sometimes forget that a good clustering result often depends more on how the data is organized or represented than on the chosen clustering algorithm and its hyperparameters. It is much more challenging to find ways to reorganize a complex dataset, but in most cases it pays off.
Deep learning methods usually excel at efficiently learning and producing embedded representations of data, and this is why they are sometimes used as a pre-processing stage for clustering, aimed at creating a lower-dimensional and more clusterable representation of the data. Although deep learning approaches to clustering are often more complex to implement, when the data is highly dimensional the performance gains they offer cannot be overlooked.

In this post, I have tried to create a brief and very practical intro to a rather simple and straightforward approach (named N2D by its authors). Although N2D is similarly aimed at producing a more clustering-friendly encoded embedding of highly dimensional data using DL, I have shown that, unlike other approaches, N2D can yield pretty impressive results and performance gains with a much simpler architecture and relatively little implementation effort. Although my dataset was based on sparse data, which presents a distinct problem in its own right, the authors of the paper show pretty impressive results using this approach on many common clustering datasets. To sum up, I think that N2D is a very nice addition to the clustering toolbox, especially in the preliminary research stages, where it can quickly and efficiently yield some results to work with.
Hope this will be helpful.
The notebook is available in my repo here
Event Handling in Spring

You have seen in all the chapters that the core of Spring is the ApplicationContext, which manages the complete life cycle of the beans. The ApplicationContext publishes certain types of events when loading the beans. For example, a ContextStartedEvent is published when the context is started and a ContextStoppedEvent is published when the context is stopped.
Event handling in the ApplicationContext is provided through the ApplicationEvent class and ApplicationListener interface. Hence, if a bean implements the ApplicationListener, then every time an ApplicationEvent gets published to the ApplicationContext, that bean is notified.
Spring provides the following standard events −
ContextRefreshedEvent
This event is published when the ApplicationContext is either initialized or refreshed. This can also be raised using the refresh() method on the ConfigurableApplicationContext interface.
ContextStartedEvent
This event is published when the ApplicationContext is started using the start() method on the ConfigurableApplicationContext interface. You can poll your database or you can restart any stopped application after receiving this event.
ContextStoppedEvent
This event is published when the ApplicationContext is stopped using the stop() method on the ConfigurableApplicationContext interface. You can do any required housekeeping work after receiving this event.
ContextClosedEvent
This event is published when the ApplicationContext is closed using the close() method on the ConfigurableApplicationContext interface. A closed context reaches its end of life; it cannot be refreshed or restarted.
RequestHandledEvent
This is a web-specific event telling all beans that an HTTP request has been serviced.
Spring's event handling is single-threaded so if an event is published, until and unless all the receivers get the message, the processes are blocked and the flow will not continue. Hence, care should be taken when designing your application if the event handling is to be used.
To listen to a context event, a bean should implement the ApplicationListener interface, which has just one method, onApplicationEvent(). So let us write an example to see how events propagate and how you can run your own code in response to certain events.
Let us have a working Eclipse IDE in place and take the following steps to create a Spring application −
Here is the content of the HelloWorld.java file
package com.tutorialspoint;
public class HelloWorld {
   private String message;

   public void setMessage(String message){
      this.message = message;
   }
   public void getMessage(){
      System.out.println("Your Message : " + message);
   }
}
Following is the content of the CStartEventHandler.java file
package com.tutorialspoint;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextStartedEvent;
public class CStartEventHandler
   implements ApplicationListener<ContextStartedEvent>{

   public void onApplicationEvent(ContextStartedEvent event) {
      System.out.println("ContextStartedEvent Received");
   }
}
Following is the content of the CStopEventHandler.java file
package com.tutorialspoint;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextStoppedEvent;
public class CStopEventHandler
   implements ApplicationListener<ContextStoppedEvent>{

   public void onApplicationEvent(ContextStoppedEvent event) {
      System.out.println("ContextStoppedEvent Received");
   }
}
Following is the content of the MainApp.java file
package com.tutorialspoint;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public class MainApp {
   public static void main(String[] args) {
      ConfigurableApplicationContext context =
         new ClassPathXmlApplicationContext("Beans.xml");

      // Let us raise a start event.
      context.start();

      HelloWorld obj = (HelloWorld) context.getBean("helloWorld");
      obj.getMessage();

      // Let us raise a stop event.
      context.stop();
   }
}
Following is the configuration file Beans.xml
<?xml version = "1.0" encoding = "UTF-8"?>
<beans xmlns = "http://www.springframework.org/schema/beans"
   xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation = "http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

   <bean id = "helloWorld" class = "com.tutorialspoint.HelloWorld">
      <property name = "message" value = "Hello World!"/>
   </bean>

   <bean id = "cStartEventHandler" class = "com.tutorialspoint.CStartEventHandler"/>
   <bean id = "cStopEventHandler" class = "com.tutorialspoint.CStopEventHandler"/>

</beans>
Once you are done creating the source and bean configuration files, let us run the application. If everything is fine with your application, it will print the following message −
ContextStartedEvent Received
Your Message : Hello World!
ContextStoppedEvent Received
If you like, you can publish your own custom events, and later capture them to take any action you need. If you are interested in writing your own custom events, you can check Custom Events in Spring.
JavaScript | Function binding | 04 Jan, 2022
In JavaScript, function binding is done with the bind() method. bind() lets us set which object is bound to the this keyword when a function or method is invoked, so the same function can give different results for different objects; without it, the call may give the wrong result or throw an error while the code is executing.

The need for bind() usually arises when we use the this keyword in a method and then call that method through some reference other than its receiver object. In that case this is not bound to the object we expect it to be bound to, which results in errors in our program.
First, a simple program that prints the name referred to by this when the function printFunc() is invoked.
<script>
    var geeks = {
        name : "ABC",
        printFunc: function() {
            document.write(this.name);
        }
    }

    geeks.printFunc();
</script>
Output:
ABC
Here there is no problem accessing the name "ABC": this refers to the geeks object, so this.name resolves correctly. This is known as default binding.
Now see the below code,
<script>
    var geeks = {
        name : "ABC",
        printFunc: function() {
            document.write(this.name);
        }
    }

    var printFunc2 = geeks.printFunc;
    printFunc2();
</script>
Output:
(no output is produced by this code)
Here we created a new variable printFunc2 that refers to the function printFunc() of the object geeks. In this call the binding of this is lost, so no output is produced. To make sure the binding of this is not lost, we use the bind() method: it sets the context of this to a particular object, so the bound function can be called through any other variable as well.
Use bind() method in the previous example:
<script>
    var geeks = {
        name : "ABC",
        printFunc: function() {
            document.write(this.name);
        }
    }

    // bind() takes the object "geeks" as parameter
    var printFunc2 = geeks.printFunc.bind(geeks);
    printFunc2();
</script>
Output:
ABC
The bind() method creates a new function in which the this keyword refers to the object passed as its argument (geeks in the above case). This way, bind() enables calling a function with a specified this value.
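As an aside for readers coming from other languages (this comparison is not part of the original article): Python shows the opposite default. Assigning obj.method to a variable keeps the receiver bound, because attribute access creates a bound method object, so no explicit bind() step is needed:

```python
class Geeks:
    def __init__(self, name):
        self.name = name

    def print_func(self):
        # 'self' plays the role that 'this' plays in JavaScript.
        return self.name


g = Geeks("ABC")
f = g.print_func   # bound method: 'self' is already fixed to g
print(f())         # ABC -- the binding is not lost, unlike the JS example
```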
Example: In this example there are 3 objects, and we call each of them in turn using the bind() method.
<script>
    // object geeks1
    var geeks1 = {
        name : "ABC",
        article : "C++"
    }

    // object geeks2
    var geeks2 = {
        name : "CDE",
        article : "JAVA"
    }

    // object geeks3
    var geeks3 = {
        name : "IJK",
        article : "C#"
    }

    function printVal() {
        document.write(this.name + " contributes about " + this.article + "<br>");
    }

    // bind() takes the object "geeks1" as parameter
    var printFunc2 = printVal.bind(geeks1);
    printFunc2();

    var printFunc3 = printVal.bind(geeks2);
    printFunc3();

    var printFunc4 = printVal.bind(geeks3);
    printFunc4();

    // uniquely defines each object
</script>
Output:
ABC contributes about C++
CDE contributes about JAVA
IJK contributes about C#
Program for conversion of 32 Bits Single Precision IEEE 754 Floating Point Representation | 04 Aug, 2021
Pre-requisite: IEEE Standard 754 Floating Point Numbers

Write a program to find the 32-bit single-precision IEEE 754 floating-point representation of a given real value, and vice versa.
Examples:
Input: real number = 16.75
Output: 0 | 10000011 | 00001100000000000000000
Input: floating point number = 0 | 10000011 | 00001100000000000000000
Output: 16.75
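These expected outputs can be cross-checked independently with Python's struct module, which exposes the raw IEEE 754 bytes of a float. This is only a verification sketch, not the bit-field approach the article implements:

```python
import struct

def ieee754_bits(x: float) -> str:
    # Pack the value as a big-endian 32-bit IEEE 754 float, then
    # reinterpret the 4 bytes as an unsigned int to get the raw bits.
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    bits = f"{n:032b}"
    # Split into sign | exponent | mantissa for readability.
    return f"{bits[0]} | {bits[1:9]} | {bits[9:]}"

print(ieee754_bits(16.75))  # 0 | 10000011 | 00001100000000000000000
print(ieee754_bits(-2.25))  # 1 | 10000000 | 00100000000000000000000
```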
Approach: This implementation is based on the union datatype in C and uses the concept of bit fields. Bit fields are used when we do not require the full memory usually allocated to a variable and want to limit the amount of memory it takes up. In C, members of a union share a common memory space, and we can access the members only one at a time. Below is the implementation of the above approach.

Program 1: Convert a real value to its floating-point representation
C++
C
Python3
// C++ program to convert a real value
// to IEEE 754 floating point representation
#include <bits/stdc++.h>
using namespace std;

void printBinary(int n, int i)
{
    // Prints the binary representation
    // of a number n up to i-bits.
    int k;
    for (k = i - 1; k >= 0; k--) {
        if ((n >> k) & 1)
            cout << "1";
        else
            cout << "0";
    }
}

typedef union {
    float f;
    struct {
        // Order is important.
        // Here the members of the union data structure
        // use the same memory (32 bits).
        // The ordering is taken
        // from the LSB to the MSB.
        unsigned int mantissa : 23;
        unsigned int exponent : 8;
        unsigned int sign : 1;
    } raw;
} myfloat;

// Function to convert real value
// to IEEE floating point representation
void printIEEE(myfloat var)
{
    // Prints the IEEE 754 representation
    // of a float value (32 bits)
    cout << var.raw.sign << " | ";
    printBinary(var.raw.exponent, 8);
    cout << " | ";
    printBinary(var.raw.mantissa, 23);
    cout << "\n";
}

// Driver Code
int main()
{
    // Instantiate the union
    myfloat var;

    // Get the real value
    var.f = -2.25;

    // Get the IEEE floating point representation
    cout << "IEEE 754 representation of ";
    cout << fixed << setprecision(6) << var.f << " is : " << endl;
    printIEEE(var);

    return 0;
}

// This code is contributed by shubhamsingh10
// C program to convert a real value
// to IEEE 754 floating point representation

#include <stdio.h>

void printBinary(int n, int i)
{
    // Prints the binary representation
    // of a number n up to i-bits.
    int k;
    for (k = i - 1; k >= 0; k--) {
        if ((n >> k) & 1)
            printf("1");
        else
            printf("0");
    }
}

typedef union {
    float f;
    struct {
        // Order is important.
        // Here the members of the union data structure
        // use the same memory (32 bits).
        // The ordering is taken
        // from the LSB to the MSB.
        unsigned int mantissa : 23;
        unsigned int exponent : 8;
        unsigned int sign : 1;
    } raw;
} myfloat;

// Function to convert real value
// to IEEE floating point representation
void printIEEE(myfloat var)
{
    // Prints the IEEE 754 representation
    // of a float value (32 bits)
    printf("%d | ", var.raw.sign);
    printBinary(var.raw.exponent, 8);
    printf(" | ");
    printBinary(var.raw.mantissa, 23);
    printf("\n");
}

// Driver Code
int main()
{
    // Instantiate the union
    myfloat var;

    // Get the real value
    var.f = -2.25;

    // Get the IEEE floating point representation
    printf("IEEE 754 representation of %f is : \n", var.f);
    printIEEE(var);

    return 0;
}
# Python program to convert a real value
# to IEEE 754 Floating Point Representation.

# Function to convert a
# fraction to binary form.
def binaryOfFraction(fraction):

    # Declaring an empty string
    # to store binary bits.
    binary = str()

    # Iterating through
    # fraction until it
    # becomes Zero.
    while (fraction):

        # Multiplying fraction by 2.
        fraction *= 2

        # Storing Integer Part of
        # Fraction in int_part.
        if (fraction >= 1):
            int_part = 1
            fraction -= 1
        else:
            int_part = 0

        # Adding int_part to binary
        # after every iteration.
        binary += str(int_part)

    # Returning the binary string.
    return binary

# Function to get sign bit,
# exp bits and mantissa bits,
# from given real no.
def floatingPoint(real_no):

    # Setting Sign bit
    # default to zero.
    sign_bit = 0

    # Sign bit will set to
    # 1 for negative no.
    if(real_no < 0):
        sign_bit = 1

    # converting given no. to
    # absolute value as we have
    # already set the sign bit.
    real_no = abs(real_no)

    # Converting Integer Part
    # of Real no to Binary
    int_str = bin(int(real_no))[2 : ]

    # Function call to convert
    # Fraction part of real no
    # to Binary.
    fraction_str = binaryOfFraction(real_no - int(real_no))

    # Getting the index where
    # Bit was high for the first
    # Time in binary repres
    # of Integer part of real no.
    ind = int_str.index('1')

    # The Exponent is the no.
    # By which we have right
    # Shifted the decimal and
    # it is given below.
    # Also converting it to bias
    # exp by adding 127.
    exp_str = bin((len(int_str) - ind - 1) + 127)[2 : ]

    # getting mantissa string
    # By adding int_str and fraction_str.
    # the zeroes in MSB of int_str
    # have no significance so they
    # are ignored by slicing.
    mant_str = int_str[ind + 1 : ] + fraction_str

    # Adding Zeroes in LSB of
    # mantissa string so as to make
    # it's length of 23 bits.
    mant_str = mant_str + ('0' * (23 - len(mant_str)))

    # Returning the sign, Exp
    # and Mantissa Bit strings.
    return sign_bit, exp_str, mant_str

# Driver Code
if __name__ == "__main__":

    # Function call to get
    # Sign, Exponent and
    # Mantissa Bit Strings.
    sign_bit, exp_str, mant_str = floatingPoint(-2.250000)

    # Final Floating point Representation.
    ieee_32 = str(sign_bit) + '|' + exp_str + '|' + mant_str

    # Printing the ieee 32 representation.
    print("IEEE 754 representation of -2.250000 is :")
    print(ieee_32)
IEEE 754 representation of -2.250000 is :
1 | 10000000 | 00100000000000000000000
Program 2: Convert a floating point representation to its real value
C++
C
Python3
// C++ program to convert
// IEEE 754 floating point representation
// into real value

#include <bits/stdc++.h>
using namespace std;

typedef union {
    float f;
    struct {
        // Order is important.
        // Here the members of the union data structure
        // use the same memory (32 bits).
        // The ordering is taken
        // from the LSB to the MSB.
        unsigned int mantissa : 23;
        unsigned int exponent : 8;
        unsigned int sign : 1;
    } raw;
} myfloat;

// Function to convert a binary array
// to the corresponding integer
unsigned int convertToInt(unsigned int* arr, int low, int high)
{
    unsigned int f = 0, i;
    for (i = high; i >= low; i--) {
        f = f + arr[i] * pow(2, high - i);
    }
    return f;
}

// Driver Code
int main()
{
    // Get the 32-bit floating point number
    unsigned int ieee[32] = { 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                              1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                              0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

    myfloat var;

    // Convert the least significant
    // mantissa part (23 bits)
    // to corresponding decimal integer
    unsigned int f = convertToInt(ieee, 9, 31);

    // Assign integer representation of mantissa
    var.raw.mantissa = f;

    // Convert the exponent part (8 bits)
    // to a corresponding decimal integer
    f = convertToInt(ieee, 1, 8);

    // Assign integer representation
    // of the exponent
    var.raw.exponent = f;

    // Assign sign bit
    var.raw.sign = ieee[0];

    cout << "The float value of the given"
            " IEEE-754 representation is : \n";
    cout << fixed << setprecision(6) << var.f << endl;

    return 0;
}

// This code is contributed by ShubhamSingh10
// C program to convert
// IEEE 754 floating point representation
// into real value

#include <math.h>
#include <stdio.h>

typedef union {
    float f;
    struct {
        // Order is important.
        // Here the members of the union data structure
        // use the same memory (32 bits).
        // The ordering is taken
        // from the LSB to the MSB.
        unsigned int mantissa : 23;
        unsigned int exponent : 8;
        unsigned int sign : 1;
    } raw;
} myfloat;

// Function to convert a binary array
// to the corresponding integer
unsigned int convertToInt(unsigned int* arr, int low, int high)
{
    unsigned f = 0, i;
    for (i = high; i >= low; i--) {
        f = f + arr[i] * pow(2, high - i);
    }
    return f;
}

// Driver Code
int main()
{
    // Get the 32-bit floating point number
    unsigned int ieee[32] = { 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                              1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                              0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

    myfloat var;

    // Convert the least significant
    // mantissa part (23 bits)
    // to corresponding decimal integer
    unsigned f = convertToInt(ieee, 9, 31);

    // Assign integer representation of mantissa
    var.raw.mantissa = f;

    // Convert the exponent part (8 bits)
    // to a corresponding decimal integer
    f = convertToInt(ieee, 1, 8);

    // Assign integer representation
    // of the exponent
    var.raw.exponent = f;

    // Assign sign bit
    var.raw.sign = ieee[0];

    printf("The float value of the given"
           " IEEE-754 representation is : \n");
    printf("%f", var.f);
}
# Python program to convert
# IEEE 754 floating point representation
# into real value

# Function to convert Binary
# of Mantissa to float value.
def convertToInt(mantissa_str):

    # variable to make a count
    # of negative power of 2.
    power_count = -1

    # variable to store
    # float value of mantissa.
    mantissa_int = 0

    # Iterations through binary
    # Number. Standard form of
    # Mantissa is 1.M so we have
    # 0.M therefore we are taking
    # negative powers on 2 for
    # conversion.
    for i in mantissa_str:

        # Adding converted value of
        # Binary bits in every
        # iteration to float mantissa.
        mantissa_int += (int(i) * pow(2, power_count))

        # count will decrease by 1
        # as we move toward right.
        power_count -= 1

    # returning mantissa in 1.M form.
    return (mantissa_int + 1)

if __name__ == "__main__":

    # Floating Point Representation
    # to be converted into real
    # value.
    ieee_32 = '1|10000000|00100000000000000000000'

    # First bit will be sign bit.
    sign_bit = int(ieee_32[0])

    # Next 8 bits will be
    # Exponent Bits in Biased
    # form.
    exponent_bias = int(ieee_32[2 : 10], 2)

    # In 32 Bit format bias
    # value is 127 so to have
    # unbiased exponent
    # subtract 127.
    exponent_unbias = exponent_bias - 127

    # Next 23 Bits will be
    # Mantissa (1.M format)
    mantissa_str = ieee_32[11 : ]

    # Function call to convert
    # 23 binary bits into
    # 1.M real no. form
    mantissa_int = convertToInt(mantissa_str)

    # The final real no. obtained
    # by sign bit, mantissa and
    # Exponent.
    real_no = pow(-1, sign_bit) * mantissa_int * pow(2, exponent_unbias)

    # Printing the obtained
    # Real value of floating
    # Point Representation.
    print("The float value of the given IEEE-754 representation is :", real_no)
The float value of the given IEEE-754 representation is :
-2.250000
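The printed result can be verified independently with Python's struct module, which reinterprets the 32 bits as a float directly. This is only a verification sketch, not the article's bit-field approach:

```python
import struct

def bits_to_float(s: str) -> float:
    # Strip the "sign|exponent|mantissa" separators, parse the
    # 32 bits as an unsigned int, and reinterpret the bytes as float.
    raw = int(s.replace("|", "").replace(" ", ""), 2)
    (x,) = struct.unpack(">f", struct.pack(">I", raw))
    return x

print(bits_to_float("1|10000000|00100000000000000000000"))  # -2.25
```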
Find Maximum number possible by doing at-most K swaps | 10 Jun, 2022
Given a positive integer, find the maximum integer possible by doing at-most K swap operations on its digits.

Examples:
Input: M = 254, K = 1
Output: 524
Swap 5 with 2 so number becomes 524
Input: M = 254, K = 2
Output: 542
Swap 5 with 2 so number becomes 524
Swap 4 with 2 so number becomes 542
Input: M = 68543, K = 1
Output: 86543
Swap 8 with 6 so number becomes 86543
Input: M = 7599, K = 2
Output: 9975
Swap 9 with 5 so number becomes 7995
Swap 9 with 7 so number becomes 9975
Input: M = 76543, K = 1
Output: 76543
Explanation: No swap is required.
Input: M = 129814999, K = 4
Output: 999984211
Swap 9 with 1 so number becomes 929814991
Swap 9 with 2 so number becomes 999814291
Swap 9 with 8 so number becomes 999914281
Swap 1 with 8 so number becomes 999984211
Naive Solution:

Approach: The idea is to consider every digit, swap it with each digit following it one at a time, and see if that leads to the maximum number. The process is repeated K times. The code can be further optimized by swapping only when the current digit is less than a following digit.

Algorithm:
Create a global variable which will store the maximum string or number.
Define a recursive function that takes the string as number and value of k
Run a nested loop, the outer loop from 0 to length of string -1 and inner loop from i+1 to end of the string.
Swap the ith and jth character and check if the string is now maximum and update the maximum string.
Call the function recursively with parameters: string and k-1.
Now again swap back the ith and jth character.
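The steps above can be sanity-checked against a brute-force search that enumerates every string reachable with at most K swaps. This sketch is exponential and meant only for validating small inputs, not as a replacement for the implementations in this article:

```python
from itertools import combinations

def max_after_k_swaps(num: str, k: int) -> str:
    # Breadth-first expansion: after d iterations, `frontier` holds
    # every string reachable with exactly d swaps; the answer is the
    # maximum seen over all levels (i.e. over "at most k" swaps).
    frontier = {num}
    best = num
    for _ in range(k):
        nxt = set()
        for s in frontier:
            for i, j in combinations(range(len(s)), 2):
                t = list(s)
                t[i], t[j] = t[j], t[i]
                nxt.add("".join(t))
        if not nxt:  # fewer than 2 digits: nothing to swap
            break
        frontier = nxt
        best = max(best, max(frontier))
    return best

print(max_after_k_swaps("254", 1))        # 524
print(max_after_k_swaps("129814999", 4))  # 999984211
```

Deduplicating with a set keeps the frontier bounded by the number of distinct digit permutations, which is what makes this usable on the article's examples.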
C++
Java
Python3
C#
Javascript
// C++ program to find maximum
// integer possible by doing
// at-most K swap operations
// on its digits.
#include <bits/stdc++.h>
using namespace std;

// Function to find maximum
// integer possible by
// doing at-most K swap
// operations on its digits
void findMaximumNum(string str, int k, string& max)
{
    // Return if no swaps left
    if (k == 0)
        return;

    int n = str.length();

    // Consider every digit
    for (int i = 0; i < n - 1; i++) {

        // Compare it with all digits after it
        for (int j = i + 1; j < n; j++) {

            // if digit at position i
            // is less than digit
            // at position j, swap it
            // and check for maximum
            // number so far and recurse
            // for remaining swaps
            if (str[i] < str[j]) {

                // swap str[i] with str[j]
                swap(str[i], str[j]);

                // If current num is more
                // than maximum so far
                if (str.compare(max) > 0)
                    max = str;

                // recurse of the other k - 1 swaps
                findMaximumNum(str, k - 1, max);

                // Backtrack
                swap(str[i], str[j]);
            }
        }
    }
}

// Driver code
int main()
{
    string str = "129814999";
    int k = 4;

    string max = str;
    findMaximumNum(str, k, max);

    cout << max << endl;

    return 0;
}
// Java program to find maximum// integer possible by doing// at-most K swap operations// on its digits.import java.util.*;class GFG{ static String max;// Function to find maximum// integer possible by// doing at-most K swap// operations on its digitsstatic void findMaximumNum(char[] str, int k){ // Return if no swaps left if (k == 0) return; int n = str.length; // Consider every digit for (int i = 0; i < n - 1; i++) { // Compare it with all digits // after it for (int j = i + 1; j < n; j++) { // if digit at position i // is less than digit // at position j, swap it // and check for maximum // number so far and recurse // for remaining swaps if (str[i] < str[j]) { // swap str[i] with // str[j] char t = str[i]; str[i] = str[j]; str[j] = t; // If current num is more // than maximum so far if (String.valueOf(str).compareTo(max) > 0) max = String.valueOf(str); // recurse of the other // k - 1 swaps findMaximumNum(str, k - 1); // Backtrack char c = str[i]; str[i] = str[j]; str[j] = c; } } }} // Driver codepublic static void main(String[] args){ String str = "129814999"; int k = 4; max = str; findMaximumNum(str.toCharArray(), k); System.out.print(max + "\n");}} // This code is contributed by 29AjayKumar
# Python3 program to find maximum
# integer possible by doing at-most
# K swap operations on its digits.

# utility function to swap two
# characters of a string
def swap(string, i, j):
    return (string[:i] + string[j] +
            string[i + 1:j] + string[i] +
            string[j + 1:])

# function to find maximum integer
# possible by doing at-most K swap
# operations on its digits
def findMaximumNum(string, k, maxm):

    # return if no swaps left
    if k == 0:
        return

    n = len(string)

    # consider every digit
    for i in range(n - 1):

        # and compare it with all digits after it
        for j in range(i + 1, n):

            # if digit at position i is less than
            # digit at position j, swap it and
            # check for maximum number so far and
            # recurse for remaining swaps
            if string[i] < string[j]:

                # swap string[i] with string[j]
                string = swap(string, i, j)

                # If current num is more than
                # maximum so far
                if string > maxm[0]:
                    maxm[0] = string

                # recurse of the other k - 1 swaps
                findMaximumNum(string, k - 1, maxm)

                # backtrack
                string = swap(string, i, j)

# Driver Code
if __name__ == "__main__":

    string = "129814999"
    k = 4

    maxm = [string]
    findMaximumNum(string, k, maxm)

    print(maxm[0])

# This code is contributed
# by vibhu4agarwal
// C# program to find maximum// integer possible by doing// at-most K swap operations// on its digits.using System;class GFG{ static String max;// Function to find maximum// integer possible by// doing at-most K swap// operations on its digitsstatic void findMaximumNum(char[] str, int k){ // Return if no swaps left if (k == 0) return; int n = str.Length; // Consider every digit for (int i = 0; i < n - 1; i++) { // Compare it with all digits // after it for (int j = i + 1; j < n; j++) { // if digit at position i // is less than digit // at position j, swap it // and check for maximum // number so far and recurse // for remaining swaps if (str[i] < str[j]) { // swap str[i] with // str[j] char t = str[i]; str[i] = str[j]; str[j] = t; // If current num is more // than maximum so far if (String.Join("", str).CompareTo(max) > 0) max = String.Join("", str); // recurse of the other // k - 1 swaps findMaximumNum(str, k - 1); // Backtrack char c = str[i]; str[i] = str[j]; str[j] = c; } } }} // Driver codepublic static void Main(String[] args){ String str = "129814999"; int k = 4; max = str; findMaximumNum(str.ToCharArray(), k); Console.Write(max + "\n");}} // This code is contributed by gauravrajput1
<script>// Javascript program to find maximum// integer possible by doing// at-most K swap operations// on its digits. let max; // Function to find maximum// integer possible by// doing at-most K swap// operations on its digitsfunction findMaximumNum(str,k){ // Return if no swaps left if (k == 0) return; let n = str.length; // Consider every digit for (let i = 0; i < n - 1; i++) { // Compare it with all digits // after it for (let j = i + 1; j < n; j++) { // if digit at position i // is less than digit // at position j, swap it // and check for maximum // number so far and recurse // for remaining swaps if (str[i] < str[j]) { // swap str[i] with // str[j] let t = str[i]; str[i] = str[j]; str[j] = t; // If current num is more // than maximum so far if ((str).join("")>(max) ) max = (str).join(""); // recurse of the other // k - 1 swaps findMaximumNum(str, k - 1); // Backtrack let c = str[i]; str[i] = str[j]; str[j] = c; } } }} // Driver codelet str = "129814999";let k = 4;max = str;findMaximumNum(str.split(""), k);document.write(max + "<br>"); // This code is contributed by unknown2108</script>
999984211
Complexity Analysis:
Time Complexity: O((n^2)^k). Every recursive call generates n^2 further recursive calls until the value of k reaches 0, so the total number of recursive calls is O((n^2)^k).
Space Complexity: O(n). This is the space required to store the output string.
Efficient Solution:

Approach: The above approach traverses the whole string at each recursive call, which is highly inefficient and unnecessary. Pre-computing the maximum digit after the current position also avoids unnecessary exchanges with every digit. Observe that to make the maximum string, the maximum digit should be shifted to the front. So, instead of trying all pairs, try only those pairs where one of the elements is the maximum digit that has not yet been swapped to the front. This gives an improvement of 27580 microseconds for each test case.

Algorithm:
Create a global variable which will store the maximum string or number.
Define a recursive function that takes the string as a number, the value of k, and the current index.
Find the index of the maximum element in the range current index to end.
if the index of the maximum element is not equal to the current index then decrement the value of k.
Run a loop from the current index to the end of the array
If the ith digit is equal to the maximum element
Swap the ith and element at the current index and check if the string is now maximum and update the maximum string.
Call the function recursively with parameters: string and k.
Now again swap back the ith and element at the current index.
C++
Java
Python3
C#
// C++ program to find maximum
// integer possible by doing
// at-most K swap operations on
// its digits.
#include <bits/stdc++.h>
using namespace std;

// Function to find maximum
// integer possible by
// doing at-most K swap operations
// on its digits
void findMaximumNum(string str, int k, string& max, int ctr)
{
    // return if no swaps left
    if (k == 0)
        return;

    int n = str.length();

    // Consider every digit after
    // the cur position
    char maxm = str[ctr];
    for (int j = ctr + 1; j < n; j++) {

        // Find maximum digit greater
        // than at ctr among rest
        if (maxm < str[j])
            maxm = str[j];
    }

    // If maxm is not equal to str[ctr],
    // decrement k
    if (maxm != str[ctr])
        --k;

    // search this maximum among the rest from behind
    // first swap the last maximum digit if it occurs more than 1 time
    // example str = 1293498 and k = 1, then max string is 9293418 instead of 9213498
    for (int j = n - 1; j >= ctr; j--) {

        // If digit equals maxm swap
        // the digit with current
        // digit and recurse for the rest
        if (str[j] == maxm) {

            // swap str[ctr] with str[j]
            swap(str[ctr], str[j]);

            // If current num is more than
            // maximum so far
            if (str.compare(max) > 0)
                max = str;

            // recurse other swaps after cur
            findMaximumNum(str, k, max, ctr + 1);

            // Backtrack
            swap(str[ctr], str[j]);
        }
    }
}

// Driver code
int main()
{
    string str = "129814999";
    int k = 4;

    string max = str;
    findMaximumNum(str, k, max, 0);

    cout << max << endl;

    return 0;
}
// Java program to find maximum// integer possible by doing// at-most K swap operations on// its digits. import java.io.*;class Res { static String max = "";} class Solution { // Function to set highest possible digits at given // index. public static void findMaximumNum(char ar[], int k, Res r) { if (k == 0) return; int n = ar.length; for (int i = 0; i < n - 1; i++) { for (int j = i + 1; j < n; j++) { // if digit at position i is less than digit // at position j, we swap them and check for // maximum number so far. if (ar[j] > ar[i]) { char temp = ar[i]; ar[i] = ar[j]; ar[j] = temp; String st = new String(ar); // if current number is more than // maximum so far if (r.max.compareTo(st) < 0) { r.max = st; } // calling recursive function to set the // next digit. findMaximumNum(ar, k - 1, r); // backtracking temp = ar[i]; ar[i] = ar[j]; ar[j] = temp; } } } } // Function to find the largest number after k swaps. public static void main(String[] args) { String str = "129814999"; int k = 4; Res r = new Res(); r.max = str; findMaximumNum(str.toCharArray(), k, r); //Print the answer stored in res class System.out.println(r.max); }}
# Python3 program to find maximum
# integer possible by doing at-most
# K swap operations on its digits.

# function to find maximum integer
# possible by doing at-most K swap
# operations on its digits
def findMaximumNum(string, k, maxm, ctr):

    # return if no swaps left
    if k == 0:
        return

    n = len(string)

    # Consider every digit after
    # the cur position
    mx = string[ctr]
    for i in range(ctr + 1, n):

        # Find maximum digit greater
        # than at ctr among rest
        if int(string[i]) > int(mx):
            mx = string[i]

    # If mx is not equal to string[ctr],
    # decrement k
    if (mx != string[ctr]):
        k = k - 1

    # search this maximum among the rest from behind
    # first swap the last maximum digit if it occurs more than 1 time
    # example str = 1293498 and k = 1, then max string is 9293418 instead of 9213498
    for i in range(ctr, n):

        # If digit equals mx swap
        # the digit with current
        # digit and recurse for the rest
        if (string[i] == mx):

            # swap string[ctr] with string[i]
            string[ctr], string[i] = string[i], string[ctr]
            new_str = "".join(string)

            # If current num is more than
            # maximum so far
            if int(new_str) > int(maxm[0]):
                maxm[0] = new_str

            # recurse for the remaining swaps
            findMaximumNum(string, k, maxm, ctr + 1)

            # backtrack
            string[ctr], string[i] = string[i], string[ctr]

# Driver Code
if __name__ == "__main__":

    string = "129814999"
    k = 4

    maxm = [string]
    string = [char for char in string]

    findMaximumNum(string, k, maxm, 0)
    print(maxm[0])

# This code is contributed by Aarti_Rathi
// C# program to find the maximum
// integer possible by doing
// at-most K swap operations on
// its digits.
using System;

class Res {
    public String max = "";
}

public class Solution {
    // Function to set highest possible digits at a given index.
    static void findMaximumNum(char[] ar, int k, Res r)
    {
        if (k == 0)
            return;
        int n = ar.Length;
        for (int i = 0; i < n - 1; i++) {
            for (int j = i + 1; j < n; j++) {
                // If the digit at position i is less than the digit
                // at position j, we swap them and check for the
                // maximum number so far.
                if (ar[j] > ar[i]) {
                    char temp = ar[i];
                    ar[i] = ar[j];
                    ar[j] = temp;

                    String st = new String(ar);

                    // If the current number is more than
                    // the maximum so far
                    if (r.max.CompareTo(st) < 0) {
                        r.max = st;
                    }

                    // Calling the recursive function to set the
                    // next digit.
                    findMaximumNum(ar, k - 1, r);

                    // Backtracking
                    temp = ar[i];
                    ar[i] = ar[j];
                    ar[j] = temp;
                }
            }
        }
    }

    // Function to find the largest number after k swaps.
    public static void Main(String[] args)
    {
        String str = "129814999";
        int k = 4;

        Res r = new Res();
        r.max = str;
        findMaximumNum(str.ToCharArray(), k, r);

        // Print the answer stored in the Res class
        Console.WriteLine(r.max);
    }
}

// This code is contributed by shikhasingrajput
999984211
Complexity Analysis:
Time Complexity: O(n^k). Every recursive call can spawn up to n further recursive calls until the value of k reaches 0, so the total number of recursive calls is O(n^k).
Space Complexity: O(n). The space required to store the output string.
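Because the recursion above is easy to get subtly wrong, a brute-force check is handy. The sketch below (my own illustrative addition, not part of the original solution) explores every sequence of at most K swaps breadth-first; it is feasible only for short inputs, but it validates the answer for the example used in this article.

```python
from itertools import combinations

def max_after_k_swaps_bruteforce(s, k):
    """Exhaustively apply up to k swaps (BFS over digit strings).

    Only feasible for short inputs, but useful for validating
    the recursive solution on small cases.
    """
    best = s
    frontier = {s}
    for _ in range(k):
        nxt = set()
        for cur in frontier:
            for i, j in combinations(range(len(cur)), 2):
                lst = list(cur)
                lst[i], lst[j] = lst[j], lst[i]
                nxt.add("".join(lst))
        frontier = nxt
        # Strings of equal length compare lexicographically,
        # which matches numeric order here.
        best = max(best, max(frontier))
    return best

print(max_after_k_swaps_bruteforce("129814999", 4))  # 999984211, matching the article
```

Deduplicating the frontier with a set keeps the search tractable: a 9-digit string has at most a few thousand distinct permutations, so each BFS level stays small.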
Exercise:
Find minimum integer possible by doing at-least K swap operations on its digits.
Find maximum/minimum integer possible by doing exactly K swap operations on its digits.
This article is contributed by Aarti Rathi and Aditya Goel.
HTML | <input> multiple Attribute | 28 May, 2019
The HTML <input> multiple Attribute is a Boolean attribute. It specifies that the user is allowed to select more than one value present in an element. The multiple attribute works with many input fields, such as email, file, etc.
Syntax:
<input multiple>
Example:
<!DOCTYPE html>
<html>

<body>
    <center>
        <h1 style="color:green;font-style:italic;">
            GeeksForGeeks
        </h1>
        <h2 style="color:green;font-style:italic;">
            HTML input multiple Attribute
        </h2>
        <form action=" ">
            Select images:
            <input type="file" name="img" multiple>
            <input type="submit">
        </form>
    </center>
</body>

</html>
Output:
Supported Browsers:
Google Chrome 6.0
Firefox 3.6
Edge 10.0
Opera 11.0
Apple Safari 5.0
PHP file_get_contents() Function | 30 Nov, 2021
In this article, we will see how to read an entire file into a string using the file_get_contents() function, along with understanding its implementation through examples.

The file_get_contents() function in PHP is an inbuilt function that is used to read a file into a string. The function uses memory-mapping techniques that are supported by the server, which enhances performance and makes it a preferred way of reading the contents of a file. The path of the file to be read is sent as a parameter to the function, and it returns the read data on success and FALSE on failure.
Syntax:
file_get_contents($path, $include_path, $context,
$start, $max_length)
Parameters: The file_get_contents() function in PHP accepts one mandatory parameter and four optional parameters.
$path: It specifies the path of the file you want to read.
$include_path: It is an optional parameter; if it is set to 1, the function also searches for the file in the include_path (defined in php.ini).
$context: It is an optional parameter that is used to specify a custom context.
$start: It is an optional parameter that is used to specify the starting point in the file for reading.
$max_length: It is an optional parameter that is used to specify the number of bytes to be read.
Return Value: It returns the read data on success and FALSE on failure.
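For readers more comfortable with Python, a rough standard-library analogue (an illustrative sketch of mine, not an official equivalent) shows how the $start and $max_length parameters map onto seek() and read():

```python
def file_get_contents(path, start=0, max_length=None):
    # Rough Python analogue of PHP's
    # file_get_contents($path, ..., $start, $max_length):
    # read a file into a string, optionally from an offset
    # and capped at max_length characters.
    with open(path, "r") as f:
        f.seek(start)
        return f.read() if max_length is None else f.read(max_length)

# Mirror the article's gfg.txt example.
with open("gfg.txt", "w") as f:
    f.write("A computer science portal for geeks")

print(file_get_contents("gfg.txt", 0, 14))  # A computer sci
print(file_get_contents("gfg.txt"))         # A computer science portal for geeks
```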
Approach: For getting the file into a string, we will use the file_get_contents() function. In the first example, we will pass a URL as the argument, and the function will fetch the contents of the given site. For the second example, create a file named “gfg.txt” that contains the data. The function will read the file into a string and render the content accordingly.
Errors And Exceptions:
If you want to open a file with special characters, such as spaces, it needs to be encoded first using urlencode().
On failure, the file_get_contents() function returns Boolean FALSE, but it may also return a non-Boolean value which evaluates to FALSE.
An E_WARNING level error is generated if filename cannot be found, maxlength is less than zero, or if seeking the specified offset in the stream fails.
Consider the following example.
Input: file_get_contents('https://www.geeksforgeeks.org/');
Output: A computer science portal for geeks
Input: file_get_contents('gfg.txt', FALSE, NULL, 0, 14)
Output: A computer sci
Example 1: The example below illustrates the file_get_contents() function.
PHP
<!DOCTYPE html>
<html>

<body>
    <?php
        // Reading contents from the
        // geeksforgeeks homepage
        $homepage = file_get_contents(
            "https://www.geeksforgeeks.org/");
        echo $homepage;
    ?>
</body>

</html>
Output:
Example 2: This example illustrates getting the file into a string.
PHP
<!DOCTYPE html>
<html>

<body>
    <?php
        // Reading 36 bytes starting from
        // the 0th character from gfg.txt
        $text = file_get_contents('gfg.txt',
                    false, NULL, 0, 36);
        echo $text;
    ?>
</body>

</html>
Output:
A computer science portal for geeks
Reference: http://php.net/manual/en/function.file-get-contents.php
Sum of shortest distance on source to destination and back having at least a common vertex | 28 Jun, 2022
Given a directed weighted graph and a source and destination vertex, the task is to find the sum of the shortest distances on the path going from source to destination and then from destination back to source, such that both paths have at least one common vertex other than the source and the destination. Note: On going from destination to source, all the directions of the edges are reversed.

Examples:
Input: src = 0, des = 1
Output: 17
Explanation: The common vertex is 4 and the path is 0 -> 4 -> 3 -> 1 -> 4 -> 0
Approach: The idea is to use Dijkstra’s algorithm. On finding the shortest path from source to destination and shortest path from destination to the source using Dijkstra’s algorithm, it may not result in a path where there is at least one node in common except the source and destination vertex.
Let s be the source vertex, d the destination vertex, and v an intermediate node common to both the path from source to destination and the path from destination to source. The shortest pair of paths such that v lies in the intersection of the two paths is s -> v -> d -> v -> s, and its length is
dis[s][v] + dis[v][d] + dis[d][v] + dis[v][s]
Since s and d are fixed, we just need to find the v that minimizes this quantity.
In order to find such v, follow the below steps:
Find shortest distance from all vertices to s and d which gives us the values of dis[v][s] and dis[v][d]. For finding the shortest path from all the vertices to a given node refer Shortest paths from all vertices to a destination.
Find shortest distance of all vertex from s and d which gives us d[s][v] and d[d][v].
Iterate for all v and find minimum of d[s][v] + d[v][d] + d[d][v] + d[v][s].
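The three steps above can be sketched compactly with heapq before reading the full listing. The function and variable names below are my own, the edge list mirrors the driver code's addEdge calls, and distances *to* a vertex are obtained by running Dijkstra on the graph with every edge reversed:

```python
import heapq

def dijkstra(adj, src, n):
    # Standard Dijkstra over an adjacency list {u: [(v, w), ...]}.
    dist = [float("inf")] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def min_round_trip(n, edges, src, des):
    fwd, rev = {}, {}
    for u, v, w in edges:
        fwd.setdefault(u, []).append((v, w))
        rev.setdefault(v, []).append((u, w))  # reversed edges: distances *to* a vertex
    d_src = dijkstra(fwd, src, n)     # dist(src -> v)
    d_des = dijkstra(fwd, des, n)     # dist(des -> v)
    d_to_src = dijkstra(rev, src, n)  # dist(v -> src)
    d_to_des = dijkstra(rev, des, n)  # dist(v -> des)
    # Minimize dist(s,v) + dist(v,d) + dist(d,v) + dist(v,s)
    # over every candidate intermediate vertex v.
    return min(d_src[v] + d_to_des[v] + d_des[v] + d_to_src[v]
               for v in range(n) if v not in (src, des))

edges = [(0, 2, 1), (0, 4, 5), (1, 4, 1), (2, 0, 10), (2, 3, 5),
         (3, 1, 1), (4, 0, 5), (4, 2, 100), (4, 3, 5)]
print(min_round_trip(5, edges, 0, 1))  # 17, via common vertex 4
```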
Below is the implementation of the above approach:
CPP
Python3
// CPP implementation of the approach #include <bits/stdc++.h>using namespace std;#define INF 0x3f3f3f3f // iPair represents the Integer Pairtypedef pair<int, int> iPair; // This class represents// a directed graph using// adjacency list representationclass Graph { // Number of vertices int V; // In a weighted graph, store vertex // and weight pair for every edge list<pair<int, int> >* adj; public: // Constructor Graph(int V); // Function to add an edge to graph void addEdge(int u, int v, int w); // Find shortest path from // source vertex to all vertex void shortestPath(int src, vector<int>& dist);}; // Allocates memory for adjacency listGraph::Graph(int V){ this->V = V; adj = new list<iPair>[V];} // Function to add an edge to the graphvoid Graph::addEdge(int u, int v, int w){ adj[v].push_back(make_pair(u, w));} // Function to find the shortest paths// from source to all other verticesvoid Graph::shortestPath(int src, vector<int>& dist){ // Create a priority queue to // store vertices that // are being preprocessed priority_queue<iPair, vector<iPair>, greater<iPair> > pq; // Insert source itself in priority // queue and initialize // its distance as 0 pq.push(make_pair(0, src)); dist[src] = 0; // Loop till priority queue // becomes empty (or all // distances are not finalized) while (!pq.empty()) { // The first vertex in pair // is the minimum distance // vertex, extract it from // priority queue int u = pq.top().second; pq.pop(); // 'i' is used to get all // adjacent vertices of a vertex list<pair<int, int> >::iterator i; for (i = adj[u].begin(); i != adj[u].end(); ++i) { // Get vertex label and // weight of current // adjacent of u int v = (*i).first; int weight = (*i).second; // If there is shorted // path to v through u if (dist[v] > dist[u] + weight) { // Updating distance of v dist[v] = dist[u] + weight; pq.push(make_pair(dist[v], v)); } } }} // Function to return the// required minimum pathint minPath(int V, int src, int des, Graph g, Graph r){ // Create a 
vector for // distances and // initialize all distances // as infinite (INF) // To store distance of all // vertex from source vector<int> dist(V, INF); // To store distance of all // vertex from destination vector<int> dist2(V, INF); // To store distance of source // from all vertex vector<int> dist3(V, INF); // To store distance of // destination from all vertex vector<int> dist4(V, INF); // Computing shortest path from // source vertex to all vertices g.shortestPath(src, dist); // Computing shortest path from // destination vertex to all vertices g.shortestPath(des, dist2); // Computing shortest path from // all the vertices to source r.shortestPath(src, dist3); // Computing shortest path from // all the vertices to destination r.shortestPath(des, dist4); // Finding the intermediate node (IN) // such that the distance of path // src -> IN -> des -> IN -> src is minimum // To store the shortest distance int ans = INT_MAX; for (int i = 0; i < V; i++) { // Intermediate node should not be // the source and destination if (i != des && i != src) ans = min( ans, dist[i] + dist2[i] + dist3[i] + dist4[i]); } // Return the minimum path required return ans;} // Driver codeint main(){ // Create the graph int V = 5; int src = 0, des = 1; // To store the original graph Graph g(V); // To store the reverse graph // and compute distance from all // vertex to a particular vertex Graph r(V); // Adding edges g.addEdge(0, 2, 1); g.addEdge(0, 4, 5); g.addEdge(1, 4, 1); g.addEdge(2, 0, 10); g.addEdge(2, 3, 5); g.addEdge(3, 1, 1); g.addEdge(4, 0, 5); g.addEdge(4, 2, 100); g.addEdge(4, 3, 5); // Adding edges in reverse direction r.addEdge(2, 0, 1); r.addEdge(4, 0, 5); r.addEdge(4, 1, 1); r.addEdge(0, 2, 10); r.addEdge(3, 2, 5); r.addEdge(1, 3, 1); r.addEdge(0, 4, 5); r.addEdge(2, 4, 100); r.addEdge(3, 4, 5); cout << minPath(V, src, des, g, r); return 0;}
# Python implementation of the approachfrom typing import Listfrom queue import PriorityQueuefrom sys import maxsize as INT_MAXINF = 0x3f3f3f3f # This class represents# a directed graph using# adjacency list representationclass Graph: def __init__(self, V: int) -> None: # Number of vertices self.V = V # In a weighted graph, store vertex # and weight pair for every edge self.adj = [[] for _ in range(V)] # Function to add an edge to the graph def addEdge(self, u: int, v: int, w: int) -> None: self.adj[v].append((u, w)) # Function to find the shortest paths # from source to all other vertices def shortestPath(self, src: int, dist: List[int]) -> None: # Create a priority queue to # store vertices that # are being preprocessed pq = PriorityQueue() # Insert source itself in priority # queue and initialize # its distance as 0 pq.put((0, src)) dist[src] = 0 # Loop till priority queue # becomes empty (or all # distances are not finalized) while not pq.empty(): # The first vertex in pair # is the minimum distance # vertex, extract it from # priority queue u = pq.get()[1] # 'i' is used to get all # adjacent vertices of a vertex for i in self.adj[u]: # Get vertex label and # weight of current # adjacent of u v = i[0] weight = i[1] # If there is shorted # path to v through u if dist[v] > dist[u] + weight: # Updating distance of v dist[v] = dist[u] + weight pq.put((dist[v], v)) # Function to return the# required minimum pathdef minPath(V: int, src: int, des: int, g: Graph, r: Graph) -> int: # Create a vector for # distances and # initialize all distances # as infinite (INF) # To store distance of all # vertex from source dist = [INF for _ in range(V)] # To store distance of all # vertex from destination dist2 = [INF for _ in range(V)] # To store distance of source # from all vertex dist3 = [INF for _ in range(V)] # To store distance of # destination from all vertex dist4 = [INF for _ in range(V)] # Computing shortest path from # source vertex to all vertices g.shortestPath(src, 
dist) # Computing shortest path from # destination vertex to all vertices g.shortestPath(des, dist2) # Computing shortest path from # all the vertices to source r.shortestPath(src, dist3) # Computing shortest path from # all the vertices to destination r.shortestPath(des, dist4) # Finding the intermediate node (IN) # such that the distance of path # src -> IN -> des -> IN -> src is minimum # To store the shortest distance ans = INT_MAX for i in range(V): # Intermediate node should not be # the source and destination if (i != des and i != src): ans = min(ans, dist[i] + dist2[i] + dist3[i] + dist4[i]) # Return the minimum path required return ans # Driver codeif __name__ == "__main__": # Create the graph V = 5 src = 0 des = 1 # To store the original graph g = Graph(V) # To store the reverse graph # and compute distance from all # vertex to a particular vertex r = Graph(V) # Adding edges g.addEdge(0, 2, 1) g.addEdge(0, 4, 5) g.addEdge(1, 4, 1) g.addEdge(2, 0, 10) g.addEdge(2, 3, 5) g.addEdge(3, 1, 1) g.addEdge(4, 0, 5) g.addEdge(4, 2, 100) g.addEdge(4, 3, 5) # Adding edges in reverse direction r.addEdge(2, 0, 1) r.addEdge(4, 0, 5) r.addEdge(4, 1, 1) r.addEdge(0, 2, 10) r.addEdge(3, 2, 5) r.addEdge(1, 3, 1) r.addEdge(0, 4, 5) r.addEdge(2, 4, 100) r.addEdge(3, 4, 5) print(minPath(V, src, des, g, r)) # This code is contributed by sanjeev2552
17
Support Vector Machine Algorithm | 22 Jan, 2021
Support Vector Machine (SVM) is a supervised machine learning algorithm used for both classification and regression. Though it can be used for regression problems as well, it is best suited for classification. The objective of the SVM algorithm is to find a hyperplane in an N-dimensional space that distinctly classifies the data points. The dimension of the hyperplane depends upon the number of features. If the number of input features is two, then the hyperplane is just a line. If the number of input features is three, then the hyperplane becomes a 2-D plane. It becomes difficult to imagine when the number of features exceeds three.
Let’s consider two independent variables x1, x2 and one dependent variable which is either a blue circle or a red circle.
Linearly Separable Data points
From the figure above it is very clear that there are multiple lines (our hyperplane here is a line because we are considering only two input features, x1 and x2) that segregate our data points or do a classification between red and blue circles. So how do we choose the best line, or in general the best hyperplane, that segregates our data points?
Selecting the best hyper-plane:
One reasonable choice as the best hyperplane is the one that represents the largest separation or margin between the two classes.
So we choose the hyperplane whose distance from it to the nearest data point on each side is maximized. If such a hyperplane exists it is known as the maximum-margin hyperplane/hard margin. So from the above figure, we choose L2.
Let’s consider a scenario like the one shown below.
Here we have one blue ball in the boundary of the red ball. So how does SVM classify the data? It’s simple! The blue ball in the boundary of red ones is an outlier of blue balls. The SVM algorithm has the characteristics to ignore the outlier and finds the best hyperplane that maximizes the margin. SVM is robust to outliers.
So for this type of data points, SVM finds the maximum margin as with the previous data sets, while adding a penalty each time a point crosses the margin. The margins in these cases are called soft margins. When there is a soft margin to the data set, the SVM tries to minimize (1/margin + λ(∑penalty)). Hinge loss is a commonly used penalty. If there are no violations, there is no hinge loss; if there are violations, the hinge loss is proportional to the distance of the violation.
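To make the penalty concrete, here is a small illustrative computation (my own sketch, not from the original article). For a true label y in {-1, +1} and a decision value f(x), the hinge loss is max(0, 1 - y*f(x)): zero for points on the correct side with at least unit margin, and growing linearly with the size of the violation.

```python
def hinge_loss(y, fx):
    # y is the true label in {-1, +1}; fx is the signed decision value.
    return max(0.0, 1.0 - y * fx)

# Correctly classified, outside the margin: no loss.
print(hinge_loss(+1, 2.5))   # 0.0
# Correct side but inside the margin: small penalty.
print(hinge_loss(+1, 0.5))   # 0.5
# Misclassified: the penalty grows with the distance of the violation.
print(hinge_loss(+1, -1.0))  # 2.0
```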
Till now, we were talking about linearly separable data(the group of blue balls and red balls are separable by a straight line/linear line). What to do if data are not linearly separable?
Say our data looks like the figure shown above. SVM solves this by creating a new variable using a kernel. We call a point xi on the line and create a new variable yi as a function of the distance from the origin o. If we plot this, we get something like what is shown below.
In this case, the new variable y is created as a function of distance from the origin. A non-linear function that creates a new variable is referred to as kernel.
SVM Kernel:
The SVM kernel is a function that takes low-dimensional input space and transforms it into a higher-dimensional space, i.e., it converts a non-separable problem into a separable problem. It is mostly useful in non-linear separation problems. Simply put, the kernel does some extremely complex data transformations and then finds out the process to separate the data based on the labels or outputs defined.
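As an illustrative sketch (again my own, not the article's), one widely used kernel is the RBF kernel K(x, y) = exp(-gamma * ||x - y||^2), which yields a similarity score corresponding to an implicit high-dimensional feature space without ever constructing that space:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # K(x, y) = exp(-gamma * ||x - y||^2): a similarity in [0, 1]
    # computed directly from the low-dimensional inputs.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))            # 1.0 for identical points
print(round(rbf_kernel([0.0, 0.0], [3.0, 4.0]), 6))  # 0.0 (distant points)
```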
Advantages of SVM:
Effective in high dimensional cases
It is memory-efficient, as it uses a subset of training points in the decision function called support vectors
Different kernel functions can be specified for the decision functions and its possible to specify custom kernels
SVM implementation in python:
Objective: Predict if cancer is benign or malignant.
Using historical data about patients diagnosed with cancer, enable the doctors to differentiate malignant cases from benign ones, given the independent attributes.
Dataset: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Original)
Python
# Import libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

# Importing the data file
data = pd.read_csv('bc2.csv')
dataset = pd.DataFrame(data)
dataset.columns
Output:
Index(['ID', 'ClumpThickness', 'Cell Size', 'Cell Shape', 'Marginal Adhesion',
'Single Epithelial Cell Size', 'Bare Nuclei', 'Normal Nucleoli', 'Bland Chromatin',
'Mitoses', 'Class'], dtype='object')
Python
dataset.info()
Output:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 699 entries, 0 to 698
Data columns (total 11 columns):
ID 699 non-null int64
ClumpThickness 699 non-null int64
Cell Size 699 non-null int64
Cell Shape 699 non-null int64
Marginal Adhesion 699 non-null int64
Single Epithelial Cell Size 699 non-null int64
Bare Nuclei 699 non-null object
Normal Nucleoli 699 non-null int64
Bland Chromatin 699 non-null int64
Mitoses 699 non-null int64
Class 699 non-null int64
dtypes: int64(10), object(1)
memory usage: 60.1+ KB
Python
dataset.describe().transpose()
Output:
Python
dataset = dataset.replace('?', np.nan)
dataset = dataset.apply(lambda x: x.fillna(x.median()), axis=0)

# Converting the 'Bare Nuclei' column from object/string type to float
dataset['Bare Nuclei'] = dataset['Bare Nuclei'].astype('float64')

dataset.isnull().sum()
Output:
ID 0
ClumpThickness 0
Cell Size 0
Cell Shape 0
Marginal Adhesion 0
Single Epithelial Cell Size 0
Bare Nuclei 0
Normal Nucleoli 0
Bland Chromatin 0
Mitoses 0
Class 0
dtype: int64
Python
from sklearn.model_selection import train_test_split

# To calculate the accuracy score of the model
from sklearn.metrics import accuracy_score, confusion_matrix

target = dataset["Class"]
features = dataset.drop(["ID", "Class"], axis=1)
X_train, X_test, y_train, y_test = train_test_split(features, target,
                                                    test_size=0.2,
                                                    random_state=10)

from sklearn.svm import SVC

# Building a Support Vector Machine on the train data
svc_model = SVC(C=.1, kernel='linear', gamma=1)
svc_model.fit(X_train, y_train)

prediction = svc_model.predict(X_test)

# Check the accuracy on the training and test sets
print(svc_model.score(X_train, y_train))
print(svc_model.score(X_test, y_test))
Output:
0.9749552772808586
0.9642857142857143
Python
print("Confusion Matrix:\n",confusion_matrix(prediction,y_test))
Output:
Confusion Matrix:
[[95 2]
[ 3 40]]
Python
# Building a Support Vector Machine on the train data
svc_model = SVC(kernel='rbf')
svc_model.fit(X_train, y_train)
Output:
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='auto_deprecated',
kernel='rbf', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False)
Python
print(svc_model.score(X_train, y_train))
print(svc_model.score(X_test, y_test))
Output:
0.998211091234347
0.9571428571428572
Python
# Building a Support Vector Machine on the train data (changing the kernel)
svc_model = SVC(kernel='poly')
svc_model.fit(X_train, y_train)

prediction = svc_model.predict(X_test)

print(svc_model.score(X_train, y_train))
print(svc_model.score(X_test, y_test))
Output:
1.0
0.9357142857142857
Python
svc_model = SVC(kernel='sigmoid')
svc_model.fit(X_train, y_train)

prediction = svc_model.predict(X_test)

print(svc_model.score(X_train, y_train))
print(svc_model.score(X_test, y_test))
Output:
0.3434704830053667
0.32857142857142857
Delete Google Browser History using Python - GeeksforGeeks | 16 Jul, 2020
In this article, you will learn to write a Python program which takes input from the user as a keyword, like Facebook, Amazon, GeeksforGeeks, Flipkart, YouTube, etc., then searches your Google Chrome browser history for that keyword and deletes every URL in which the keyword is found. For example, suppose you have entered the keyword ‘geeksforgeeks’. The program will search your Google Chrome history and delete URLs like ‘www.geeksforgeeks.org’, which obviously contains the keyword ‘geeksforgeeks’; it will also find and delete articles (like “Is geeksforgeeks a good portal to prepare for competitive programming interview?”) containing ‘geeksforgeeks’ in their title. First of all, get the location in your system where the Google Chrome history file is located.
Note: Google chrome history file location in windows generally is: C:\Users\manishkc\AppData\Local\Google\Chrome\User Data\Default\History.
Implementation:
import sqlite3

# Establish the connection with the history database file,
# which is located at the given location; you can search
# your system for that location and provide the path here
conn = sqlite3.connect("/path/to/History")

# Point at the cursor
c = conn.cursor()

# Create a variable id and assign 0 initially
id = 0

# Create a variable result, initially True;
# it will be used to run the while loop
result = True

# Create a while loop with result as the condition
while result:
    result = False

    # A list which is empty at first; this is where
    # all the matching url ids will be stored
    ids = []

    # Go through the database and search
    # for the given keyword
    for rows in c.execute("SELECT id,url FROM urls\
        WHERE url LIKE '%geeksforgeeks%'"):

        # This is just to check all
        # the urls that are being deleted
        print(rows)

        # We are first selecting the id
        id = rows[0]

        # Append the id of the selected url to ids,
        # which was initially empty
        ids.append((id,))

    # Execute the DELETE from urls (the table)
    # for every id in ids (the list holding all the urls)
    c.executemany('DELETE from urls WHERE id = ?', ids)

    # Commit the changes
    conn.commit()

# Close the connection
conn.close()
Output:
(16886, 'https://www.geeksforgeeks.org/')
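Editing your real History file is destructive, and Chrome locks the file while it is running, so the SELECT-then-DELETE pattern is worth rehearsing on a throwaway database first. The sketch below is illustrative only; the table name and columns follow the article, but the rows and keyword handling are mocked up, and it runs against an in-memory SQLite database:

```python
import sqlite3

# Build a throwaway in-memory database shaped like Chrome's `urls` table.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT)")
c.executemany("INSERT INTO urls (id, url) VALUES (?, ?)", [
    (1, "https://www.geeksforgeeks.org/"),
    (2, "https://www.example.com/"),
    (3, "https://www.geeksforgeeks.org/python-dictionary/"),
])

# Same pattern as the article: collect matching ids, then delete them.
# A parameterized LIKE keeps the keyword out of the SQL string itself.
keyword = "geeksforgeeks"
ids = [(row[0],) for row in
       c.execute("SELECT id, url FROM urls WHERE url LIKE ?",
                 (f"%{keyword}%",))]
c.executemany("DELETE FROM urls WHERE id = ?", ids)
conn.commit()

remaining = [row[1] for row in c.execute("SELECT id, url FROM urls")]
print(remaining)  # ['https://www.example.com/']
conn.close()
```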
How to save a Python Dictionary to CSV file? | CSV (Comma Separated Values) is one of the most common file formats and is widely supported by many platforms and applications.
Use the csv module from Python's standard library. The easiest way is to open a csv file in 'w' mode with the help of the open() function and write key-value pairs in comma-separated form.
import csv
my_dict = {'1': 'aaa', '2': 'bbb', '3': 'ccc'}
with open('test.csv', 'w') as f:
for key in my_dict.keys():
f.write("%s,%s\n"%(key,my_dict[key]))
The csv module contains a DictWriter class that requires a writable file object and a list containing the field names. The writeheader() method writes the first line of the csv file as the field names. The subsequent for loop writes each row in csv form to the csv file.
import csv
csv_columns = ['No','Name','Country']
dict_data = [
{'No': 1, 'Name': 'Alex', 'Country': 'India'},
{'No': 2, 'Name': 'Ben', 'Country': 'USA'},
{'No': 3, 'Name': 'Shri Ram', 'Country': 'India'},
{'No': 4, 'Name': 'Smith', 'Country': 'USA'},
{'No': 5, 'Name': 'Yuva Raj', 'Country': 'India'},
]
csv_file = "Names.csv"
try:
with open(csv_file, 'w') as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=csv_columns)
writer.writeheader()
for data in dict_data:
writer.writerow(data)
except IOError:
    print("I/O error")
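Reading the file back is symmetric: csv.DictReader turns each row into a dict keyed by the header. This round-trip sketch is my own addition for completeness; note the newline='' argument recommended by the csv docs to avoid blank lines on Windows.

```python
import csv

csv_columns = ["No", "Name", "Country"]
dict_data = [
    {"No": "1", "Name": "Alex", "Country": "India"},
    {"No": "2", "Name": "Ben", "Country": "USA"},
]

# Write with DictWriter, then read back with DictReader.
with open("Names.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=csv_columns)
    writer.writeheader()
    writer.writerows(dict_data)

with open("Names.csv", newline="") as f:
    rows = list(csv.DictReader(f))

print(rows[0]["Name"], rows[1]["Country"])  # Alex USA
```

DictReader always yields string values, which is why the sample data above uses strings; the round trip then compares equal field for field.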
CSS | scaleX() Function - GeeksforGeeks | 20 Aug, 2019
The scaleX() function is an inbuilt function which is used to resize an element along the x-axis in a 2D plane. It scales the elements in a horizontal direction.
Syntax:
scaleX( number )
Parameters: This function accepts a single parameter, number, which holds the scaling factor along the x-axis.
Below examples illustrate the scaleX() function in CSS:
Example 1:
<!DOCTYPE html>
<html>

<head>
    <title>CSS scaleX() function</title>

    <style>
        body {
            text-align: center;
        }

        h1 {
            color: green;
        }

        .scaleX_image {
            transform: scaleX(2);
        }
    </style>
</head>

<body>
    <h1>GeeksforGeeks</h1>
    <h2>CSS scaleX() function</h2>
    <br><br>

    <img class="scaleX_image" src=
"https://media.geeksforgeeks.org/wp-content/cdn-uploads/20190710102234/download3.png"
        alt="GeeksforGeeks logo">
</body>

</html>
Output:
Example 2:
<!DOCTYPE html>
<html>

<head>
    <title>CSS scaleX() function</title>

    <style>
        body {
            text-align: center;
        }

        h1 {
            color: green;
        }

        .GFG {
            font-size: 35px;
            font-weight: bold;
            color: green;
            transform: scaleX(1.5);
        }
    </style>
</head>

<body>
    <h1>GeeksforGeeks</h1>
    <h2>CSS scaleX() function</h2>

    <div class="GFG">Welcome to GeeksforGeeks</div>
</body>

</html>
Output:
Supported Browsers: The browsers supported by scaleX() Function are listed below:
Google Chrome
Internet Explorer
Firefox
Opera
Safari
The new kid on the statistics-in-Python block: pingouin | by Eryk Lewinson | Towards Data Science | Python has a few very well developed and mature libraries used for statistical analysis, with the biggest two being statsmodels and scipy. These two contain a lot (and I mean a LOT) of statistical functions and classes that will in 99% of the times cover all your use-cases. So why are there still any new libraries being released?
The newcomers often try to fill in a niche or to provide something extra that the established competition does not have. Recently, I stumbled upon a relatively new library called pingouin. Some key features of the library include:
The library is written in Python 3 and is based mostly on pandas and numpy. Operating directly on DataFrames is something that can definitely come in handy and simplify the workflow.
pingouin tries to strike a balance between complexity and simplicity, both in terms of coding and the generated output. In some cases, the output of statsmodels can be overwhelming (especially for new data scientists), while scipy can be a bit too concise (for example, in the case of the t-test, it reports only the t-statistic and the p-value).
Many of pingouin’s implementations are direct ports from popular R packages for statistical analysis.
The library provides a few new functionalities not found in the other libraries, such as calculating different effect sizes and converting between them, pairwise t-tests and correlations, circular statistics, and more!
In this article, I provide a brief introduction to some of the most popular functionalities available in pingouin and compare them to the already established and mature equivalents.
To start, we need to install the library by running pip install pingouin. Then, we import all the libraries that we will use in this article.
In this part, we explore a selection of functionalities available in pingouin, while highlighting the differences as compared to the other libraries.
Probably the most popular use-case of the statistical libraries (or at least on par with linear regression) is the t-test, which is most often used for hypothesis testing when running A/B tests.
Starting with scipy, we calculate the results of the t-test by running:
from scipy.stats import ttest_ind
ttest_ind(x, y)
What generates the following output:
In the case of pingouin, we use the following syntax:
pg.ttest(x, y)
And receive:
scipy reports only the t-statistic and the p-value, while pingouin additionally reports the following:
degrees of freedom (dof),
95% confidence intervals (CI95%),
the effect size measured by Cohen’s d (cohen-d),
the Bayes factor, which indicates the strength of evidence in favor of the considered hypothesis (BF10),
the statistical power (power).
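To make the comparison runnable end to end, here is a self-contained sketch of the scipy call; the synthetic x and y are assumptions, since the article's data-generating code is not shown:

```python
# Sketch, not the article's original gist: generate two synthetic
# samples and run scipy's independent t-test on them.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
x = rng.normal(loc=0.0, scale=1.0, size=100)
y = rng.normal(loc=1.0, scale=1.0, size=100)

# scipy reports only the t-statistic and the p-value...
t_stat, p_value = ttest_ind(x, y)
print(t_stat, p_value)

# ...whereas pingouin's pg.ttest(x, y) would additionally report the
# degrees of freedom, CI95%, Cohen's d, BF10 and the statistical power.
```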
Note: statsmodels also contains a class for calculating the t-test (statsmodels.stats.weightstats.ttest_ind), which is essentially a wrapper around scipy’s ttest_ind, with a few modifications to the input parameters.
pingouin also contains more variants/extensions of the standard t-test, such as:
Pairwise t-Tests,
Mann-Whitney U test (a non-parametric version of the independent t-test),
Wilcoxon signed-rank test (a non-parametric version of the paired t-test).
Another very popular application of statistical libraries is to calculate the required sample size for an A/B test. To do so, we use power analysis. Please refer to my previous article if you are interested in more details.
In this example, we will focus on the case of calculating the required sample size for a t-test. However, you can easily adjust the code to calculate any of the other components (significance level, power, effect size).
For simplicity, we fix the other 3 parameters to some standard values.
Running the code generates the following output:
Required sample size (statsmodels): 64
Required sample size (pingouin): 64
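The gist producing this output is not shown; the statsmodels half can be sketched as below, assuming the three fixed parameters were the conventional effect size of 0.5, significance level of 0.05 and power of 0.8 (values consistent with the reported 64):

```python
# Sketch under assumed parameter values (the effect size, alpha and
# power are assumptions, not stated in the surviving text).
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                         alternative='two-sided')
print(f'Required sample size (statsmodels): {math.ceil(n)}')
```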
To be honest, there is not much difference in terms of power analysis for a standard t-test. We can additionally customize the type of the alternative hypothesis (whether it’s a one- or two-sided test). What is worth mentioning is that pingouin enables us to run power analysis for a few tests, which are not available in other libraries, such as the balanced one-way repeated measures ANOVA or the correlation test.
pingouin contains a selection of really nicely implemented visualizations, however, most of them are pretty domain-specific and might not be that interesting to the general reader (I do encourage you to take a look at them in the documentation).
However, one of the plots can definitely come in handy. In one of my previous articles, I described how to create QQ-plots in Python. pingouin definitely simplifies the process, as we can create a really nice QQ-plot with just one line of code.
np.random.seed(42)
x = np.random.normal(size=100)
ax = pg.qqplot(x, dist='norm')
What’s more, pingouin automatically handles how to display the reference line, while in the case of statsmodels, we need to do the same thing by providing an argument to the qqplot method of the ProbPlot class. A clear example of simplifying the task!
Quoting pg.qqplot’s documentation:
In addition, the function also plots a best-fit line (linear regression) for the data and annotates the plot with the coefficient of determination.
We will now investigate how to run ANOVA (analysis of variance). To do so, we will employ one of the built-in datasets describing the pain threshold per hair color (interesting idea!). First, we load and slightly transform the dataset.
We dropped one unnecessary column and replaced the space in column names with an underscore (this will make implementing ANOVA in statsmodels easier).
First, we present the pingouin approach.
What generates the following output:
Before moving further, we should mention the potential benefit of using pingouin: it adds an extra method (anova) directly to the pd.DataFrame, so we can skip calling pg.anova and specifying the data argument.
For comparison’s sake, we also carry out ANOVA using statsmodels. Using this library, it is a two-step process. First, we need to fit an OLS regression, and only then carry out ANOVA.
The outputs are very similar, although the one from pingouin contains an additional column, which is the effect size measure as partial eta-squared.
Similarly to the t-test, there are different variants of ANOVA included in pingouin.
As the last example, we inspect one of the most basic machine learning models — the linear regression. To do so, we first load the famous Boston Housing dataset from scikit-learn:
from sklearn.datasets import load_boston
X, y = load_boston(return_X_y=True)
Note: scikit-learn also contains a class for training linear regression; however, its output is the most basic of all the Python libraries known to me. In some cases, that is perfectly fine. However, coming from R, I prefer a more detailed output, which we will see soon enough.
To fit a linear regression model in pingouin, we need to run the following line of code.
lm = pg.linear_regression(X, y)
lm
It can’t get simpler than that! And the output looks as follows:
The table is quite big and detailed. Personally, I don’t think including R2 and the adjusted variant makes a lot of sense here, as it results in a lot of repetition. My guess is that it was done to keep the output in the form of a DataFrame. Alternatively, we can turn that behavior off by setting as_dataframe=False, which results in a dictionary instead. This way, we additionally get the residuals of the model.
One extra thing about pingouin‘s implementation is that we can extract a measure of feature importance, which is expressed as “partitioning of the total R2 of the model into individual R2 contribution”. To display them, we need to set relimp to True.
It is time to move forward to the statsmodels implementation.
The code is a bit lengthier, as we also need to manually add the constant (a column of ones) to the DataFrame containing the independent variables. Please note that this step is not necessary when using the functional syntax (as used in the ANOVA example), however, in that case, the features and the target need to be in one object.
The following image presents the output of running the summary method on the fitted object:
For me, the extra two lines of code are definitely worth it, as the output is much more comprehensive and often saves us the trouble of running a few extra statistical tests or calculating a few measures of the goodness of fit.
In this article, I only presented a selection of the functionalities of the pingouin library. Some of the interesting features available are:
A wide range of functions for circular statistics,
Pairwise post-hoc tests,
Different Bayes Factors,
A selection of different measures of effect size and a function for converting between them.
And more!
In this article, I presented a brief overview of pingouin, a new library for statistical analysis. I do like the approach taken by the authors, in which they try to simplify the process as much as possible (also by making some things happen automatically in the background, like choosing the best reference line for a QQ-plot or applying corrections to the t-test), while at the same time keeping the output as thorough and complete as possible.
While I am still a fan of the statsmodels’s approach to summarizing the output of linear regression, I do find pingouin a nice tool that can save us some time and trouble in day-to-day data science tasks. I am looking forward to seeing how the library develops with time!
You can find the code used for this article on my GitHub. As always, any constructive feedback is welcome. You can reach out to me on Twitter or in the comments.
Found this article interesting? Become a Medium member to continue learning by reading without limits. If you use this link to become a member, you will support me at no extra cost to you. Thanks in advance and see you around!
How to Combine Data in Pandas — 5 Functions You Should Know | by Yong Cui | Towards Data Science | When we use Pandas to process data, one common task is to combine data from different sources. In this article, I’ll review 5 Pandas functions that you can use for data merging, as listed below. Each of these functions has its link to the official documentation if you want to take a look.
* concat
* join
* merge
* combine
* append
For the current tutorial, let’s use two simple DataFrame objects, as shown below. Please note that I’ll make slight modifications where applicable to better illustrate the corresponding functions.
The concat function is named after concatenation, which allows you to combine data side by side horizontally or vertically.
When you combine data that have the same columns (or most of them are the same, practically), you can call concat by specifying axis to 0, which is actually the default value too.
>>> pd.concat([df0, df1.rename(columns={"c": "a", "d": "b"})], axis=0)
   a  b
0  1  4
1  2  5
2  3  6
0  2  5
1  3  6
2  4  7
When you combine data with rows indicating the same entities (i.e., research data for the same ordered subjects) and columns of different data, you can concatenate them side by side.
>>> pd.concat([df0, df1], axis=1)
   a  b  c  d
0  1  4  2  5
1  2  5  3  6
2  3  6  4  7
By default, when you combine data horizontally (i.e., along the columns), Pandas tries to use the index. When they’re not identical, you’ll see NaNs to fill the non-overlapping ones, as shown below.
>>> df2 = df1.copy()
>>> df2.index = [1, 2, 3]
>>> pd.concat([df0, df2], axis=1)
     a    b    c    d
0  1.0  4.0  NaN  NaN
1  2.0  5.0  2.0  5.0
2  3.0  6.0  3.0  6.0
3  NaN  NaN  4.0  7.0
If this isn’t the desired behavior and you want them to align perfectly by ignoring the index, you should reset the index before concatenation:
>>> pd.concat([df0.reset_index(drop=True), df2.reset_index(drop=True)], axis=1)
   a  b  c  d
0  1  4  2  5
1  2  5  3  6
2  3  6  4  7
Please note that resetting index for df0 here is optional, because its index happens to be 0-based, which will match the reset index of df2.
Compared to concat, join is specialized in joining the columns between DataFrame objects using index.
>>> df0.join(df1)
   a  b  c  d
0  1  4  2  5
1  2  5  3  6
2  3  6  4  7
When the indexes are different, the joining keeps the rows from the left DataFrame by default. The rows from the right DataFrame with no matching index in the left DataFrame are removed, as shown below:
>>> df0.join(df2)
   a  b    c    d
0  1  4  NaN  NaN
1  2  5  2.0  5.0
2  3  6  3.0  6.0
However, this behavior can be changed by setting the how parameter. The available options are: left, right, outer, and inner, and their behaviors are shown below.
# "right" uses df2’s index>>> df0.join(df2, how="right") a b c d1 2.0 5.0 2 52 3.0 6.0 3 63 NaN NaN 4 7# "outer" uses the union>>> df0.join(df2, how="outer") a b c d0 1.0 4.0 NaN NaN1 2.0 5.0 2.0 5.02 3.0 6.0 3.0 6.03 NaN NaN 4.0 7.0# "inner" uses the intersection>>> df0.join(df2, how="inner") a b c d1 2 5 2 52 3 6 3 6
The merge function is mostly like joining actions in a database if you have such experience. Compared to join, merge is more general, which can execute merging operations on columns and indexes. Because we’ve covered join using index-based merging, we’ll be more focused on column-based merging. Let’s see a quick example.
>>> df0.merge(df1.rename(columns={"c": "a"}), on="a", how="inner")
   a  b  d
0  2  5  5
1  3  6  6
The on parameter defines which columns the two DataFrame objects will merge on. Please note that you can specify a single column as a string or multiple columns in a list. These columns must be present in both DataFrame objects. More generally, you can specify the merging columns from the left DataFrame and the right DataFrame, respectively, as shown below.
>>> df0.merge(df1, left_on="a", right_on="c")
   a  b  c  d
0  2  5  2  5
1  3  6  3  6
It’s pretty much the same result as the previous merging, except with separate columns of a and c.
There are many other optional parameters you can play with the merging. I want to highlight two here.
how: it defines what type of merging is to be performed. Supported types include left, right, inner (the default one), outer, and cross. Everything should be straightforward except the last one, cross, which creates the cartesian product from both frames, as shown below.
>>> df0.merge(df1, how="cross") a b c d0 1 4 2 51 1 4 3 62 1 4 4 73 2 5 2 54 2 5 3 65 2 5 4 76 3 6 2 57 3 6 3 68 3 6 4 7
suffixes: when the two DataFrame objects have the same columns other than the ones to be merged on, this parameter sets how these columns should be renamed using the suffixes. By default, the suffixes for the left and right data frames are “_x” and “_y”, and you can supply your custom ones. Here’s an example.
>>> df0.merge(df1.rename(columns={"c": "a", "d": "b"}), on="a", how="outer", suffixes=("_l", "_r"))
   a  b_l  b_r
0  1  4.0  NaN
1  2  5.0  5.0
2  3  6.0  6.0
3  4  NaN  7.0
The combine function performs a column-wise combination between two DataFrame objects, and it is very different from the previous ones. What makes combine special is that it takes a function parameter. This function takes two Series, each corresponding to the merging column from each DataFrame, and returns a Series to be the final values for element-wise operations for the same columns. Sounds confusing? Let’s see an example in the following code snippet.
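The referenced snippet did not survive extraction; here is a hedged reconstruction, where the exact body of taking_larger_square is inferred from the surrounding description:

```python
import pandas as pd

df0 = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
df1 = pd.DataFrame({"c": [2, 3, 4], "d": [5, 6, 7]})

def taking_larger_square(s1, s2):
    # Square the values of whichever column has the larger sum
    # (this rule is an assumption consistent with the text).
    return s1 ** 2 if s1.sum() > s2.sum() else s2 ** 2

result = df0.combine(df1.rename(columns={"c": "a", "d": "b"}),
                     taking_larger_square)
print(result)  # a -> [4, 9, 16], b -> [25, 36, 49]
```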
The taking_larger_square function operates on column a in df0 and df1 and column b in df0 and df1. Between the two a and two b columns, taking_larger_square takes the squares of the values from the larger column. In this case, df1’s columns a and b will be taken for squares, which produces the final values as shown in the above code snippet. For your reference, the renamed df1 is shown below.
>>> df1.rename(columns={"c": "a", "d": "b"})
   a  b
0  2  5
1  3  6
2  4  7
Although it’s not covered here, there is another closely related function combine_first, which simply combines two frames using the first frame’s non-null values.
So far, most of the operations we have discussed are towards combining the data column-wise. How about row-wise operations? The append function is specialized in appending rows to an existing DataFrame object, creating a new one. Let’s see an example first.
>>> df0.append(df1.rename(columns={"c": "a", "d": "b"}))
   a  b
0  1  4
1  2  5
2  3  6
0  2  5
1  3  6
2  4  7
Does the above operation look familiar to you? I hope it does, because it’s like what you can achieve with concat when you set axis=0. However, what makes append unique is that you can actually append a dict object, which provides us with the flexibility of appending different kinds of data.
>>> df0.append({"a": 1, "b": 2}, ignore_index=True) a b0 1 41 2 52 3 63 1 2
A trivial example is shown above. Please note that you must set ignore_index to True, because a dictionary object doesn’t have index information that the DataFrame can use.
In this article, we reviewed 5 mostly used functions for combining data in Pandas. Here’s a quick recap.
concat: combine data row-wise and column-wise
join: combine data row-wise using index
merge: combine data column-wise, like the database joining operations
combine: combine data column-wise having between-column (the same columns) element-wise operations
append: append data in the form of DataFrame or dict object row-wise
Thanks for reading this article. Stay connected by signing up my newsletter. Not a Medium member yet? Support my writing by using my membership link.
SAP HANA Admin - Table Replication | In SAP HANA system, it is also possible to replicate tables on multiple hosts. When you need to join the tables or partition tables on multiple hosts, table replication is useful to improve the performance, to reduce the load on the network in a distributed environment.
SAP HANA table replication has certain limitations −
You can’t replicate Partitioned Tables.
When you are using SAP BW on HANA, it doesn’t support Table replication.
When you perform table replication, it consumes the main memory and disk space to store persistence of each replica.
Column store tables with history tables and text columns without a primary key can’t be replicated.
You can create a table with replicas using the following SQL statement −
CREATE COLUMN TABLE Table_Name (I INT PRIMARY KEY) REPLICA AT ALL LOCATIONS
This command will create a column store table with a replica on each host. You can also replicate an existing column base table on each available host using ALTER table command as follows −
ALTER TABLE Table_Name ADD REPLICA AT ALL LOCATIONS
It is also possible to drop replica of an existing table using ALTER table drop replica command as follows.
ALTER TABLE Table_name DROP REPLICA AT ALL LOCATIONS
Note −
You can perform Table Replication on row store tables.
In a distributed environment, you can perform table replications on row store tables stored in master node.
In SAP HANA system, you can also perform consistency check on replicated tables using the following SQL command −
CALL CHECK_TABLE_CONSISTENCY('CHECK_REPLICATION', '<schema>', '<table'>)
Extracting Data for Machine Learning | by Rebecca Vickery | Towards Data Science | The most important first step in any machine learning project is to obtain good quality data. As a data scientist, you will often have to use a variety of different methods to extract data sets. You might be using publically available data, data available via an API, data found in a database or in many cases a combination of these methods.
In the following post, I am going to give a brief introduction to three different methods in python for extracting data. For the purpose of this post, I am going to be covering how to extract data whilst working in a Jupyter Notebook. I previously covered how to use some of these methods from the command line in an earlier post.
If you need to obtain data from a relational database the chances are that you will need to use SQL. You can connect a Jupyter Notebook to most common database types using a library called SQLAlchemy. This link provides a description of which databases are supported and how to connect to each type.
You can use SQLAlchemy directly to view and query tables or you can write raw queries. To connect to your database you will need a URL which includes your credentials. You can then use thecreate_engine command to create the connection.
from sqlalchemy import create_engine

engine = create_engine('dialect+driver://username:password@host:port/database')
You can now write database queries and return the result.
connection = engine.connect()
result = connection.execute("select * from my_table")
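To try the pattern above without a real database server, you can point the engine at an in-memory SQLite database; the table name and rows below are invented for illustration:

```python
# Self-contained sketch using SQLite so no credentials are needed.
from sqlalchemy import create_engine, text

engine = create_engine('sqlite:///:memory:')
with engine.connect() as connection:
    connection.execute(text('CREATE TABLE my_table (id INTEGER, name TEXT)'))
    connection.execute(text("INSERT INTO my_table VALUES (1, 'alpha'), (2, 'beta')"))
    rows = connection.execute(text('SELECT * FROM my_table')).fetchall()
print(rows)
```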
Web scraping is used to download data from a website and extract the required information from those pages. There are a number of python libraries that can be used for this but one of the simplest to use is Beautiful Soup.
You can install this package via pip.
pip install BeautifulSoup4
Let’s work through a simple example of how to use this. We are going to use Beautiful Soup and the urllib library to scrape hotel names and prices from the TripAdvisor website.
Let’s import all the libraries we will be working with.
from bs4 import BeautifulSoup
import urllib.request
Next, we want to download the content of the page that we want to scrape. I am going to scrape prices for hotels on the Greek Island, Crete so I am using a URL that contains hotel listings for that destination.
The code below defines the URL as a variable, uses the urllib library to open the page and Beautiful Soup to read the page and return the results in an easy to read format. Part of the output is shown below the code.
URL = 'https://www.tripadvisor.co.uk/Hotels-g189413-Crete-Hotels.html'
page = urllib.request.urlopen(URL)
soup = BeautifulSoup(page, 'html.parser')
print(soup.prettify())
Next, let’s obtain a list of hotel names on the page. We are going to use the find_all function which allows you to extract the parts of the document that you are interested in. You can filter the document using find_all in a number of ways. By passing in a string, regular expression or a list. You can also filter on one of the tag’s attributes which is the method we will use here. If you are unfamiliar with HTML tags and attributes this article gives a good overview.
To understand how best to access the data point you need to inspect the code for that element on the web page. To find the code for the hotel name we right click on the name in the listing as shown in the image below.
When you click inspect the code will appear and the section that contains the hotel name will be highlighted as shown below.
We can see that the hotel name is the only piece of text in the class named listing_title . The code below passes the class and the name of this attribute to the find_all function, along with the div tag.
content_name = soup.find_all('div', attrs={'class': 'listing_title'})
print(content_name)
This returns each section of code containing the hotel name as a list.
To extract the hotel names from this code we can use Beautiful Soup’s getText function.
content_name_list = []
for div in content_name:
    content_name_list.append(div.getText().split('\n')[0])
print(content_name_list)
This returns the hotel names as a list.
We can get the price in a similar way. Inspecting the code for a price we can see it has the following structure.
So we can use very similar code to extract this section.
content_price = soup.find_all('div', attrs={'class': 'price-wrap'})
print(content_price)
There is a slight complication with the price if we run the following code we will see it.
content_price_list = []
for div in content_price:
    content_price_list.append(div.getText().split('\n')[0])
print(content_price_list)
The output is shown below. Where a hotel listing has a price cut shown, both the original price and the sale price is returned in addition to some text. In order to make this useful we just want to return the actual price the hotel would be should we book it today.
We can use some simple logic to obtain the last price shown in the text.
content_price_list = []
for a in content_price:
    a_split = a.getText().split('\n')[0]
    if len(a_split) > 5:
        content_price_list.append(a_split[-4:])
    else:
        content_price_list.append(a_split)
print(content_price_list)
This gives the following output.
API, which stands for application programming interface, in terms of data extraction is a web-based system that provides an endpoint for data which you can connect to via some programming. Typically the data will be returned in JSON or XML format.
In machine learning, you may need to obtain data using this method. I will give a simple example of how you might obtain weather data from a publicly available API known as Dark Sky. To access this API you will need to sign up; 1,000 calls are provided per day for free, which should be plenty to try this out.
To access the data from Dark Sky I will be using the requests library. To start with I need to obtain the correct URL to request data from. Dark Sky provides both forecasted and historical weather data. For this example, I am going to be using the historical data, I can obtain the correct URL for this from the documentation.
The URL has the following structure.
https://api.darksky.net/forecast/[key]/[latitude],[longitude],[time]
We will use the requests library to obtain the results for a particular latitude and longitude, and date and time. Let’s imagine after obtaining daily prices for hotels in Crete we wanted to find out if price correlated with the weather in some way. As an example let’s choose the coordinates for one of the hotels in the listing Mitsis Laguna Resort and Spa.
First, we construct the URL with the correct coordinates and date time we require. Using the requests library we can access the data in JSON format.
import requests

request_url = 'https://api.darksky.net/forecast/fd82a22de40c6dca7d1ae392ad83eeb3/35.3378,-25.3741,2019-07-01T12:00:00'
result = requests.get(request_url).json()
result
We can normalize the results into a data frame to make it easier to read and analyse.
import pandas as pd

df = pd.json_normalize(result)  # json_normalize already returns a DataFrame
df.head()
There is a lot more you can do to automate the extraction of this data using these methods. For the web scraping and API methods, it is possible to write functions to automate the process to easily extract this data for a larger number of days and/or locations. In this post, I wanted to simply give an overview with enough code to explore these methods. In future posts, I will be writing more in-depth articles covering how to build complete data sets and analyse them using these methods.
Thanks for reading!
How to build a Recommendation Engine quick and simple | by Jan Teichmann | Towards Data Science | This article is meant to be a light introduction to the topic and provide the first steps to get you into production with a recommendation engine in a week. I will also tell you what the steps are to go from basics to fairly advanced.
We won’t tab into the cutting edge yet but we won’t be far off either.
Read part 2 for the more advanced use-cases when quick and simple is no longer good enough:
towardsdatascience.com
Recommendation systems are everywhere and for many online platforms their recommendation engines are the actual business. That’s what made Amazon big: they were very good at recommending you which books to read. There are many other companies which are all build around recommendation systems: YouTube, Netflix, Spotify, Social Media platforms. In parallel, consumers came to expect a personalised experience and sophisticated recommendation systems to find relevant products and content, all to save consumers time and money.
But how do you build a recommendation engine?
Recommendation engines have been around for a while and there have been some key learnings to leverage:
A user’s actions are the best indicator of user intent. Ratings and feedback tend to be very biased and lower volume.
Past actions and purchases drive new purchases and the overlap with other people’s purchases and actions is a fantastic predictor.
Recommendation systems generally look for overlap or co-occurrence to make a recommendation. Like in the following example where we recommend Ethan a puppy based on a similarity of Ethan with Sophia:
In practise, a recommendation engine computes a co-occurrence matrix from a history matrix of events and actions. This is simple enough but there are challenges to overcome in real world scenarios. What if everyone wants a unicorn? Does the high co-occurrence of unicorns in the following example make a good recommendation?
After the recommendation system has computed the co-occurrence matrix we have to apply statistics to filter out the sufficiently anomalous signals to be interesting as a recommendation.
The algorithms and statistics which can extract relevant indicators from the co-occurrence matrix are what makes a good recommendation system. The path of creating an item-to-item indicator matrix is called an item-item model. There is obviously also an user-item model:
To create a user-item model we could apply a simple matrix factorisation or train a neural network to predict the scores of a user-item input. Usually, item-item models are more robust and produce better results when we do not invest more into feature engineering and model tuning etc.
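As a concrete illustration of the history-to-co-occurrence step described above, here is a minimal sketch with an invented 4-user, 3-item history matrix:

```python
import numpy as np

# Rows are users, columns are items; 1 means the user interacted
# with the item. The data is invented for illustration.
history = np.array([
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
    [1, 0, 0],
])

# co_occurrence[i, j] counts users who interacted with both item i
# and item j; the diagonal holds each item's total interaction count.
co_occurrence = history.T @ history
print(co_occurrence)
```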
However, there are a few more challenges a good recommendation system has to overcome. Recommending the same things over and over is boring. Even worse, recommending the same things produces bad data and causes content fatigue.
Two simple and intuitive strategies to improve the value of recommendations are
Anti-Flood: Penalise the second and third recommendations if they have the same similarity scores to the top recommendation.
Dithering: Add a wildcard recommendation to create interesting new data points for the recommendation system to keep learning about other content.
These steps ensure an interesting user experience and new data on alternative recommendations.
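Dithering is often implemented by re-ranking with log-rank plus Gaussian noise (a common formulation, not spelled out in this article). A small illustrative sketch: top items mostly stay near the top, but lower-ranked wildcards occasionally surface.

```python
import math
import random

def dither(ranked_items, epsilon=1.5, seed=42):
    """Re-rank by log(rank) + Gaussian noise. Larger epsilon means more
    shuffling; epsilon -> 1 leaves the original order almost untouched."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(epsilon))
    scored = [
        (math.log(rank + 1) + rng.gauss(0, sigma), item)
        for rank, item in enumerate(ranked_items)
    ]
    return [item for _, item in sorted(scored)]

print(dither(["puppy", "kitten", "pony", "unicorn", "dragon"]))
```

The noise changes on every request (here fixed by the seed for reproducibility), so repeat visitors see slightly different lists and the system keeps gathering data on alternatives.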
Recommendation systems work best with unbiased explicit user feedback like the purchase of products, watching a video or listening to a song. If you’re lucky and you have such a data set, even a simple approach will work very well. However, in many use cases you mainly have implicit feedback like page views, clicks or search queries. Unfortunately, that data is also heavily biased, e.g. the click-through rate is heavily dependent on the position of content on a page. Implicit feedback also tends to perform less well, e.g. a click on a search result might just teach you how clickbaity a headline or CTA was rather than how relevant the actual content is. This results in high bounce rates after initially high click-through rates (a very common case!).
A simple approach like collaborative filtering requires co-occurrence between items. This means collaborative filtering is suitable and will lead to great results if
Your product catalogue is not too big, items are long lived and can easily be interacted with by multiple users. Let’s use Zoopla as an example where we get into trouble: Zoopla.co.uk is a property portal with over 1.3 million listings live at any point in time. A rental listing in London is very short lived and it can take just days before a property has been rented and is taken off the market. Obviously, you cannot rent out or sell a flat to multiple people at the same time! With the size of the Zoopla catalogue it is really difficult to generate a significant amount of co-occurrence, even at Zoopla’s traffic volumes.
You do not depend on recommendations for the discovery of new products. Because collaborative filtering requires co-occurrence to generate signals the algorithm has a big cold start problem. Any new item in the product catalogue has no co-occurrence and cannot be recommended without some initial engagement of users with the new item. This could be acceptable if your business uses e.g. a lot of CRM and marketing as a strategy to promote new products.
One option would be to use Spark and the alternating least squares (ALS) algorithm, which is a simple solution for model training but does not provide an immediate solution for deployment and scoring. I recommend a different approach to get started:
As it turns out, the maths of search and recommendation problems are strikingly similar. Most importantly, a good user experience in search and a good user experience in recommendations are almost indistinguishable. Basically, search results are recommendations if we can formulate recommendations as search queries. It’s an ideal solution, as many websites and businesses already operate search engines in their backends and we can leverage existing infrastructure to build our recommendation system. Elasticsearch scales well and exists as fully managed deployments, e.g. on AWS. There is no safer bet if you want to deploy your recommendation engine into production fast!
How do you create a recommendation with a search engine?
We store all user-item interactions in a search index.
When a user is on a page for apples we search for all users who have apples in elasticsearch. This defines our foreground population.
We look for co-occurrence in our foreground which gives us puppies.
We search for puppies in the background population.
We calculate some kind of score for our puppy recommendation.
The good news: Elasticsearch implements all 5 steps for us in a single query!
If we store our user-item interactions in elasticsearch as follows
{
  "_id": "07700f84163df9ee23a4827fd847896c",
  "user": "user_1",
  "products": ["apple", "book", "lemon", "puppy"]
}
with a document mapping like this:
{
  "user": {"type": "keyword"},
  "products": {"type": "keyword"}
}
then all that’s needed to produce some recommendations is the following query, e.g. using Python:
from elasticsearch import Elasticsearch, RequestsHttpConnection
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

es = Elasticsearch(
    host=host,
    port=port,
    connection_class=RequestsHttpConnection,
    http_auth=BotoAWSRequestsAuth(),
    scheme=scheme
)

es.search(
    index=index,
    doc_type=doc_type,
    body={
        "query": {
            "bool": {
                "must": {
                    "term": {"products": "apple"}
                }
            }
        },
        "aggs": {
            "recommendations": {
                "significant_terms": {
                    "field": "products",
                    "exclude": "apple",
                    "min_doc_count": 100
                }
            }
        }
    }
)
Elasticsearch will return the recommendations with a JLH score by default but there is a range of scores available (documentation).
{
  ...
  "aggregations": {
    "recommendations": {
      "doc_count": 12200,
      "bg_count": 130000,
      "buckets": [
        {
          "key": "puppy",
          "doc_count": 250,
          "score": 0.15,
          "bg_count": 320
        }
      ]
    }
  }
}
In our example, the search for apple returned a foreground population of 12,200 users and a background population of 130,000 users who did not have any apples in their products. Puppy co-occurred 250 times in the foreground and 320 times in the background. The JLH score is a simple magnitude of change from the background collection to the local search results, given by (fg_percentage - bg_percentage) * (fg_percentage / bg_percentage), which gives a score of 0.15.
As the JLH score is a magnitude change it’s important to remember that fleas jump higher than elephants and the JLH score is very volatile for small data sets. You can adjust the min_doc_count parameter in the query to quality assure your recommendation results.
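To make the JLH arithmetic concrete, here is a tiny helper (my own code, reproducing the formula above) applied to the numbers from the example response:

```python
def jlh_score(fg_count, fg_total, bg_count, bg_total):
    """JLH significance score as used by Elasticsearch's significant_terms:
    (foreground % - background %) * (foreground % / background %)."""
    fg_pct = fg_count / fg_total
    bg_pct = bg_count / bg_total
    return (fg_pct - bg_pct) * (fg_pct / bg_pct)

# Numbers from the example above: puppies co-occurring with apples
score = jlh_score(fg_count=250, fg_total=12_200, bg_count=320, bg_total=130_000)
print(round(score, 2))  # → 0.15
```

Note how the ratio term is what makes fleas jump higher than elephants: a rare background term needs only a few foreground hits to score highly, which is exactly why min_doc_count matters.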
This is it, a simple but powerful first iteration of a recommendation engine which can be live within a week or less! Importantly, any version 1 shouldn’t be more complex than this. Time to production is much more important in the early stages. Commonly, your first recommendation engine needs a few iterations to optimise the UI and UX rather than the maths.
Elasticsearch is not just a very powerful backend for recommendations, it is also highly flexible! There are many options to improve our recommendation system while keeping the elasticsearch backend. Win!
Read part 2 for the more advanced use-cases when quick and simple is no longer good enough.
Step 1:
We can use more sophisticated algorithms such as ALS to create the indicators for our recommendations and we put these indicators into elasticsearch. This simplifies the recommendation scoring to a simple look-up as we do the heavy lifting in the training phase e.g. using Spark. This way elasticsearch is just a performant presentation layer of our ahead-of-time computed indicators for relevant recommendations. You can add this easily to an existing product catalogue as new metadata.
Step 2:
At the moment we use a binary flag in the products array, which means each product in the array contributes to the JLH score equally. Without many changes we could score the product occurrence itself with some metric to capture a richer signal. We could use a click count. Or, even better, we could use a click score, normalising each click by the average expected click-through rate of the page location that generated it. E.g. in a list of search results we can calculate an expected CTR for items in first position, second position, etc. We can then calculate the JLH magnitude change from the sum of item metric scores instead of their simple counts.
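A sketch of such position-normalised click scoring (the expected CTR values below are invented for illustration):

```python
# Hypothetical expected click-through rates by list position
# (position 1 gets clicked far more often regardless of relevance).
expected_ctr = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def click_score(position, clicked):
    """Normalise a click by the CTR expected at its position, so a click
    far down the list counts for more than one on the top slot."""
    return (1.0 / expected_ctr[position]) if clicked else 0.0

# A click at position 5 carries 6x the weight of a click at position 1
print(click_score(1, True), click_score(5, True))
```

Summing these weighted scores per item, instead of raw counts, is what would feed the JLH calculation in this variant.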
Step 3:
Users usually generate a series of events relevant for recommendations, e.g. clicking on multiple items or adding multiple products to a basket. It’s worth adding a user interaction cache to your recommendation engine (1) to create more complex search queries using a series of events and (2) create a delta between the batch ETL process which updates your elasticsearch index and the user interactions which occurred since the last refresh of your recommendation engine.
Using the event sequence to produce recommendations can help with (1) creating more relevant results and (2) increase the foreground population to generate a bigger number of co-occurrences in case of low traffic volumes or very big catalogues. It only needs a minor change to the elasticsearch query to switch to a should query:
es.search(
    index=index,
    doc_type=doc_type,
    body={
        "query": {
            "bool": {
                "should": [
                    {"term": {"products": "apple"}},
                    {"term": {"products": "pony"}},
                ],
                "minimum_should_match": 1,
            }
        },
        "aggs": {
            "recommendations": {
                "significant_terms": {
                    "field": "products",
                    "exclude": ["apple", "pony"],
                    "min_doc_count": 10
                }
            }
        }
    }
)
The minimum_should_match parameter allows you to optimise between increasing the foreground population size or making the results more relevant by matching users with increasing similarity.
Step 4:
Currently, our search is a precise lookup of items. This has some consequences: everything we learn from user interactions in terms of co-occurrence is bound to those specific items. When an item is taken off the product catalogue we lose everything we learned from it. We also cannot generalise anything to similar items, e.g. red apples and green apples are distinct items and co-occurrence is limited to precise matches of red apples or green apples. To overcome this we need to describe items mathematically so we can compute a similarity between items. This is called an embedding. Read my previous blog post where I created a geographic area embedding. Other options to create embeddings are auto-encoders or the matrix factorisation of the user-item model as described above. After we turn a simple product_id into an embedding, we can use probabilistic or fuzzy search to find our foreground population and/or co-occurrences.
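A quick illustration of why embeddings help: with vector representations (the 4-dimensional vectors below are invented), red and green apples score as near-identical under cosine similarity while unrelated items do not, so signals learned on one generalise to the other:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 = same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical item embeddings: similar items end up close in the space
red_apple   = np.array([0.90, 0.10, 0.05, 0.80])
green_apple = np.array([0.85, 0.15, 0.10, 0.75])
puppy       = np.array([0.05, 0.90, 0.80, 0.10])

sim_apples = cosine_similarity(red_apple, green_apple)
sim_cross  = cosine_similarity(red_apple, puppy)
print(round(sim_apples, 2), round(sim_cross, 2))
```

A fuzzy or k-nearest-neighbour search over such vectors replaces the exact product_id match.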
This should get you started with recommendations. It also gives you ample opportunity to build on your first iteration as you learn from production feedback.
The early steps into recommendations stand or fall much more often on UI and UX than on the sophistication of the maths.
Usually, products have a wealth of metadata we should use, e.g. price, descriptions, images, review ratings, seasonality, tags and categories. After we turn a rich metadata set into an embedding describing our products, we can train a neural network to map the input embeddings into a recommendations embedding which has (1) lower dimensionality and (2) the desired behaviour that the cosine similarity of suitable recommendations is high. One great solution for this is the Siamese Neural Network.
The input is a high dimensional vector of concatenated embeddings of a product’s metadata. The output of the Neural Network is a much more compact recommendation embedding vector. The error function is given by the cosine similarity of the output vectors. We can use the collaborative filtering data to create our supervised learning labels for combinations which should be similar or not. Importantly, in siamese neural networks the weights of both networks are always identical which gives them their name. Such a recommendation engine would have no more cold start issue! Finally, producing a recommendation can be done with a k-nearest-neighbour search of the output recommendation embeddings.
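A toy sketch of the siamese idea (random data, and a single shared near-linear layer standing in for a real deep tower): both inputs pass through the same weights, and the cosine similarity of the outputs is what the training loss would act on:

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(x, W):
    # Shared "tower": both inputs use the SAME weights W, which is what
    # makes the network siamese. A real tower would be a deep network.
    return np.tanh(0.1 * (x @ W))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

W = rng.normal(size=(8, 3))                    # weights shared by both towers
item_a = rng.normal(size=8)                    # rich metadata embedding
item_b = item_a + 0.01 * rng.normal(size=8)    # a near-duplicate item
item_c = rng.normal(size=8)                    # an unrelated item

sim_ab = cosine(embed(item_a, W), embed(item_b, W))
sim_ac = cosine(embed(item_a, W), embed(item_c, W))
print(sim_ab, sim_ac)
```

Training would push sim_ab towards 1 for labelled-similar pairs and sim_ac down for dissimilar ones; serving is then a k-nearest-neighbour search in the output space.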
You can read more on next steps in the follow up.
Jan is a successful thought leader and consultant in the data transformation of companies and has a track record of bringing data science into commercial production usage at scale. He has recently been recognised by dataIQ as one of the 100 most influential data and analytics practitioners in the UK.
Connect on LinkedIn: https://www.linkedin.com/in/janteichmann/
Read other articles: https://medium.com/@jan.teichmann
How to Customize Bash Colors and Content in Linux Terminal Prompt - GeeksforGeeks | 14 Apr, 2022
If you are using the Linux operating system, you most likely use the CLI most of the time and do most of your work in the terminal. By default, most Linux operating systems provide you with the bash shell. The shell provides the interface between the user and the kernel and executes commands. In this article, we are going to see how to customize the bash shell prompt.
Before customizing the bash shell prompt first understand the default bash prompt. This default prompt looks like follows:
username@hostname:~$
or
[username@hostaname ~]$
The first part, i.e. the string before the @ character, indicates the username of the current user. The next part of the bash prompt indicates the hostname of the system. The ~ sign indicates the current path of the prompt. If the character after the : or ] is $, the account is a standard account; if it is #, the account is root.
To customize the bash prompt, first we should understand how it works. Bash provides Prompt Statements. There are four bash prompt statements:
PS1 – This is the primary prompt statement. We will customize this prompt.
PS2 – This is the secondary prompt statement. Basically, it is used when the user provides the long command separated by \ characters.
PS3 – This prompt is used to select the command.
PS4 – This prompt is used for running a shell script in debug mode.
To see the value of your current PS1 prompt statement, you can use the following command:
echo $PS1
The PS1 value contains backslash-escaped characters that have a special meaning, listed in the PROMPTING section of the bash man page. In the above output, we can see that \u, \h and \W are prompting characters, while characters like @ are printed literally.
To customize the bash prompt, we are going to work on the PS1 prompt and PS2 prompt. Generally, the PS2 prompt contains only one character >. To view the content of the PS2 prompt use the echo command:
echo $PS2
The $PS3 prompt will be blank, and PS4 will contain the + character.
The bash prompt can be customized from the ~/.bashrc file. This file contains the prompt Statement. This file is present in the home directory of the user.
~/.bashrc
Before editing this file, make the backup of the ~/.bashrc file. Use the following command to make a backup of the ~/.bashrc file
cp ~/.bashrc ~/.bashrc.bak
To change the bash prompt permanently, we can edit the file ~/.bashrc and change the values of the PS1. To edit this file you can use any editor, but in this tutorial, we are going with nano editor, because it is easy to use. Now to open the ~/.bashrc file, use the following command:
nano ~/.bashrc
Then you will see there is a PS1 variable.
You can edit the value of this variable to change your prompt. For now, let’s change the value of this prompt to bashprompt>. Then save the file using the ctrl+s and then close the file using ctrl+x. Then use the following command to see changes in prompt
source ~/.bashrc
Now we have changes our bash prompt permanently.
We can change our bash prompt temporarily using the export command, this prompt will work for the current session. To change the temporary bash shell prompt, use the following command:
export PS1="bashprompt>"
Or you can just enter the PS1 variable with its value as a command:
Most of the Linux distributions contain the username@hostname as a bash prompt. We can change it to anything we want. We have to just modify the value of the PS1 variable. In the above two sections, we have seen how to modify the value of the PS1 characters permanently and temporarily. Change the value of PS1 according to your need. So now to change the username@hostname to “myprompt@linux> ” we can set the value of PS1 to
export PS1="myprompt@linux> "
Now let’s see how to add emojis in the bash prompt. To add the emojis to the prompt, first, make sure that you have installed any emoji font on the system. To use the emoji in the prompt, just put the emoji in the PS1 variable. Here is one example:
PS1="???? ~ "
To show the version of bash shell in the prompt, put the \v prompting character in the PS1 variable:
PS1="Bash \v>"
And to show the current bash version with the patch level, use the \V prompting character:
PS1="Bash \V>"
To customize the PS1 prompt, we need to edit the content of the PS1 prompt. The PS1 contains some characters followed by the backslash characters. Following are the same characters that are written in the PS1 prompt:
\u: This character indicates the username of the current user.
\h: This character indicates the hostname till the first ‘ . ‘ Character in the Fully-Qualified Domain Name
\W: This character shows the base path of the current working directory. For the home directory, the value will be tilde (~) character.
\$: This character is used to separate the command and prompt. If the account is standard then this field contains $ character, or if the account is root then this field contains the # character.
Now let’s add some other options to PS1 and check how our prompt looks. The \! character shows the history number of the current command, and the \H character shows the full Fully-Qualified Domain Name hostname instead of truncating at the first ‘ . ’ character. Here is the prompt now:
PS1="[\u@\H \W \!]$"
In the next sections, we are going to explore more prompting options or characters.
Now let’s see how can we customize the bash prompt using the options provided by the bash shell for the prompt. Before adding any option to the prompt, use the \ character before the options.
Bash prompt provides two options, by using these we can show hostname and username in prompt.
To show the username in the prompt, use the u character preceded by the \ character, i.e. \u.
To show the hostname in the prompt, use the \h character in PS1.
Here is one example:
export PS1="\u \h >"
We can add the special character in the bash prompt. Just arrange them in order how you want to customize the prompt. Here is one example:
export PS1="\u@\h> "
You should always use the special character at the end of the prompt, which will be useful to separate the command and prompt.
Now let’s see how we can add the time to the bash prompt. Following are the options which will be used to display date and time in prompt
\d – This option will show the date in “Weekday Month Date” format
\t – This option will show the current time in 24-hour HH:MM:SS format
\T – This option will show the current time in 12-hour HH:MM:SS format
\A – This option will show the current time in 24-hour HH:MM format
To prevent the username and hostname from showing in the prompt, just don’t use the \h and \u characters in the PS1 variable. Use the \W character to display the path of the current directory.
The bash prompt is differentiated using the $ and # characters at the end of the prompt. The $ character is used for the standard user and the # character is used for the root user.
export PS1="\u@\H \W:\$ "
To know all the prompt options, you can read the PROMPTING section of the bash man page, using the man command.
tput is a command that provides terminal-dependent information to the shell. The tput command queries the terminfo database for this information. Now let’s see how we can use the tput command to change the background and foreground colors of the prompt.
export PS1="\[$(tput setaf 1)\]\[$(tput setab 7)\]\u@\h:\w $ \[$(tput sgr0)\]"
Following are the options that can be used with the tput command:
tput bold –To apply the bold effect
tput rev – To display inverse color
tput sgr0 – To reset everything
tput setaf {code}– To set the foreground color. See the table below to know the value of {code}
tput setab {code}– To set background color, See the table below to know value of {code}
Color codes that are used with tput command:
We can change the color of the bash prompt. Here is one example:
export PS1="\e[0;32m[\u@\h \W]\$ \e[0m"
Now let’s see how we can change the color of the bash prompt:
\e[ – This string tells the bash prompt to apply the color from the next character onwards.
0;32m – This string represents the color. The number before the ; represents the typeface, and the number after the ; represents the color code.
\e[0m – This string resets the colors and attributes back to the default.
Following are the values for the typeface:
0 – Normal
1 – Bold
2 – Dim
4 – Underlined
Following are the values for the color codes:
30 – Black
31 – Red
32 – Green
33 – Brown
34 – Blue
35 – Purple
36 – Cyan
37 – Light gray
You can create the themes using different combinations of the above colors.
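Putting the codes together, a small two-colour theme might look like this (the variable names are my own; the \[ \] markers tell bash the escape sequences occupy no screen space, so line editing stays correct):

```shell
# A two-colour theme: bold green user@host, blue working directory.
# The \[ \] markers tell bash the escapes take no room on screen.
green='\[\e[1;32m\]'
blue='\[\e[0;34m\]'
reset='\[\e[0m\]'
PS1="${green}\u@\h${reset}:${blue}\W${reset}\$ "
echo "$PS1"
```

Paste the PS1 line into ~/.bashrc (as shown earlier) to make the theme permanent.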
If you want the shell back as it is, then we can do that. At the start of this article, we have created the backup file the ~/.bashrc file. Now to get back our original bash prompt, we can use that file. Use the following command:
cat ~/.bashrc.bak > ~/.bashrc
To know more about the bash prompt, read the man page of the bash.
man bash
shivamshubham
How to create a Disabled Option Select using jQuery Mobile ? - GeeksforGeeks | 20 Dec, 2020
jQuery Mobile is a web-based technology used to make responsive content that can be accessed on all smartphones, tablets, and desktops. In this article, we will be creating a Disabled Option Select using jQuery Mobile.
Approach: First, add jQuery Mobile scripts needed for your project.
<link rel="stylesheet" href="http://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.css"/>
<script src="http://code.jquery.com/jquery-1.11.1.min.js"></script>
<script src="http://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.js"></script>
Example:
HTML
<!DOCTYPE html>
<html>

<head>
    <link rel="stylesheet" href="http://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.css" />
    <script src="http://code.jquery.com/jquery-1.11.1.min.js">
    </script>
    <script src="http://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.js">
    </script>
</head>

<body>
    <center>
        <h1>GeeksforGeeks</h1>
        <h4>
            Disabled Option Select using jQuery Mobile
        </h4>
    </center>
    <form>
        <div data-role="fieldcontain">
            <label for="Geeks">
                Select Element:
            </label>
            <select name="Geeks" id="Geeks">
                <option value="1">Geeks1</option>
                <option value="2" disabled="disabled">
                    Geeks2
                </option>
                <option value="3">Geeks3</option>
                <option value="4">Geeks4</option>
            </select>
        </div>
    </form>
</body>

</html>
Output:
Node.js process.mainModule Property - GeeksforGeeks | 12 Oct, 2021
The process.mainModule property is an inbuilt application programming interface of the process module which is used to get the main module. It is an alternative way to get require.main, but unlike require.main, process.mainModule changes dynamically at runtime. Generally, we can assume the two refer to the same module.
Syntax:
process.mainModule
Return Value: This property returns an object that contains the reference of main module.
Below examples illustrate the use of process.mainModule property in Node.js:
Example 1:
// Node.js program to demonstrate the
// process.mainModule Property

// Include process module
const process = require('process');

// Printing process.mainModule property value
console.log(process.mainModule);
Output:
Module {
id: '.',
exports: {},
parent: null,
filename: 'C:\\nodejs\\g\\process\\mainmodule_1.js',
loaded: false,
children: [],
paths:
[ 'C:\\nodejs\\g\\process\\node_modules',
'C:\\nodejs\\g\\node_modules',
'C:\\nodejs\\node_modules',
'C:\\node_modules'
]
}
Example 2:
// Node.js program to demonstrate the
// process.mainModule Property

// Include process module
const process = require('process');

// Printing process.mainModule property value
var mainModule = process.mainModule;
for (mod in mainModule) {
    console.log(mod + ":" + mainModule[mod]);
}
Output:
id:.
exports:[object Object]
parent:null
filename:/home/cg/root/6720369/main.js
loaded:false
children:
paths:/home/cg/root/6720369/node_modules, /home/cg/root/node_modules,
/home/cg/node_modules, /home/node_modules, /node_modules
load:function (filename) {
debug('load %j for module %j', filename, this.id);
assert(!this.loaded);
this.filename = filename;
this.paths = Module._nodeModulePaths(path.dirname(filename));
var extension = path.extname(filename) || '.js';
if (!Module._extensions[extension]) extension = '.js';
Module._extensions[extension](this, filename);
this.loaded = true;
}
require:function (path) {
assert(path, 'missing path');
assert(typeof path === 'string', 'path must be a string');
return Module._load(path, this, /* isMain */ false);
}
_compile:function (content, filename) {
// Remove shebang
var contLen = content.length;
if (contLen >= 2) {
if (content.charCodeAt(0) === 35/*#*/ &&
content.charCodeAt(1) === 33/*!*/) {
if (contLen === 2) {
// Exact match
content = '';
} else {
// Find end of shebang line and slice it off
var i = 2;
for (; i < contLen; ++i) {
var code = content.charCodeAt(i);
if (code === 10/*\n*/ || code === 13/*\r*/)
break;
}
if (i === contLen)
content = '';
else {
// Note that this actually includes the newline character(s) in the
// new output. This duplicates the behavior of the regular expression
// that was previously used to replace the shebang line
content = content.slice(i);
}
}
}
}
// create wrapper function
var wrapper = Module.wrap(content);
var compiledWrapper = vm.runInThisContext(wrapper, {
filename: filename,
lineOffset: 0,
displayErrors: true
});
if (process._debugWaitConnect && process._eval == null) {
if (!resolvedArgv) {
// we enter the repl if we're not given a filename argument.
if (process.argv[1]) {
resolvedArgv = Module._resolveFilename(process.argv[1], null);
} else {
resolvedArgv = 'repl';
}
}
// Set breakpoint on module start
if (filename === resolvedArgv) {
delete process._debugWaitConnect;
const Debug = vm.runInDebugContext('Debug');
Debug.setBreakPoint(compiledWrapper, 0, 0);
}
}
var dirname = path.dirname(filename);
var require = internalModule.makeRequireFunction.call(this);
var args = [this.exports, require, this, filename, dirname];
var depth = internalModule.requireDepth;
if (depth === 0) stat.cache = new Map();
var result = compiledWrapper.apply(this.exports, args);
if (depth === 0) stat.cache = null;
return result;
}
Note: The above program will compile and run by using the node filename.js command.
Reference: https://nodejs.org/api/process.html#process_process_mainmodule
jQuery | param() Method - GeeksforGeeks | 21 Feb, 2019
The param() Method in jQuery is used to create a serialized representation of an object.
Syntax:
$.param( object, trad )
Parameters: This method accepts two parameters as mentioned above and described below:
object: It is a mandatory parameter which is used to specify an array or object to serialize.
trad: It is an optional parameter and used to specify whether or not to use the traditional style of param serialization.
Example 1: This example use param() method to create serialized representation of an object.
<!DOCTYPE html>
<html>

<head>
    <title>
        jQuery param() Method
    </title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
    </script>
</head>

<body style="text-align:center;">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <h2>jQuery param() Method</h2>
    <button>Click</button>
    <div></div>

    <!-- Script using param() method -->
    <script>
        $(document).ready(function () {
            personObj = new Object();
            personObj.Firstword = "Geeks";
            personObj.Secondword = "For";
            personObj.Thirdword = "Geeks";
            personObj.Wordcolor = "Green";

            $("button").click(function () {
                $("div").text($.param(personObj));
            });
        });
    </script>
</body>

</html>
Output:
Before click on the button:
After click on the button:
Example 2: This example use param() method to create serialized representation of an object.
<!DOCTYPE html>
<html>

<head>
    <title>
        jQuery param() Method
    </title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
    </script>
</head>

<body style="text-align:center;">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <h2>jQuery param() Method</h2>
    <button>Click</button>
    <div></div>

    <!-- Script using param() method -->
    <script>
        $(document).ready(function () {
            personObj = new Object();
            personObj.Fullword = "GeeksForGeeks ";
            personObj.Wordcolor = " Green";

            $("button").click(function () {
                $("div").text($.param(personObj));
            });
        });
    </script>
</body>

</html>
Output:
Before click on the button:
After click on the button:
Flat Panel Display - GeeksforGeeks | 22 Jul, 2019
Flat-panel devices are displays that have less volume, weight, and power consumption than the Cathode Ray Tube (CRT). Because of these advantages, the use of CRTs has decreased. Since flat-panel devices are light in weight, they can be hung on walls or worn on the wrist as a watch. Flat Panel Displays (FPDs) allow users to view data, graphics, text and images.
Types of Flat Panel Display:
Emissive Display: Emissive displays (emitters) are devices that convert electrical energy into light energy. Examples: plasma panel, LED (Light Emitting Diode), flat CRT.
Non-Emissive Display: Non-emissive displays (non-emitters) are devices that use optical effects to convert sunlight, or light from some other source, into graphic patterns. Examples: LCD (Liquid Crystal Display).
Advantages of Flat Panel Devices:
Flat-panel devices like LCDs produce high-quality digital images.
Flat-panel monitors are stylish and have a space-saving design.
Flat-panel devices consume less power and give the maximum image size in the minimum space.
Flat-panel devices offer full-color display capability.
Full-motion video can be viewed on flat-panel devices without artifacts or contrast loss.
Print all valid words that are possible using Characters of Array - GeeksforGeeks | 17 Jan, 2022
Given a dictionary and a character array, print all valid words that are possible using characters from the array. Examples:
Input : Dict - {"go","bat","me","eat","goal",
"boy", "run"}
arr[] = {'e','o','b', 'a','m','g', 'l'}
Output : go, me, goal.
Asked In : Microsoft Interview
The idea is to use Trie data structure to store dictionary, then search words in Trie using characters of given array.
Create an empty Trie and insert all the words of the given dictionary into it.
Then pick only those characters in arr[] that are children of the Trie root.
To quickly check whether a character is present in the array, build a hash of the character array.
Below is the implementation of the above idea:
C++
Java
C#
// C++ program to print all valid words that
// are possible using characters of the array
#include <bits/stdc++.h>
using namespace std;

// Converts a character into an index; use only 'a' through 'z'
#define char_int(c) ((int)c - (int)'a')

// Converts an index back into a character
#define int_to_char(c) ((char)c + (char)'a')

// Alphabet size
#define SIZE (26)

// Trie node
struct TrieNode {
    TrieNode* Child[SIZE];

    // leaf is true if the node represents the end of a word
    bool leaf;
};

// Returns a new trie node (initialized to NULLs)
TrieNode* getNode()
{
    TrieNode* newNode = new TrieNode;
    newNode->leaf = false;
    for (int i = 0; i < SIZE; i++)
        newNode->Child[i] = NULL;
    return newNode;
}

// If not present, inserts key into the trie.
// If the key is a prefix of an existing trie path,
// just marks the leaf node.
void insert(TrieNode* root, char* Key)
{
    int n = strlen(Key);
    TrieNode* pChild = root;
    for (int i = 0; i < n; i++) {
        int index = char_int(Key[i]);
        if (pChild->Child[index] == NULL)
            pChild->Child[index] = getNode();
        pChild = pChild->Child[index];
    }

    // Mark the last node as a leaf
    pChild->leaf = true;
}

// A recursive function to print all possible valid
// words present in the array
void searchWord(TrieNode* root, bool Hash[], string str)
{
    // If we found a word in the trie / dictionary
    if (root->leaf == true)
        cout << str << endl;

    // Traverse all children of the current root
    for (int K = 0; K < SIZE; K++) {
        if (Hash[K] == true && root->Child[K] != NULL) {
            // Add the current character
            char c = int_to_char(K);

            // Recursively search the remaining characters
            // of the word in the trie
            searchWord(root->Child[K], Hash, str + c);
        }
    }
}

// Prints all words present in the dictionary
void PrintAllWords(char Arr[], TrieNode* root, int n)
{
    // Create a hash array that marks every character present
    // in Arr[] (initialize to false to avoid reading
    // uninitialized memory)
    bool Hash[SIZE] = { false };
    for (int i = 0; i < n; i++)
        Hash[char_int(Arr[i])] = true;

    // Temporary node
    TrieNode* pChild = root;

    // String to hold output words
    string str = "";

    // There are only 26 possible characters; start a search
    // for every character that is both present in Arr[] and
    // a child of the trie root
    for (int i = 0; i < SIZE; i++) {
        if (Hash[i] == true && pChild->Child[i]) {
            str = str + (char)int_to_char(i);
            searchWord(pChild->Child[i], Hash, str);
            str = "";
        }
    }
}

// Driver program to test above functions
int main()
{
    // Let the given dictionary be the following
    char Dict[][20] = { "go", "bat", "me", "eat",
                        "goal", "boy", "run" };

    // Root node of the trie
    TrieNode* root = getNode();

    // Insert all words of the dictionary into the trie
    int n = sizeof(Dict) / sizeof(Dict[0]);
    for (int i = 0; i < n; i++)
        insert(root, Dict[i]);

    char arr[] = { 'e', 'o', 'b', 'a', 'm', 'g', 'l' };
    int N = sizeof(arr) / sizeof(arr[0]);

    PrintAllWords(arr, root, N);
    return 0;
}
// Java program to print all valid words that// are possible using character of arraypublic class SearchDict_charArray { // Alphabet size static final int SIZE = 26; // trie Node static class TrieNode { TrieNode[] Child = new TrieNode[SIZE]; // isLeaf is true if the node represents // end of a word boolean leaf; // Constructor public TrieNode() { leaf = false; for (int i =0 ; i< SIZE ; i++) Child[i] = null; } } // If not present, inserts key into trie // If the key is prefix of trie node, just // marks leaf node static void insert(TrieNode root, String Key) { int n = Key.length(); TrieNode pChild = root; for (int i=0; i<n; i++) { int index = Key.charAt(i) - 'a'; if (pChild.Child[index] == null) pChild.Child[index] = new TrieNode(); pChild = pChild.Child[index]; } // make last node as leaf node pChild.leaf = true; } // A recursive function to print all possible valid // words present in array static void searchWord(TrieNode root, boolean Hash[], String str) { // if we found word in trie / dictionary if (root.leaf == true) System.out.println(str); // traverse all child's of current root for (int K =0; K < SIZE; K++) { if (Hash[K] == true && root.Child[K] != null ) { // add current character char c = (char) (K + 'a'); // Recursively search reaming character // of word in trie searchWord(root.Child[K], Hash, str + c); } } } // Prints all words present in dictionary. static void PrintAllWords(char Arr[], TrieNode root, int n) { // create a 'has' array that will store all // present character in Arr[] boolean[] Hash = new boolean[SIZE]; for (int i = 0 ; i < n; i++) Hash[Arr[i] - 'a'] = true; // temporary node TrieNode pChild = root ; // string to hold output words String str = ""; // Traverse all matrix elements. 
There are only // 26 character possible in char array for (int i = 0 ; i < SIZE ; i++) { // we start searching for word in dictionary // if we found a character which is child // of Trie root if (Hash[i] == true && pChild.Child[i] != null ) { str = str+(char)(i + 'a'); searchWord(pChild.Child[i], Hash, str); str = ""; } } } //Driver program to test above function public static void main(String args[]) { // Let the given dictionary be following String Dict[] = {"go", "bat", "me", "eat", "goal", "boy", "run"} ; // Root Node of Trie TrieNode root = new TrieNode(); // insert all words of dictionary into trie int n = Dict.length; for (int i=0; i<n; i++) insert(root, Dict[i]); char arr[] = {'e', 'o', 'b', 'a', 'm', 'g', 'l'} ; int N = arr.length; PrintAllWords(arr, root, N); }}// This code is contributed by Sumit Ghosh
// C# program to print all valid words that// are possible using character of arrayusing System; public class SearchDict_charArray{ // Alphabet size static readonly int SIZE = 26; // trie Node public class TrieNode { public TrieNode[] Child = new TrieNode[SIZE]; // isLeaf is true if the node represents // end of a word public Boolean leaf; // Constructor public TrieNode() { leaf = false; for (int i =0 ; i< SIZE ; i++) Child[i] = null; } } // If not present, inserts key into trie // If the key is prefix of trie node, just // marks leaf node static void insert(TrieNode root, String Key) { int n = Key.Length; TrieNode pChild = root; for (int i = 0; i < n; i++) { int index = Key[i] - 'a'; if (pChild.Child[index] == null) pChild.Child[index] = new TrieNode(); pChild = pChild.Child[index]; } // make last node as leaf node pChild.leaf = true; } // A recursive function to print all possible valid // words present in array static void searchWord(TrieNode root, Boolean []Hash, String str) { // if we found word in trie / dictionary if (root.leaf == true) Console.WriteLine(str); // traverse all child's of current root for (int K = 0; K < SIZE; K++) { if (Hash[K] == true && root.Child[K] != null ) { // add current character char c = (char) (K + 'a'); // Recursively search reaming character // of word in trie searchWord(root.Child[K], Hash, str + c); } } } // Prints all words present in dictionary. static void PrintAllWords(char []Arr, TrieNode root, int n) { // create a 'has' array that will store all // present character in Arr[] Boolean[] Hash = new Boolean[SIZE]; for (int i = 0 ; i < n; i++) Hash[Arr[i] - 'a'] = true; // temporary node TrieNode pChild = root ; // string to hold output words String str = ""; // Traverse all matrix elements. 
There are only // 26 character possible in char array for (int i = 0 ; i < SIZE ; i++) { // we start searching for word in dictionary // if we found a character which is child // of Trie root if (Hash[i] == true && pChild.Child[i] != null ) { str = str+(char)(i + 'a'); searchWord(pChild.Child[i], Hash, str); str = ""; } } } // Driver code public static void Main(String []args) { // Let the given dictionary be following String []Dict = {"go", "bat", "me", "eat", "goal", "boy", "run"} ; // Root Node of Trie TrieNode root = new TrieNode(); // insert all words of dictionary into trie int n = Dict.Length; for (int i = 0; i < n; i++) insert(root, Dict[i]); char []arr = {'e', 'o', 'b', 'a', 'm', 'g', 'l'} ; int N = arr.Length; PrintAllWords(arr, root, N); }} /* This code is contributed by PrinciRaj1992 */
Output:
go
goal
me
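The same trie-plus-hash idea can be sketched compactly in Python (my illustration with hypothetical helper names, not part of the original article): build a dict-based trie from the dictionary, then depth-first search only along characters present in the array.

```python
# Compact sketch of the article's approach: a dict-based trie plus a
# set of allowed characters. Helper names are illustrative.
def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True          # end-of-word marker
    return root

def valid_words(words, letters):
    root = build_trie(words)
    allowed = set(letters)
    results = []

    def dfs(node, prefix):
        # A "$" key marks a complete dictionary word
        if "$" in node:
            results.append(prefix)
        for ch, child in node.items():
            # Descend only along characters available in the array
            if ch != "$" and ch in allowed:
                dfs(child, prefix + ch)

    dfs(root, "")
    return results

print(valid_words(["go", "bat", "me", "eat", "goal", "boy", "run"],
                  ['e', 'o', 'b', 'a', 'm', 'g', 'l']))
# ['go', 'goal', 'me']
```

The output matches the article's expected result (go, goal, me).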
This article is contributed by Nishant Singh.
Digit Separator in C++14 - GeeksforGeeks | 21 Jan, 2021
In this article, we will discuss the use of the digit separator in C++. Numbers with many digits can be difficult to read. For example, 1000 is readable, but add more zeros, say 1000000, and it becomes harder to parse at a glance; add still more and it gets worse. In everyday writing, commas are added to such numbers, e.g. 10,00,000, which is easily read as ten lakhs.
Since C++ does not accept commas as separators, how do we deal with big numbers? To address this, C++14 introduced a feature called the digit separator, denoted by a single quotation mark ('). It makes large numeric literals easier for users to read.
Program 1:
Below is the implementation to show that single quote marks are ignored when determining their value:
C++14
// C++ program to demonstrate
// the above approach
#include <iostream>
using namespace std;

// Driver code
int main()
{
    long long int a = 10'00'000;

    // Print the value
    cout << a;
    return 0;
}
1000000
Program 2:
Below is a program showing that single quote marks are purely for readability: their position does not affect the value the compiler sees.
C++14
// C++ program to demonstrate
// the above approach
#include <iostream>
using namespace std;

// Driver Code
int main()
{
    long long int a = 1'23'456;
    long long int b = 12'34'56;
    long long int c = 123'456;

    // Print all the values
    cout << "a:" << a << endl;
    cout << "b:" << b << endl;
    cout << "c:" << c << endl;
    return 0;
}
a:123456
b:123456
c:123456
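As an aside (my addition, not from the original article), several other languages have an analogous feature. Python, for instance, uses an underscore, and its placement is likewise ignored when the value is determined:

```python
# Python 3.6+ digit separator: underscores in numeric literals are
# ignored when the value is computed, just like C++14's single quote.
a = 1_000_000
b = 10_00_000
c = 100_0000

print(a == b == c == 1000000)  # True
```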
React-Bootstrap Nav Component - GeeksforGeeks | 18 May, 2021
React-Bootstrap is a front-end framework that was designed keeping react in mind. Nav Component is a component that is used by all Navigation bits in Bootstrap. It is useful for navigation purposes in applications. We can use the following approach in ReactJS to use the react-bootstrap Nav Component.
Nav Props:
activeKey: It is used to mark the NavItem as active with a matching eventKey/href.
as: It can be used as a custom element type for this component.
cardHeaderBsprefix: It is used to change the underlying component CSS modifier and base class names prefix for card Header.
defaultActiveKey: It is used to indicate the default Active Key.
fill: It makes NavItems proportionately fill all available width.
justify: It makes NavItems fill all available width, each item getting equal width.
navbar: It is used to apply styling and alignment appropriate for use inside a Navbar.
navbarBsPrefix: It is used to change the underlying component CSS modifier and base class names prefix for navbar.
onKeyDown: It is a callback that is triggered on the key down event.
onSelect: It is a callback that is triggered when a NavItem is selected.
role: It is the ARIA role for the Nav.
variant: It indicates the visual variant of the nav items.
bsPrefix: It is an escape hatch for working with strongly customized bootstrap CSS.
Nav.Item Props:
as: It can be used as a custom element type for this component.
role: It is the ARIA role for the Nav.
bsPrefix: It is an escape hatch for working with strongly customized bootstrap CSS.
Nav.Link Props:
active: It is used to mark the NavItem as active.
as: It can be used as a custom element type for this component.
disabled: It is used to disable this component.
eventKey: It is used to uniquely identify the NavItem.
href: It is used to pass the href attribute to this element.
onSelect: It is a callback that is triggered when a NavLink is selected.
role: It is the ARIA role for the Nav.
bsPrefix: It is an escape hatch for working with strongly customized bootstrap CSS.
NavDropdown Props:
active: It is used for styling the toggle NavLink as an active state.
disabled: It is used to disable the toggle NavLink.
id: It is the normal HTML id attribute for the Toggle button.
menuRole: It is used for the ARIA accessible role which is applied to the Menu component.
onClick: It is used to pass a callback function to the toggle component which acts as a handler.
renderMenuOnMount: It is used to indicate whether to render the dropdown menu in the DOM before the first time it is shown.
rootCloseEvent: It is used to close the component when an event is fired outside it.
title: It is used for the non-toggle Button content.
bsPrefix: It is an escape hatch for working with strongly customized bootstrap CSS.
Creating React Application And Installing Module:
Step 1: Create a React application using the following command:
npx create-react-app foldername
Step 2: After creating your project folder i.e. foldername, move to it using the following command:
cd foldername
Step 3: After creating the ReactJS application, install the required modules using the following commands:
npm install react-bootstrap
npm install bootstrap
Project Structure: It will look like the following.
Example: Now write down the following code in the App.js file. Here, App is our default component where we have written our code.
App.js
import React from 'react';
import 'bootstrap/dist/css/bootstrap.css';
import Nav from 'react-bootstrap/Nav';

export default function App() {
  return (
    <div style={{ display: 'block', width: 700, padding: 30 }}>
      <h4>React-Bootstrap Nav Component</h4>
      <Nav activeKey="/homeLink"
           onSelect={(selectedKey) => alert(`You just selected ${selectedKey} !`)}>
        <Nav.Item>
          <Nav.Link href="/homeLink">Active</Nav.Link>
        </Nav.Item>
        <Nav.Item>
          <Nav.Link eventKey="/OtherLink">Link</Nav.Link>
        </Nav.Item>
        <Nav.Item>
          <Nav.Link eventKey="disabled" disabled>Disabled</Nav.Link>
        </Nav.Item>
      </Nav>
    </div>
  );
}
Step to Run Application: Run the application using the following command from the root directory of the project:
npm start
Output: Now open your browser and go to http://localhost:3000/, you will see the following output:
Reference: https://react-bootstrap.github.io/components/navs/
Python | Check if two lists have any element in common - GeeksforGeeks | 18 Feb, 2019
Sometimes we encounter the problem of checking whether one list contains any element of another list. This kind of problem is quite popular in competitive programming. Let’s discuss various ways to achieve this particular task.
Method #1: Using any()
# Python code to check if two lists
# have any element in common

# Initialization of lists
list1 = [1, 2, 3, 4, 55]
list2 = [2, 3, 90, 22]

# using any() with a generator expression
out = any(check in list1 for check in list2)

# Checking condition
if out:
    print("True")
else:
    print("False")
True
Method #2: Using in operator.
# Python code to check if two lists
# have any element in common

# Initialization of lists
list1 = [1, 3, 4, 55]
list2 = [90, 22]

flag = 0

# Using "in" to check each element of the
# second list against the first list
for elem in list2:
    if elem in list1:
        flag = 1

# Checking condition
if flag == 1:
    print("True")
else:
    print("False")
False
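A third approach (my addition, not in the original article) converts both lists to sets and intersects them; for large lists this avoids the repeated membership scans of the methods above:

```python
# Set intersection: an empty intersection means no common element.
# Runs in roughly O(len(list1) + len(list2)) on average.
def have_common(list1, list2):
    return bool(set(list1) & set(list2))

print(have_common([1, 2, 3, 4, 55], [2, 3, 90, 22]))  # True
print(have_common([1, 3, 4, 55], [90, 22]))           # False
```

Note that this requires the list elements to be hashable.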
Reverse tree path - GeeksforGeeks | 02 May, 2022
Given a tree and a node's data, the task is to reverse the path from the root to that particular node.
Examples:
Input:
7
/ \
6 5
/ \ / \
4 3 2 1
Data = 4
Output: Inorder of tree
7 6 3 4 2 5 1
Input:
7
/ \
6 5
/ \ / \
4 3 2 1
Data = 2
Output : Inorder of tree
4 6 3 2 7 5 1
The idea is to use a map to store the path level-wise:
While searching for the node, store each node's data in the map, keyed by its level; together these entries form the path.
Once the node is found, start replacing: set each path node's data to the map value at index nextpos.
Increment nextpos after each replacement and repeat for the next node on the path as the recursion unwinds.
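As a compact cross-check of the intended behavior, here is a simplified two-pass sketch of my own (not the article's single-pass map technique): collect the root-to-target path explicitly, then overwrite the path nodes with their own values in reverse order.

```python
# Simplified two-pass sketch: find the root-to-target path, then
# reverse the values along it. Names are illustrative.
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def find_path(root, target, path):
    # Append nodes on the way down; pop on backtrack.
    if root is None:
        return False
    path.append(root)
    if root.data == target or find_path(root.left, target, path) \
            or find_path(root.right, target, path):
        return True
    path.pop()
    return False

def reverse_tree_path(root, target):
    path = []
    if find_path(root, target, path):
        values = [n.data for n in path]
        # Overwrite the path with its own values, reversed
        for node, value in zip(path, reversed(values)):
            node.data = value

def inorder(root, out):
    if root:
        inorder(root.left, out)
        out.append(root.data)
        inorder(root.right, out)

# Build the example tree and reverse the path to 4
root = Node(7); root.left = Node(6); root.right = Node(5)
root.left.left = Node(4); root.left.right = Node(3)
root.right.left = Node(2); root.right.right = Node(1)
reverse_tree_path(root, 4)

out = []
inorder(root, out)
print(out)  # [7, 6, 3, 4, 2, 5, 1]
```

The printed inorder traversal matches the article's expected output for data = 4.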
Let’s understand the code:
C++
Java
Python3
C#
Javascript
// C++ program to Reverse Tree path#include <bits/stdc++.h>using namespace std; // A Binary Tree Nodestruct Node { int data; struct Node *left, *right;}; // 'data' is input. We need to reverse path from// root to data.// 'level' is current level.// 'temp' that stores path nodes.// 'nextpos' used to pick next item for reversing.Node* reverseTreePathUtil(Node* root, int data, map<int, int>& temp, int level, int& nextpos){ // return NULL if root NULL if (root == NULL) return NULL; // Final condition // if the node is found then if (data == root->data) { // store the value in it's level temp[level] = root->data; // change the root value with the current // next element of the map root->data = temp[nextpos]; // increment in k for the next element nextpos++; return root; } // store the data in particular level temp[level] = root->data; // We go to right only when left does not // contain given data. This way we make sure // that correct path node is stored in temp[] Node *left, *right; left = reverseTreePathUtil(root->left, data, temp, level + 1, nextpos); if (left == NULL) right = reverseTreePathUtil(root->right, data, temp, level + 1, nextpos); // If current node is part of the path, // then do reversing. if (left || right) { root->data = temp[nextpos]; nextpos++; return (left ? 
left : right); } // return NULL if not element found return NULL;} // Reverse Tree pathvoid reverseTreePath(Node* root, int data){ // store per level data map<int, int> temp; // it is for replacing the data int nextpos = 0; // reverse tree path reverseTreePathUtil(root, data, temp, 0, nextpos);} // INORDERvoid inorder(Node* root){ if (root != NULL) { inorder(root->left); cout << root->data << " "; inorder(root->right); }} // Utility function to create a new tree nodeNode* newNode(int data){ Node* temp = new Node; temp->data = data; temp->left = temp->right = NULL; return temp;} // Driver program to test above functionsint main(){ // Let us create binary tree shown in above diagram Node* root = newNode(7); root->left = newNode(6); root->right = newNode(5); root->left->left = newNode(4); root->left->right = newNode(3); root->right->left = newNode(2); root->right->right = newNode(1); /* 7 / \ 6 5 / \ / \ 4 3 2 1 */ int data = 4; // Reverse Tree Path reverseTreePath(root, data); // Traverse inorder inorder(root); return 0;}
// Java program to Reverse Tree pathimport java.util.*;class solution{ // A Binary Tree Nodestatic class Node { int data; Node left, right;}; //class for int valuesstatic class INT { int data;}; // 'data' is input. We need to reverse path from// root to data.// 'level' is current level.// 'temp' that stores path nodes.// 'nextpos' used to pick next item for reversing. static Node reverseTreePathUtil(Node root, int data, Map<Integer, Integer> temp, int level, INT nextpos){ // return null if root null if (root == null) return null; // Final condition // if the node is found then if (data == root.data) { // store the value in it's level temp.put(level,root.data); // change the root value with the current // next element of the map root.data = temp.get(nextpos.data); // increment in k for the next element nextpos.data++; return root; } // store the data in particular level temp.put(level,root.data); // We go to right only when left does not // contain given data. This way we make sure // that correct path node is stored in temp[] Node left, right=null; left = reverseTreePathUtil(root.left, data, temp, level + 1, nextpos); if (left == null) right = reverseTreePathUtil(root.right, data, temp, level + 1, nextpos); // If current node is part of the path, // then do reversing. if (left!=null || right!=null) { root.data = temp.get(nextpos.data); nextpos.data++; return (left!=null ? 
left : right); } // return null if not element found return null;} // Reverse Tree path static void reverseTreePath(Node root, int data){ // store per level data Map< Integer, Integer> temp= new HashMap< Integer, Integer>(); // it is for replacing the data INT nextpos=new INT(); nextpos.data = 0; // reverse tree path reverseTreePathUtil(root, data, temp, 0, nextpos);} // INORDERstatic void inorder(Node root){ if (root != null) { inorder(root.left); System.out.print( root.data + " "); inorder(root.right); }} // Utility function to create a new tree node static Node newNode(int data){ Node temp = new Node(); temp.data = data; temp.left = temp.right = null; return temp;} // Driver program to test above functionspublic static void main(String args[]){ // Let us create binary tree shown in above diagram Node root = newNode(7); root.left = newNode(6); root.right = newNode(5); root.left.left = newNode(4); root.left.right = newNode(3); root.right.left = newNode(2); root.right.right = newNode(1); /* 7 / \ 6 5 / \ / \ 4 3 2 1 */ int data = 4; // Reverse Tree Path reverseTreePath(root, data); // Traverse inorder inorder(root);}}//contributed by Arnab Kundu
# Python3 program to Reverse Tree path # A Binary Tree Nodeclass Node: def __init__(self, data): self.data = data self.left = None self.right = None # 'data' is input. We need to reverse path from# root to data.# 'level' is current level.# 'temp' that stores path nodes.# 'nextpos' used to pick next item for reversing.def reverseTreePathUtil(root, data,temp, level, nextpos): # return None if root None if (root == None): return None, temp, nextpos; # Final condition # if the node is found then if (data == root.data): # store the value in it's level temp[level] = root.data; # change the root value with the current # next element of the map root.data = temp[nextpos]; # increment in k for the next element nextpos += 1 return root, temp, nextpos; # store the data in particular level temp[level] = root.data; # We go to right only when left does not # contain given data. This way we make sure # that correct path node is stored in temp[] right = None left, temp, nextpos = reverseTreePathUtil(root.left, data, temp, level + 1, nextpos); if (left == None): right, temp, nextpos = reverseTreePathUtil(root.right, data, temp, level + 1, nextpos); # If current node is part of the path, # then do reversing. 
if (left or right): root.data = temp[nextpos]; nextpos += 1 return (left if left != None else right), temp, nextpos; # return None if not element found return None, temp, nextpos; # Reverse Tree pathdef reverseTreePath(root, data): # store per level data temp = dict() # it is for replacing the data nextpos = 0; # reverse tree path reverseTreePathUtil(root, data, temp, 0, nextpos); # INORDERdef inorder(root): if (root != None): inorder(root.left); print(root.data, end = ' ') inorder(root.right); # Utility function to create a new tree nodedef newNode(data): temp = Node(data) return temp; # Driver codeif __name__=='__main__': # Let us create binary tree shown in above diagram root = newNode(7); root.left = newNode(6); root.right = newNode(5); root.left.left = newNode(4); root.left.right = newNode(3); root.right.left = newNode(2); root.right.right = newNode(1); ''' 7 / \ 6 5 / \ / \ 4 3 2 1 ''' data = 4; # Reverse Tree Path reverseTreePath(root, data); # Traverse inorder inorder(root); # This code is contributed by rutvik_56.
// C# program to Reverse Tree pathusing System;using System.Collections.Generic; class GFG{ // A Binary Tree Nodepublic class Node{ public int data; public Node left, right;} //class for int valuespublic class INT{ public int data;} // 'data' is input. We need to reverse// path from root to data.// 'level' is current level.// 'temp' that stores path nodes.// 'nextpos' used to pick next item for reversing.public static Node reverseTreePathUtil(Node root, int data, IDictionary<int, int> temp, int level, INT nextpos){ // return null if root null if (root == null) { return null; } // Final condition // if the node is found then if (data == root.data) { // store the value in it's level temp[level] = root.data; // change the root value with the // current next element of the map root.data = temp[nextpos.data]; // increment in k for the next element nextpos.data++; return root; } // store the data in particular level temp[level] = root.data; // We go to right only when left does not // contain given data. This way we make sure // that correct path node is stored in temp[] Node left, right = null; left = reverseTreePathUtil(root.left, data, temp, level + 1, nextpos); if (left == null) { right = reverseTreePathUtil(root.right, data, temp, level + 1, nextpos); } // If current node is part of the path, // then do reversing. if (left != null || right != null) { root.data = temp[nextpos.data]; nextpos.data++; return (left != null ? 
left : right); } // return null if not element found return null;} // Reverse Tree pathpublic static void reverseTreePath(Node root, int data){ // store per level data IDictionary<int, int> temp = new Dictionary<int, int>(); // it is for replacing the data INT nextpos = new INT(); nextpos.data = 0; // reverse tree path reverseTreePathUtil(root, data, temp, 0, nextpos);} // INORDERpublic static void inorder(Node root){ if (root != null) { inorder(root.left); Console.Write(root.data + " "); inorder(root.right); }} // Utility function to create// a new tree nodepublic static Node newNode(int data){ Node temp = new Node(); temp.data = data; temp.left = temp.right = null; return temp;} // Driver Codepublic static void Main(string[] args){ // Let us create binary tree // shown in above diagram Node root = newNode(7); root.left = newNode(6); root.right = newNode(5); root.left.left = newNode(4); root.left.right = newNode(3); root.right.left = newNode(2); root.right.right = newNode(1); /* 7 / \ 6 5 / \ / \ 4 3 2 1 */ int data = 4; // Reverse Tree Path reverseTreePath(root, data); // Traverse inorder inorder(root);}} // This code is contributed by Shrikant13
<script> // Javascript program to Reverse Tree path // A Binary Tree Nodeclass Node{ constructor() { this.data = 0; this.left = null; this.right = null; }} // Class for int valuesclass INT{ constructor() { this.data = 0; }} // 'data' is input. We need to reverse// path from root to data.// 'level' is current level.// 'temp' that stores path nodes.// 'nextpos' used to pick next item for reversing.function reverseTreePathUtil(root, data, temp, level, nextpos){ // Return null if root null if (root == null) { return null; } // Final condition // if the node is found then if (data == root.data) { // Store the value in it's level temp[level] = root.data; // Change the root value with the // current next element of the map root.data = temp[nextpos.data]; // Increment in k for the next element nextpos.data++; return root; } // Store the data in particular level temp[level] = root.data; // We go to right only when left does not // contain given data. This way we make sure // that correct path node is stored in temp[] var left, right = null; left = reverseTreePathUtil(root.left, data, temp, level + 1, nextpos); if (left == null) { right = reverseTreePathUtil(root.right, data, temp, level + 1, nextpos); } // If current node is part of the path, // then do reversing. if (left != null || right != null) { root.data = temp[nextpos.data]; nextpos.data++; return (left != null ? 
left : right); } // Return null if not element found return null;} // Reverse Tree pathfunction reverseTreePath(root, data){ // Store per level data var temp = new Map(); // It is for replacing the data var nextpos = new INT(); nextpos.data = 0; // Reverse tree path reverseTreePathUtil(root, data, temp, 0, nextpos);} // INORDERfunction inorder(root){ if (root != null) { inorder(root.left); document.write(root.data + " "); inorder(root.right); }} // Utility function to create// a new tree nodefunction newNode(data){ var temp = new Node(); temp.data = data; temp.left = temp.right = null; return temp;} // Driver Code // Let us create binary tree// shown in above diagramvar root = newNode(7);root.left = newNode(6);root.right = newNode(5);root.left.left = newNode(4);root.left.right = newNode(3);root.right.left = newNode(2);root.right.right = newNode(1);/* 7 / \ 6 5 / \ / \ 4 3 2 1 */var data = 4; // Reverse Tree PathreverseTreePath(root, data); // Traverse inorderinorder(root); // This code is contributed by rrrtnx </script>
7 6 3 4 2 5 1
Use the concept of printing all the root-to-leaf paths. The idea is to keep track of the path from the root to the node up to which the path is to be reversed; once we reach that node, we simply reverse the data of the nodes along that path. Here we not only track the root-to-leaf paths but also check for the node up to which the path needs to be reversed.
Use a vector to store every path.
Once we get the node up to which the path needs to be reversed, we use a simple algorithm to reverse the data of the nodes on the followed path that is stored in the vector.
The implementation of the above approach is given below:
C++
Java
Python3
C#
Javascript
// CPP program for the above approach
#include <bits/stdc++.h>
using namespace std;
#define nl "\n"

class Node {
public:
    int data;
    Node* left;
    Node* right;

    Node(int value) { data = value; }
};

// Function to print inorder
// traversal of the tree
void inorder(Node* temp)
{
    if (temp == NULL)
        return;
    inorder(temp->left);
    cout << temp->data << " ";
    inorder(temp->right);
}

// Utility function to track
// root to leaf paths
void reverseTreePathUtil(Node* root, vector<Node*> path,
                         int pathLen, int key)
{
    // Check if root is null then return
    if (root == NULL)
        return;

    // Store the node in path array
    path[pathLen] = root;
    pathLen++;

    // Check if we find the node upto
    // which path needs to be
    // reversed
    if (root->data == key) {

        // Current path array contains
        // the path which needs
        // to be reversed
        int i = 0, j = pathLen - 1;

        // Swap the data of two nodes
        while (i < j) {
            int temp = path[i]->data;
            path[i]->data = path[j]->data;
            path[j]->data = temp;
            i++;
            j--;
        }
    }

    // Check if the node is a
    // leaf node then return
    if (!root->left and !root->right)
        return;

    // Call utility function for
    // left and right subtree
    // recursively
    reverseTreePathUtil(root->left, path, pathLen, key);
    reverseTreePathUtil(root->right, path, pathLen, key);
}

// Function to reverse tree path
void reverseTreePath(Node* root, int key)
{
    if (root == NULL)
        return;

    // Initialize a vector to store paths
    vector<Node*> path(50, NULL);
    reverseTreePathUtil(root, path, 0, key);
}

// Driver Code
int main()
{
    Node* root = new Node(7);
    root->left = new Node(6);
    root->right = new Node(5);
    root->left->left = new Node(4);
    root->left->right = new Node(3);
    root->right->left = new Node(2);
    root->right->right = new Node(1);

    /*      7
           / \
          6   5
         / \ / \
        4  3 2  1   */

    int key = 4;

    reverseTreePath(root, key);
    inorder(root);

    return 0;
}
// Java program for the above approachimport java.util.*;class GFG{ static class Node { int data; Node left; Node right; Node(int value) { this.data = value; }}; // Function to print inorder// traversal of the treestatic void inorder(Node temp){ if (temp == null) return; inorder(temp.left); System.out.print(temp.data+" "); inorder(temp.right);} // Utility function to track// root to leaf pathsstatic void reverseTreePathUtil(Node root, ArrayList<Node> path, int pathLen, int key){ // Check if root is null then return if (root == null) return; // Store the node in path array path.set(pathLen, root); pathLen++; // Check if we find the node upto // which path needs to be // reversed if (root.data == key) { // Current path array contains // the path which needs // to be reversed int i = 0, j = pathLen - 1; // Swap the data of two nodes while (i < j) { int temp = path.get(i).data; path.get(i).data = path.get(j).data; path.get(j).data = temp; i++; j--; } } // Check if the node is a // leaf node then return if (root.left == null && root.right == null) return; // Call utility function for // left and right subtree // recursively reverseTreePathUtil(root.left, path, pathLen, key); reverseTreePathUtil(root.right, path, pathLen, key);} // Function to reverse tree pathstatic void reverseTreePath(Node root, int key){ if (root == null) return; // Initialize a vector to store paths ArrayList<Node> path = new ArrayList<Node>(); for(int i = 0; i < 50; i++) { path.add(null); } reverseTreePathUtil(root, path, 0, key);} // Driver Codepublic static void main(String []args){ Node root = new Node(7); root.left = new Node(6); root.right = new Node(5); root.left.left = new Node(4); root.left.right = new Node(3); root.right.left = new Node(2); root.right.right = new Node(1); /* 7 / \ 6 5 / \ / \ 4 3 2 1 */ int key = 4; reverseTreePath(root, key); inorder(root);}} // This code is contributed by pratham76.
# Python program for the above approach
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

# Function to print inorder
# traversal of the tree
def inorder(temp):
    if (temp == None):
        return
    inorder(temp.left)
    print(temp.data, end=" ")
    inorder(temp.right)

# Utility function to track
# root to leaf paths
def reverseTreePathUtil(root, path, pathLen, key):

    # Check if root is None then return
    if (root == None):
        return

    # Store the Node in path array
    path[pathLen] = root
    pathLen += 1

    # Check if we find the Node upto
    # which path needs to be
    # reversed
    if (root.data == key):

        # Current path array contains
        # the path which needs
        # to be reversed
        i = 0
        j = pathLen - 1

        # Swap the data of two Nodes
        while (i < j):
            temp = path[i].data
            path[i].data = path[j].data
            path[j].data = temp
            i += 1
            j -= 1

    # Check if the Node is a
    # leaf Node then return
    if (root.left == None and root.right == None):
        return

    # Call utility function for
    # left and right subtree
    # recursively
    reverseTreePathUtil(root.left, path, pathLen, key)
    reverseTreePathUtil(root.right, path, pathLen, key)

# Function to reverse tree path
def reverseTreePath(root, key):
    if (root == None):
        return

    # Initialize a list to store paths
    path = [None for i in range(50)]
    reverseTreePathUtil(root, path, 0, key)

# Driver Code
if __name__ == '__main__':
    root = Node(7)
    root.left = Node(6)
    root.right = Node(5)
    root.left.left = Node(4)
    root.left.right = Node(3)
    root.right.left = Node(2)
    root.right.right = Node(1)

    '''
            7
           / \
          6   5
         / \ / \
        4  3 2  1
    '''

    key = 4

    reverseTreePath(root, key)
    inorder(root)

# This code is contributed by umadevi9616
// C# program for the above approachusing System;using System.Collections.Generic; public class GFG{ public class Node { public int data; public Node left; public Node right; public Node(int value) { this.data = value; }}; // Function to print inorder// traversal of the treestatic void inorder(Node temp){ if (temp == null) return; inorder(temp.left); Console.Write(temp.data+" "); inorder(temp.right);} // Utility function to track// root to leaf pathsstatic void reverseTreePathUtil(Node root, List<Node> path, int pathLen, int key){ // Check if root is null then return if (root == null) return; // Store the node in path array path[pathLen]= root; pathLen++; // Check if we find the node upto // which path needs to be // reversed if (root.data == key) { // Current path array contains // the path which needs // to be reversed int i = 0, j = pathLen - 1; // Swap the data of two nodes while (i < j) { int temp = path[i].data; path[i].data = path[j].data; path[j].data = temp; i++; j--; } } // Check if the node is a // leaf node then return if (root.left == null && root.right == null) return; // Call utility function for // left and right subtree // recursively reverseTreePathUtil(root.left, path, pathLen, key); reverseTreePathUtil(root.right, path, pathLen, key);} // Function to reverse tree pathstatic void reverseTreePath(Node root, int key){ if (root == null) return; // Initialize a vector to store paths List<Node> path = new List<Node>(); for(int i = 0; i < 50; i++) { path.Add(null); } reverseTreePathUtil(root, path, 0, key);} // Driver Codepublic static void Main(String []args){ Node root = new Node(7); root.left = new Node(6); root.right = new Node(5); root.left.left = new Node(4); root.left.right = new Node(3); root.right.left = new Node(2); root.right.right = new Node(1); /* 7 / \ 6 5 / \ / \ 4 3 2 1 */ int key = 4; reverseTreePath(root, key); inorder(root);}} // This code is contributed by umadevi9616
<script> // JavaScript program for the above approach class Node{ constructor(value) { this.data=value; this.left=this.right=null; }} // Function to print inorder// traversal of the tree function inorder(temp){ if (temp == null) return; inorder(temp.left); document.write(temp.data+" "); inorder(temp.right);} // Utility function to track// root to leaf pathsfunction reverseTreePathUtil(root,path,pathLen,key){ // Check if root is null then return if (root == null) return; // Store the node in path array path[pathLen] = root; pathLen++; // Check if we find the node upto // which path needs to be // reversed if (root.data == key) { // Current path array contains // the path which needs // to be reversed let i = 0, j = pathLen - 1; // Swap the data of two nodes while (i < j) { let temp = path[i].data; path[i].data = path[j].data; path[j].data = temp; i++; j--; } } // Check if the node is a // leaf node then return if (root.left == null && root.right == null) return; // Call utility function for // left and right subtree // recursively reverseTreePathUtil(root.left, path, pathLen, key); reverseTreePathUtil(root.right, path, pathLen, key);} // Function to reverse tree pathfunction reverseTreePath(root,key){ if (root == null) return; // Initialize a vector to store paths let path = []; for(let i = 0; i < 50; i++) { path.push(null); } reverseTreePathUtil(root, path, 0, key);} // Driver Codelet root = new Node(7);root.left = new Node(6);root.right = new Node(5);root.left.left = new Node(4);root.left.right = new Node(3);root.right.left = new Node(2);root.right.right = new Node(1); /* 7 / \ 6 5 / \ / \ 4 3 2 1 */ let key = 4;reverseTreePath(root, key);inorder(root); // This code is contributed by avanitrachhadiya2155 </script>
7 6 3 4 2 5 1
Time Complexity: O(N)
Space Complexity: O(N), where N is the number of nodes in the tree.
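As a compact cross-check of the path-reversal idea, here is a self-contained Python sketch (the function and variable names are my own, not taken from the implementations above): it tracks the root-to-key path in a list and reverses the data of the nodes on it.

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = self.right = None

def reverse_path(root, key, path=None):
    # Track the root-to-node path; reverse node data once key is found.
    if path is None:
        path = []
    if root is None:
        return False
    path.append(root)
    if root.data == key:
        vals = [n.data for n in path]
        # Assign the path values back in reverse order.
        for node, v in zip(path, reversed(vals)):
            node.data = v
        return True
    # Go right only when the left subtree does not contain the key.
    found = reverse_path(root.left, key, path) or reverse_path(root.right, key, path)
    path.pop()
    return found

def inorder(root, out):
    if root:
        inorder(root.left, out)
        out.append(root.data)
        inorder(root.right, out)

root = Node(7); root.left = Node(6); root.right = Node(5)
root.left.left = Node(4); root.left.right = Node(3)
root.right.left = Node(2); root.right.right = Node(1)

reverse_path(root, 4)
out = []
inorder(root, out)
print(out)  # [7, 6, 3, 4, 2, 5, 1] -- matches the output above
```

The inorder traversal after reversing the path to 4 reproduces the expected output `7 6 3 4 2 5 1`.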
Random nextLong() method in Java with Examples - GeeksforGeeks | 07 Jan, 2019
The nextLong() method of the Random class returns the next pseudorandom, uniformly distributed long value from this random number generator’s sequence.
Syntax:
public long nextLong()
Parameters: The function does not accept any parameters.
Return Value: This method returns the next pseudorandom, uniformly distributed long value.
Exception: The function does not throw any exception.
The programs below demonstrate the above-mentioned function:
// program to demonstrate the
// function java.util.Random.nextLong()

import java.util.*;

public class GFG {
    public static void main(String[] args)
    {
        // create random object
        Random r = new Random();

        // get next long value and print the value
        System.out.println("Long value is = " + r.nextLong());
    }
}
Long value is = -9027907281942573746
// program to demonstrate the
// function java.util.Random.nextLong()

import java.util.*;

public class GFG {
    public static void main(String[] args)
    {
        // create random object
        Random r = new Random();

        // get next long value and print the value
        System.out.println("Long value is = " + r.nextLong());
    }
}
Long value is = -2817123387200223163
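For readers coming from Python, a rough analogue of this behavior can be sketched as follows. Note this is an assumption-laden stand-in: `next_long` below is a made-up helper name, and Java actually derives `nextLong()` from two 32-bit draws of its linear congruential generator, so this sketch is not bit-compatible with Java — it only reproduces the range and uniformity of a signed 64-bit value.

```python
import random

def next_long(rng):
    # Uniform signed 64-bit value: shift [0, 2**64) down to
    # [-2**63, 2**63), the range of Java's long type.
    return rng.getrandbits(64) - (1 << 63)

rng = random.Random(0)  # seeded for reproducibility
value = next_long(rng)
print("Long value is =", value)
```

Seeding the generator makes the sequence reproducible, which mirrors constructing `new Random(seed)` in Java.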
Python MySQL - LIKE() operator - GeeksforGeeks | 28 Apr, 2021
In this article, we will discuss the use of the LIKE operator in MySQL using the Python language.
Sometimes we may require tuples from the database which match certain patterns. For example, we may wish to retrieve all columns where the tuples start with the letter ‘y’, or start with ‘b’ and end with ‘l’, or even more complicated and restrictive string patterns. This is where the LIKE Clause comes to the rescue, often coupled with the WHERE Clause in SQL.
There are two kinds of wildcards used to filter out the results:
The percent sign (%): Used to match zero or more characters. (Variable Length)
The underscore sign (_): Used to match exactly one character. (Fixed Length)
Syntax:
SELECT column1, column2, ...,columnn
FROM table_name
WHERE columnn LIKE pattern;
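The same wildcard semantics can be tried without a MySQL server: SQLite's LIKE behaves the same way for these patterns. The sketch below uses made-up rows standing in for the article's itdept table (the names and addresses are hypothetical, not the article's data):

```python
import sqlite3

# In-memory database with a stand-in itdept table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE itdept (name TEXT, address TEXT)")
cur.executemany("INSERT INTO itdept VALUES (?, ?)",
                [("Hema", "Goa"), ("Ravi", "Delhi"), ("Hitha", "Gurgaon")])

# '%' matches any run of zero or more characters: addresses starting with G
cur.execute("SELECT name FROM itdept WHERE address LIKE 'G%'")
starts_with_g = [row[0] for row in cur.fetchall()]
print(starts_with_g)   # ['Hema', 'Hitha']

# '_' matches exactly one character, so '___' means three-letter addresses
cur.execute("SELECT address FROM itdept WHERE address LIKE '___'")
three_letters = [row[0] for row in cur.fetchall()]
print(three_letters)   # ['Goa']

conn.close()
```

Swapping the `sqlite3` connection for `mysql.connector.connect(...)` gives the same query results against a real MySQL table, as the examples below show.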
The following are the rules for pattern matching with the LIKE Clause:
In order to use LIKE operations we are going to use the below table:
Below are various examples that depict how to use LIKE operator in Python MySQL.
Example 1:
Program to display rows where the address starts with the letter G in the itdept table.
Python3
# import mysql.connector module
import mysql.connector

# establish connection
database = mysql.connector.connect(
    host="localhost",
    user="root",
    password="",
    database="gfg")

# creating cursor object
cur_object = database.cursor()
print("like operator address starts with G")

# query
find = "SELECT * from itdept where Address like 'G%' "

# execute the query
cur_object.execute(find)

# fetching all results
data = cur_object.fetchall()
for i in data:
    print(i[0], i[1], i[2], i[3], sep="--")

# Close database connection
database.close()
Output:
Example 2:
Here we display all the rows where the name begins with the letter H and ends with the letter A in the table.
Python3
# import mysql.connector module
import mysql.connector

# establish connection
database = mysql.connector.connect(
    host="localhost",
    user="root",
    password="",
    database="gfg")

# creating cursor object
cur_object = database.cursor()
print("like operator name starts with H and ends with A")

# query
find = "SELECT * from itdept where Name like 'H%A' "

# execute the query
cur_object.execute(find)

# fetching all results
data = cur_object.fetchall()
for i in data:
    print(i[0], i[1], i[2], i[3], sep="--")

# close database connection
database.close()
Output:
Example 3:
In this program, we display all the rows having three-lettered addressees in the table.
Python3
# import mysql.connector module
import mysql.connector

# establish connection
database = mysql.connector.connect(
    host="localhost",
    user="root",
    password="",
    database="gfg")

# creating cursor object
cur_object = database.cursor()
print("like operator address has three letters only")

# query
find = "SELECT * from itdept where Address like '___' "

# execute the query
cur_object.execute(find)

# fetching all results
data = cur_object.fetchall()
for i in data:
    print(i[0], i[1], i[2], i[3], sep="--")

# close database connection
database.close()
Output:
std::less in C++ with Examples - GeeksforGeeks | 01 May, 2020
std::less is a member of the <functional> header used for performing comparisons. It is defined as a function object class for the less-than inequality comparison, which returns a boolean value depending on the condition. It can be used to change the functionality of a given function, and it works with various standard algorithms such as sort and lower_bound.
Header File:
#include <functional>
Template Class:
template <class T> struct less {
// Declaration of the less operation
bool operator() (const T& x,
const T& y)
const
{
return x < y;
}
// Type of first parameter
typedef T first_argument_type;
// Type of second parameter
typedef T second_argument_type;
// The result is returned
// as bool type
typedef bool result_type;
};
Syntax:
std::less()
Parameter: The template parameter T is the type of the arguments to be compared by the function call.
Return Type: It returns a boolean value depending on the condition (let a and b be two elements):
True: If a is less than b.
False: If a is not less than b (i.e., a is greater than or equal to b).
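For comparison, the same semantics can be sketched in Python (an analogue only, not the C++ API): `operator.lt` plays the role of `std::less<int>()`, and `sorted()` orders by less-than by default, just as `std::sort` does with `std::less`.

```python
import operator

less = operator.lt        # analogue of std::less<int>()
print(less(1, 2))         # True:  1 is less than 2
print(less(2, -1))        # False: 2 is not less than -1

arr = [26, 23, 21, 22, 28, 27, 25, 24]
# sorted() uses < by default, like sort(arr, arr + N, less<int>())
print(sorted(arr))        # [21, 22, 23, 24, 25, 26, 27, 28]
```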
Below is the illustration of std::less in C++:
Program 1:
// C++ program to illustrate
// std::less function
#include <algorithm>
#include <functional>
#include <iostream>
using namespace std;

// Function to print array arr[]
void printArray(int arr[], int N)
{
    for (int i = 0; i < N; i++) {
        cout << arr[i] << ' ';
    }
}

// Driver Code
int main()
{
    int arr[] = { 26, 23, 21, 22, 28, 27, 25, 24 };
    int N = sizeof(arr) / sizeof(arr[0]);

    // Sort the array in increasing order
    sort(arr, arr + N, less<int>());

    // Print sorted array
    printArray(arr, N);
    return 0;
}
21 22 23 24 25 26 27 28
Program 2:
// C++ program to illustrate less
#include <iostream>
#include <algorithm>
#include <functional>
using namespace std;

// Template
template <typename A, typename B,
          typename U = std::less<int> >

// Function to check if a < b or not
bool f(A a, B b, U u = U())
{
    return u(a, b);
}

// Driver Code
int main()
{
    int X = 1, Y = 2;

    // If X is less than Y or not
    cout << std::boolalpha;
    cout << f(X, Y) << '\n';

    X = 2, Y = -1;

    // If X is less than Y or not
    cout << f(X, Y) << '\n';

    return 0;
}
true
false
Reference: http://www.cplusplus.com/reference/functional/less/
JavaFX | ToggleButton Class - GeeksforGeeks | 30 Aug, 2018
A ToggleButton is a special control having the ability to be selected. A ToggleButton is rendered similarly to a Button, but the two are different types of Controls. A Button is a “command” button that invokes a function when clicked, whereas a ToggleButton is a control with a Boolean state indicating whether it is selected. It inherits the ButtonBase class.
ToggleButtons can also be placed in groups. By default, a ToggleButton is not in a group. When in a group, only one ToggleButton at a time within that group can be selected. To put two ToggleButtons in the same group, simply assign them both the same ToggleGroup. Unlike RadioButtons, ToggleButtons in a ToggleGroup do not force at least one ToggleButton in the group to remain selected. This means that if a ToggleButton is selected, clicking on it will cause it to become unselected. With a RadioButton, clicking on the selected button in the group has no effect.
Constructors of the class:
ToggleButton(): Creates a toggle button with an empty string for its label.
ToggleButton(String txt): Creates a toggle button with the specified text as its label.
ToggleButton(String txt, Node graphic): Creates a toggle button with the specified text and icon for its label.
Commonly Used Methods:
Below programs illustrate the use of ToggleButton class:
Simple Java program to demonstrate ToggleButton Class: In this program, we try to select the gender of a person. We first create an HBox and set the layout for it. Create a ToggleGroup and new toggle buttons named “Male” and “Female”. Set the toggle group (in a group, only one button can be selected at a time) using the setToggleGroup() method. By default, the Male button is selected. Create the scene, set the scene on the stage using the setScene() method, and launch the application.

// Java program to demonstrate ToggleButton Class
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.ToggleButton;
import javafx.scene.control.ToggleGroup;
import javafx.scene.layout.HBox;
import javafx.stage.Stage;

public class ToggleButtonExample extends Application {

    public void start(Stage stage)
    {
        // Hbox layout
        HBox root = new HBox();
        root.setPadding(new Insets(10));
        root.setSpacing(5);

        // Gender
        root.getChildren().add(new Label("Your gender:"));

        // Creating a ToggleGroup
        ToggleGroup group = new ToggleGroup();

        // Creating new Toggle buttons.
        ToggleButton maleButton = new ToggleButton("Male");
        ToggleButton femaleButton = new ToggleButton("Female");

        // Set toggle group
        // In a group, maximum only
        // one button is selected
        maleButton.setToggleGroup(group);
        femaleButton.setToggleGroup(group);

        maleButton.setUserData("I am a Male");
        femaleButton.setUserData("I am a Female");

        // male button is selected at first by default
        maleButton.setSelected(true);

        root.getChildren().addAll(maleButton, femaleButton);

        // create the scene
        Scene scene = new Scene(root, 450, 300);
        stage.setTitle("Toggle Button");
        stage.setScene(scene);
        stage.show();
    }

    // Main Method
    public static void main(String[] args)
    {
        launch(args);
    }
}

Output:
https://media.geeksforgeeks.org/wp-content/uploads/20180825_200037.mp4

Java program to demonstrate ToggleButton Class using ChangeListener: In this program, we first create a label. Then we create toggle buttons using ToggleButton() and a toggle group using ToggleGroup(). Add all the toggle buttons to the ToggleGroup. Now create a ChangeListener for the ToggleGroup. A ChangeListener is notified whenever the value of an ObservableValue changes; an ObservableValue is an entity that wraps a value and allows observing the value for changes. Now create the label for the selection of the subjects. Create an HBox using HBox(), add the ToggleButtons to it, and set the spacing between the buttons using the setSpacing() method. Create the VBox and add the labels and the HBox to it. Set the size of the VBox and its padding and border styling (border-style, border-width, border-radius, border-insets, border-color). Create the scene and add it to the stage. Set the title of the stage and display it.

// Java program to demonstrate ToggleButton
// Class using ChangeListener
import javafx.application.Application;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.Toggle;
import javafx.scene.control.ToggleButton;
import javafx.scene.control.ToggleGroup;
import javafx.scene.layout.HBox;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;

public class ToggleButtonDemo extends Application {

    // Create the Message Label
    Label selectionMsg = new Label("Your selection: None");

    public void start(Stage stage)
    {
        // Create four ToggleButtons
        ToggleButton csBtn = new ToggleButton("Computer Science");
        ToggleButton pBtn = new ToggleButton("Physics");
        ToggleButton chemBtn = new ToggleButton("Chemistry");
        ToggleButton mBtn = new ToggleButton("Maths");

        // Create a ToggleGroup
        final ToggleGroup group = new ToggleGroup();

        // Add all ToggleButtons to a ToggleGroup
        group.getToggles().addAll(csBtn, pBtn, chemBtn, mBtn);

        // Create a ChangeListener for the ToggleGroup
        group.selectedToggleProperty().addListener(
            new ChangeListener<Toggle>() {
                public void changed(ObservableValue<? extends Toggle> ov,
                                    final Toggle toggle,
                                    final Toggle new_toggle)
                {
                    String toggleBtn = ((ToggleButton)new_toggle).getText();
                    selectionMsg.setText("Your selection: " + toggleBtn);
                }
            });

        // Create the Label for the Selection
        Label selectLbl = new Label("Select the subject :");

        // Create a HBox
        HBox buttonBox = new HBox();

        // Add ToggleButtons to an HBox
        buttonBox.getChildren().addAll(csBtn, pBtn, chemBtn, mBtn);

        // Set the spacing between children to 10px
        buttonBox.setSpacing(10);

        // Create the VBox
        VBox root = new VBox();

        // Add the Labels and HBox to the VBox
        root.getChildren().addAll(selectionMsg, selectLbl, buttonBox);

        // Set the spacing between children to 10px
        root.setSpacing(10);

        // Set the Size of the VBox
        root.setMinSize(350, 250);

        // Set the padding, border-style, border-width,
        // border-insets, border-radius and
        // border-color of the VBox
        root.setStyle("-fx-padding: 10;"
                      + "-fx-border-style: solid inside;"
                      + "-fx-border-width: 2;"
                      + "-fx-border-insets: 5;"
                      + "-fx-border-radius: 5;"
                      + "-fx-border-color: blue;");

        // Create the Scene
        Scene scene = new Scene(root);

        // Add the scene to the Stage
        stage.setScene(scene);

        // Set the title of the Stage
        stage.setTitle("A ToggleButton Example");

        // Display the Stage
        stage.show();
    }

    // Main Method
    public static void main(String[] args)
    {
        // launch the application
        Application.launch(args);
    }
}

Output:
https://media.geeksforgeeks.org/wp-content/uploads/20180825_185006-1.mp4

Note: The above programs might not run in an online IDE. Please use an offline compiler.

Reference: https://docs.oracle.com/javase/8/javafx/api/javafx/scene/control/ToggleButton.html
Simple Java program to demonstrate ToggleButton Class: In this program we are trying to select the gender of a person. We first Create HBox and then set the Layout for it. Create a ToggleGroup and new Toggle Buttons named”Male” and “Female“. Set the Toggle Group (in a group only one button can be selected at a time) using setToggleGroup() method. By default Male button is selected. Create the scene and set scene to the stage using setScene() method. And launch the application.// Java program to demonstrate ToggleButton Classimport javafx.application.Application;import javafx.geometry.Insets;import javafx.scene.Scene;import javafx.scene.control.Label;import javafx.scene.control.ToggleButton;import javafx.scene.control.ToggleGroup;import javafx.scene.layout.HBox;import javafx.stage.Stage; public class ToggleButtonExample extends Application { public void start(Stage stage) { // Hbox layout HBox root = new HBox(); root.setPadding(new Insets(10)); root.setSpacing(5); // Gender root.getChildren().add(new Label("Your gender:")); // Creating a ToggleGroup ToggleGroup group = new ToggleGroup(); // Creating new Toggle buttons. ToggleButton maleButton = new ToggleButton("Male"); ToggleButton femaleButton = new ToggleButton("Female"); // Set toggle group // In a group, maximum only // one button is selected maleButton.setToggleGroup(group); femaleButton.setToggleGroup(group); maleButton.setUserData("I am a Male"); femaleButton.setUserData("I am a Female"); // male button is selected at first by default maleButton.setSelected(true); root.getChildren().addAll(maleButton, femaleButton); // create the scene Scene scene = new Scene(root, 450, 300); stage.setTitle("Toggle Button"); stage.setScene(scene); stage.show(); } // Main Method public static void main(String[] args) { launch(args); }}Output:Video Playerhttps://media.geeksforgeeks.org/wp-content/uploads/20180825_200037.mp400:0000:0000:05Use Up/Down Arrow keys to increase or decrease volume.
// Java program to demonstrate ToggleButton Class
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.ToggleButton;
import javafx.scene.control.ToggleGroup;
import javafx.scene.layout.HBox;
import javafx.stage.Stage;

public class ToggleButtonExample extends Application {

    public void start(Stage stage)
    {
        // HBox layout
        HBox root = new HBox();
        root.setPadding(new Insets(10));
        root.setSpacing(5);

        // Gender
        root.getChildren().add(new Label("Your gender:"));

        // Creating a ToggleGroup
        ToggleGroup group = new ToggleGroup();

        // Creating new Toggle buttons
        ToggleButton maleButton = new ToggleButton("Male");
        ToggleButton femaleButton = new ToggleButton("Female");

        // Set toggle group
        // In a group, maximum only one button is selected
        maleButton.setToggleGroup(group);
        femaleButton.setToggleGroup(group);

        maleButton.setUserData("I am a Male");
        femaleButton.setUserData("I am a Female");

        // male button is selected at first by default
        maleButton.setSelected(true);

        root.getChildren().addAll(maleButton, femaleButton);

        // create the scene
        Scene scene = new Scene(root, 450, 300);
        stage.setTitle("Toggle Button");
        stage.setScene(scene);
        stage.show();
    }

    // Main Method
    public static void main(String[] args)
    {
        launch(args);
    }
}
Output:
Java program to demonstrate ToggleButton Class using ChangeListener: In this program, we first create a label. Then we create Toggle Buttons using ToggleButton() and a Toggle Group using ToggleGroup(). Add all the Toggle Buttons to the ToggleGroup. Now create a ChangeListener for the ToggleGroup. A ChangeListener is notified whenever the value of an ObservableValue changes. An ObservableValue is an entity that wraps a value and allows observing the value for changes. Now, create the label for the selection of the subjects. Create a HBox using HBox() and add the ToggleButtons to it. Set the spacing between the buttons using the setSpacing() method. Create the VBox, add the labels and HBox to the VBox. Set the size of the VBox and set the padding of the VBox (e.g. border-style, border-width, border-radius, border-insets, border-color). Create the scene and add it to the stage. Set the title of the stage and display it.
// Java program to demonstrate ToggleButton
// Class using ChangeListener
import javafx.application.Application;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.Toggle;
import javafx.scene.control.ToggleButton;
import javafx.scene.control.ToggleGroup;
import javafx.scene.layout.HBox;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;

public class ToggleButtonDemo extends Application {

    // Create the Message Label
    Label selectionMsg = new Label("Your selection: None");

    public void start(Stage stage)
    {
        // Create four ToggleButtons
        ToggleButton csBtn = new ToggleButton("Computer Science");
        ToggleButton pBtn = new ToggleButton("Physics");
        ToggleButton chemBtn = new ToggleButton("Chemistry");
        ToggleButton mBtn = new ToggleButton("Maths");

        // Create a ToggleGroup
        final ToggleGroup group = new ToggleGroup();

        // Add all ToggleButtons to a ToggleGroup
        group.getToggles().addAll(csBtn, pBtn, chemBtn, mBtn);

        // Create a ChangeListener for the ToggleGroup
        group.selectedToggleProperty().addListener(
            new ChangeListener<Toggle>() {
                public void changed(ObservableValue<? extends Toggle> ov,
                                    final Toggle toggle,
                                    final Toggle new_toggle)
                {
                    String toggleBtn = ((ToggleButton)new_toggle).getText();
                    selectionMsg.setText("Your selection: " + toggleBtn);
                }
            });

        // Create the Label for the Selection
        Label selectLbl = new Label("Select the subject :");

        // Create a HBox
        HBox buttonBox = new HBox();

        // Add ToggleButtons to an HBox
        buttonBox.getChildren().addAll(csBtn, pBtn, chemBtn, mBtn);

        // Set the spacing between children to 10px
        buttonBox.setSpacing(10);

        // Create the VBox
        VBox root = new VBox();

        // Add the Labels and HBox to the VBox
        root.getChildren().addAll(selectionMsg, selectLbl, buttonBox);

        // Set the spacing between children to 10px
        root.setSpacing(10);

        // Set the Size of the VBox
        root.setMinSize(350, 250);

        // Set the padding, border-style, border-width,
        // border-insets, border-radius and border-color of the VBox
        root.setStyle("-fx-padding: 10;"
                      + "-fx-border-style: solid inside;"
                      + "-fx-border-width: 2;"
                      + "-fx-border-insets: 5;"
                      + "-fx-border-radius: 5;"
                      + "-fx-border-color: blue;");

        // Create the Scene
        Scene scene = new Scene(root);

        // Add the scene to the Stage
        stage.setScene(scene);

        // Set the title of the Stage
        stage.setTitle("A ToggleButton Example");

        // Display the Stage
        stage.show();
    }

    // Main Method
    public static void main(String[] args)
    {
        // launch the application
        Application.launch(args);
    }
}
Output:
Note: The above programs might not run in an online IDE. Please use an offline compiler.
Reference: https://docs.oracle.com/javase/8/javafx/api/javafx/scene/control/ToggleButton.html
| [
{
"code": null,
"e": 25225,
"s": 25197,
"text": "\n30 Aug, 2018"
},
{
"code": null,
"e": 25590,
"s": 25225,
"text": "A ToggleButton is a special control having the ability to be selected. Basically, ToggleButton is rendered similarly to a Button but these two are the different ty... |
Gradient Descent Training With Logistic Regression | by Rina Buoy | Towards Data Science | The gradient descent algorithm and its variants (Adam, SGD, etc.) have become very popular training (optimisation) algorithms in many machine learning applications. Optimisation algorithms can be informally grouped into two categories: gradient-based and gradient-free (e.g. particle swarm, genetic algorithms). As you can guess, gradient descent is a gradient-based algorithm. Why is the gradient important in training machine learning models?
The objective of training a machine learning model is to minimize the loss or error between ground truths and predictions by changing the trainable parameters. The gradient, which is the extension of the derivative to multi-dimensional space, tells the direction along which the loss or error is optimally minimized. If you recall from vector calculus class, the gradient gives the direction of the maximum rate of change. Therefore, the formula for gradient descent is simply:
θj := θj - α · ∂J(θ)/∂θj
θj is the j-th trainable parameter. α is the learning rate. J(θ) is the cost function.
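The update rule can be made concrete with a tiny numeric sketch. The quadratic cost J(θ) = (θ - 3)^2, its gradient 2(θ - 3), the starting point, and the learning rate below are illustrative assumptions, not anything from the article:

```python
# Gradient descent on the toy cost J(theta) = (theta - 3)^2.
# Its gradient is dJ/dtheta = 2 * (theta - 3), so the update
# theta := theta - alpha * dJ/dtheta walks theta toward the minimum at 3.

def gradient_descent(grad, theta0, alpha=0.1, steps=100):
    theta = theta0
    for _ in range(steps):
        theta -= alpha * grad(theta)  # step against the gradient
    return theta

theta_star = gradient_descent(lambda t: 2 * (t - 3), theta0=0.0)
print(theta_star)  # very close to 3.0
```

With a learning rate that is too large the iterates can overshoot and diverge, which is exactly the convergence trade-off discussed later in the article.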
In the figure below, the shortest path from the starting point (the peak) to the optimum (the valley) is along the gradient trajectory. The same principle applies in multi-dimensional space, which is generally the case for machine learning training.
To demonstrate how gradient descent is applied in machine learning training, we’ll use logistic regression.
To understand how LR works, let’s imagine the following scenario: we want to predict the sex of a person (male = 0, female = 1) based on age (x1), annual income (x2) and education level (x3). If Y is the predicted value, a logistic regression model for this problem would take the form:
Z = b0 + b1(x1) + b2(x2) + b3(x3)
Y = 1.0 / (1.0 + e^-Z)
b0 is often called ‘bias’ and b1, b2 and b3 are called ‘weights’.
Z has the same form as a linear regression, while Y is a sigmoid activation function. Y takes a value between 0 and 1. If Y is less than 0.5, we conclude the predicted output is 0, and if Y is greater than 0.5, we conclude the output is 1.
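The Z and Y formulas can be sketched in plain Python. The coefficient values b0 through b3 and the example inputs below are invented purely for illustration:

```python
import math

# Hypothetical logistic regression coefficients; the values are made up.
def predict(x1, x2, x3, b0=-1.0, b1=0.02, b2=0.00001, b3=0.1):
    z = b0 + b1 * x1 + b2 * x2 + b3 * x3   # linear part: Z
    y = 1.0 / (1.0 + math.exp(-z))         # sigmoid: Y lies in (0, 1)
    return y

# age = 30, annual income = 50000, education level = 4
y = predict(x1=30, x2=50000, x3=4)
print(y, "-> predicted class", 1 if y > 0.5 else 0)
```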
Now, we are ready to look at a more formal form of LR below:
Φn is the augmented transformation of Xn in feature space. tn is the class label. σ is the sigmoid activation. W is the weight vector (including the bias term). p(C1|Φ) and p(C2|Φ) are the probabilities of assigning to C1 and C2 given Φ, respectively.
Given the above formulation, the main goal here is to maximise the likelihood of observing the data given the weight (W). The likelihood function is a joint-distribution of the observed data and is given below:
Π is a product operator.
From the likelihood function, it can be observed that y is Bernoulli distributed.
When working with probability, it is desirable to convert to logarithm since logarithm turns a product into a sum and thus avoid the issue of taking a product with a very small number(typically for probability). Below are the negative log-likelihood (NLL) and its gradient with respect to weights. NLL is used to turn a maximization into a minimization problem. Essentially, minimizing NLL is equivalent to maximizing the likelihood.
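Taking the log of the Bernoulli likelihood gives NLL(W) = -Σn [ tn·log(yn) + (1 - tn)·log(1 - yn) ], which can be sketched in a few lines of Python (the predictions and labels below are invented for illustration):

```python
import math

def binary_nll(y_pred, targets):
    # Negative log-likelihood of Bernoulli-distributed labels:
    # -sum over n of [ t_n * log(y_n) + (1 - t_n) * log(1 - y_n) ]
    return -sum(t * math.log(y) + (1 - t) * math.log(1 - y)
                for y, t in zip(y_pred, targets))

loss = binary_nll([0.9, 0.2, 0.8], [1, 0, 1])
print(loss)  # small, because the predictions agree with the labels
```

Confident, correct predictions give a small NLL, while confident, wrong predictions are penalised heavily, which is why minimizing NLL matches maximizing the likelihood.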
The binary case of LR can be extended to the multiclass case with some changes of notation.
Let’s assume there is K class. So, p(Ck) is the probability of assigning to class k given Φ.
Instead of sigmoid activation, softmax activation is used to convert class score (ak) into proper probability.
W is a weight matrix ( DxK) — D is feature space dimension.
The likelihood function and negative likelihood (NLL) are given below.
y is now Multinoulli distributed.
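Softmax itself is easy to sketch in plain Python; the class scores below are made up for illustration:

```python
import math

def softmax(scores):
    # Exponentiate each class score a_k, then normalise so the outputs
    # are positive and sum to 1 (a proper probability vector).
    exps = [math.exp(a) for a in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # the largest score receives the largest probability
```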
MNIST is a classical dataset, which consists of black-and-white images of hand-drawn digits (between 0 and 9). We will implement a multiclass logistic regression to classify digits in MNIST using PyTorch. Since we want to demonstrate the gradient descent algorithm, we do not use the built-in algorithms from torch.optim. Rather than calculating gradients manually, we will use torch.autograd for simplicity. This demo is taken from the PyTorch website.
The below codes download the dataset (train and validation set) and also convert into respective numpy arrays.
from pathlib import Path
import requests
import pickle
import gzip

DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True, exist_ok=True)

URL = "http://deeplearning.net/data/mnist/"
FILENAME = "mnist.pkl.gz"

if not (PATH / FILENAME).exists():
    content = requests.get(URL + FILENAME).content
    (PATH / FILENAME).open("wb").write(content)

with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
    ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")
When working with PyTorch, we need to convert the above numpy arrays to tensors.
import torch

x_train, y_train, x_valid, y_valid = map(
    torch.tensor, (x_train, y_train, x_valid, y_valid)
)
Next, we will create and initialize weights and bias tensors. We use Xavier initialisation for weights while initializing bias with zero value. Since we want torch.autograd to take care of gradient calculations, we need to set requires_grad to True so that PyTorch can keep track of operations which are required for gradient calculations.
import math

weights = torch.randn(784, 10) / math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
We need to evaluate the likelihood which is yk(Φ). To get yk(Φ), we first need to evaluate ak. Instead of returning yk(Φ), it returns log(yk(Φ)) which is useful for calculating loss function later.
def log_softmax(x):
    return x - x.exp().sum(-1).log().unsqueeze(-1)

def model(xb):
    return log_softmax(xb @ weights + bias)
Now, we can use the likelihood to compute the overall negative log-likelihood which is the loss function of MNIST logistic regression.
def nll(input, target):
    return -input[range(target.shape[0]), target].mean()
Until now, we have implemented all the necessary functions needed for training MNIST logistic regression. We will implement mini-batch training.
n = x_train.shape[0]  # number of training examples
bs = 64               # batch size
loss_func = nll
lr = 0.5              # learning rate
epochs = 2            # how many epochs to train for

for epoch in range(epochs):
    for i in range((n - 1) // bs + 1):
        start_i = i * bs
        end_i = start_i + bs
        xb = x_train[start_i:end_i]
        yb = y_train[start_i:end_i]
        pred = model(xb)
        loss = loss_func(pred, yb)

        loss.backward()
        with torch.no_grad():
            weights -= weights.grad * lr
            bias -= bias.grad * lr
            weights.grad.zero_()
            bias.grad.zero_()
.backward() on loss_func does all the gradient calculations which are required for parameter update. Once gradients are computed with .backward(), weights and bias are updated by the product of gradient and learning rate. Learning rate (LR) is used to control the convergence. Large LR can overshoot while small LR can slow down the convergence.
Once weights and bias are updated, their gradients are set to zero; otherwise, gradients are accumulated in the next batches.
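This accumulation behaviour is easy to see in a tiny sketch (assuming PyTorch is installed, as elsewhere in this article):

```python
import torch

# .backward() *adds* to .grad rather than overwriting it, which is why
# the training loop zeroes the gradients after every parameter update.
w = torch.tensor(2.0, requires_grad=True)

(w * 3).backward()
print(w.grad)       # tensor(3.)

(w * 3).backward()  # second pass accumulates: 3 + 3
print(w.grad)       # tensor(6.)

w.grad.zero_()      # reset before the next batch
print(w.grad)       # tensor(0.)
```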
The gradient descent implemented above is very basic, yet enough to demonstrate how it works. Modern machine learning frameworks like PyTorch and TensorFlow have far more sophisticated variants of gradient descent like SGD, Adam etc. Still, understanding how gradient descent works is beneficial when we need to train machine learning models.
The equations in this article are taken from ‘ Pattern recognition and machine learning’ by Christopher M. Bishop. | [
{
"code": null,
"e": 604,
"s": 171,
"text": "Gradient descent algorithm and its variants ( Adam, SGD etc. ) have become very popular training (optimisation) algorithm in many machine learning applications. Optimisation algorithms can be informally grouped into two categories — gradient-based and gra... |
CSS3 - Animation | Animation is the process of making shape changes and creating motions with elements.
Keyframes will control the intermediate animation steps in CSS3.
@keyframes animation {
from {background-color: pink;}
to {background-color: green;}
}
div {
width: 100px;
height: 100px;
background-color: red;
animation-name: animation;
animation-duration: 5s;
}
The above example sets the height, width and background color of the element, along with the name and duration of the animation, using the @keyframes syntax.
<html>
<head>
<style type = "text/css">
h1 {
-moz-animation-duration: 3s;
-webkit-animation-duration: 3s;
-moz-animation-name: slidein;
-webkit-animation-name: slidein;
}
@-moz-keyframes slidein {
from {
margin-left:100%;
width:300%
}
to {
margin-left:0%;
width:100%;
}
}
@-webkit-keyframes slidein {
from {
margin-left:100%;
width:300%
}
to {
margin-left:0%;
width:100%;
}
}
</style>
</head>
<body>
<h1>Tutorials Point</h1>
<p>this is an example of moving left animation .</p>
<button onclick = "myFunction()">Reload page</button>
<script>
function myFunction() {
location.reload();
}
</script>
</body>
</html>
It will produce the following result −
this is an example of moving left animation .
<html>
<head>
<style type = "text/css">
h1 {
-moz-animation-duration: 3s;
-webkit-animation-duration: 3s;
-moz-animation-name: slidein;
-webkit-animation-name: slidein;
}
@-moz-keyframes slidein {
from {
margin-left:100%;
width:300%
}
75% {
font-size:300%;
margin-left:25%;
width:150%;
}
to {
margin-left:0%;
width:100%;
}
}
@-webkit-keyframes slidein {
from {
margin-left:100%;
width:300%
}
75% {
font-size:300%;
margin-left:25%;
width:150%;
}
to {
margin-left:0%;
width:100%;
}
}
</style>
</head>
<body>
<h1>Tutorials Point</h1>
<p>This is an example of animation left with an extra keyframe
to make text changes.</p>
<button onclick = "myFunction()">Reload page</button>
<script>
function myFunction() {
location.reload();
}
</script>
</body>
</html>
It will produce the following result −
This is an example of animation left with an extra keyframe to make text changes.
| [
{
"code": null,
"e": 2707,
"s": 2626,
"text": "Animation is process of making shape changes and creating motions with elements."
},
{
"code": null,
"e": 2772,
"s": 2707,
"text": "Keyframes will control the intermediate animation steps in CSS3."
},
{
"code": null,
"e":... |
Annotations in Java - GeeksforGeeks | 23 Feb, 2022
Annotations are used to provide supplemental information about a program.
Annotations start with ‘@’.
Annotations do not change the action of a compiled program.
Annotations help to associate metadata (information) to the program elements i.e. instance variables, constructors, methods, classes, etc.
Annotations are not pure comments as they can change the way a program is treated by the compiler. See below code for example.
Annotations basically are used to provide additional information, so could be an alternative to XML and Java marker interfaces.
Implementation:
Note: This program throws a compile-time error because we have used @Override but have not actually overridden display(); we have overloaded it instead.
Example:
// Java Program to Demonstrate that Annotations
// are Not Barely Comments

// Class 1
class Base {
    // Method
    public void display()
    {
        System.out.println("Base display()");
    }
}

// Class 2
// Main class
class Derived extends Base {
    // Overriding method as already up in above class
    @Override
    public void display(int x)
    {
        // Print statement when this method is called
        System.out.println("Derived display(int )");
    }

    // Method 2
    // Main driver method
    public static void main(String args[])
    {
        // Creating object of this class inside main()
        Derived obj = new Derived();

        // Calling display() method inside main()
        obj.display();
    }
}
Output:
10: error: method does not override or implement
a method from a supertype
If we remove the parameter (int x) or remove @Override, the program compiles fine.
There are broadly 5 categories of annotations as listed:
Marker Annotations
Single value Annotations
Full Annotations
Type Annotations
Repeating Annotations
Let us discuss each of these, appending code examples wherever required.
The only purpose is to mark a declaration. These annotations contain no members and do not consist of any data. Thus, its presence as an annotation is sufficient. Since a marker annotation contains no members, simply determining whether it is present or absent is sufficient. @Override is an example of a marker annotation.
Example
@TestAnnotation()
These annotations contain only one member and allow a shorthand form of specifying the value of the member. We only need to specify the value for that member when the annotation is applied and don’t need to specify the name of the member. However, in order to use this shorthand, the name of the member must be value.
Example
@TestAnnotation(“testing”);
These annotations consist of multiple data members, names, values, pairs.
Example
@TestAnnotation(owner=”Rahul”, value=”Class Geeks”)
These annotations can be applied to any place where a type is being used. For example, we can annotate the return type of a method. These are declared with the @Target(ElementType.TYPE_USE) annotation.
Example
// Java Program to Demonstrate Type Annotation

// Importing required classes
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// Using target annotation to annotate a type
@Target(ElementType.TYPE_USE)

// Declaring a simple type annotation
@interface TypeAnnoDemo{}

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Annotating the type of a string
        @TypeAnnoDemo String string = "I am annotated with a type annotation";
        System.out.println(string);
        abc();
    }

    // Annotating return type of a function
    static @TypeAnnoDemo int abc()
    {
        System.out.println("This function's return type is annotated");
        return 0;
    }
}
I am annotated with a type annotation
This function's return type is annotated
These are the annotations that can be applied to a single item more than once. For an annotation to be repeatable it must be annotated with the @Repeatable annotation, which is defined in the java.lang.annotation package. Its value field specifies the container type for the repeatable annotation. The container is specified as an annotation whose value field is an array of the repeatable annotation type. Hence, to create a repeatable annotation, firstly the container annotation is created, and then the annotation type is specified as an argument to the @Repeatable annotation.
Example:
// Java Program to Demonstrate a Repeatable Annotation

// Importing required classes
import java.lang.annotation.Annotation;
import java.lang.annotation.Repeatable;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Make Words annotation repeatable
@Retention(RetentionPolicy.RUNTIME)
@Repeatable(MyRepeatedAnnos.class)
@interface Words
{
    String word() default "Hello";
    int value() default 0;
}

// Create container annotation
@Retention(RetentionPolicy.RUNTIME)
@interface MyRepeatedAnnos
{
    Words[] value();
}

public class Main {

    // Repeat Words on newMethod
    @Words(word = "First", value = 1)
    @Words(word = "Second", value = 2)
    public static void newMethod()
    {
        Main obj = new Main();

        try {
            Class<?> c = obj.getClass();

            // Obtain the annotation for newMethod
            Method m = c.getMethod("newMethod");

            // Display the repeated annotation
            Annotation anno = m.getAnnotation(MyRepeatedAnnos.class);
            System.out.println(anno);
        }
        catch (NoSuchMethodException e) {
            System.out.println(e);
        }
    }

    public static void main(String[] args)
    {
        newMethod();
    }
}
@MyRepeatedAnnos(value={@Words(value=1, word="First"), @Words(value=2, word="Second")})
Java popularly defines seven built-in annotations as we have seen up in the hierarchy diagram.
Four are imported from java.lang.annotation: @Retention, @Documented, @Target, and @Inherited.
Three are included in java.lang: @Deprecated, @Override and @SuppressWarnings
It is a marker annotation. It indicates that a declaration is obsolete and has been replaced by a newer form.
The Javadoc @deprecated tag should be used when an element has been deprecated.
@deprecated tag is for documentation and @Deprecated annotation is for runtime reflection.
@deprecated tag has higher priority than @Deprecated annotation when both are together used.
Example:
public class DeprecatedTest
{
    @Deprecated
    public void Display()
    {
        System.out.println("Deprecatedtest display()");
    }

    public static void main(String args[])
    {
        DeprecatedTest d1 = new DeprecatedTest();
        d1.Display();
    }
}
Deprecatedtest display()
It is a marker annotation that can be used only on methods. A method annotated with @Override must override a method from a superclass. If it doesn’t, a compile-time error will result (see the earlier example). It is used to ensure that a superclass method is actually overridden, and not simply overloaded.
Example
// Java Program to Illustrate Override Annotation

// Class 1
class Base
{
    public void Display()
    {
        System.out.println("Base display()");
    }

    public static void main(String args[])
    {
        Base t1 = new Derived();
        t1.Display();
    }
}

// Class 2
// Extending above class
class Derived extends Base
{
    @Override
    public void Display()
    {
        System.out.println("Derived display()");
    }
}
Derived display()
It is used to inform the compiler to suppress specified compiler warnings. The warnings to suppress are specified by name, in string form. This type of annotation can be applied to any type of declaration.
Java groups warnings under two categories. They are deprecated and unchecked. Any unchecked warning is generated when a legacy code interfaces with a code that uses generics.
Example:
// Java Program to illustrate SuppressWarnings Annotation

// Class 1
class DeprecatedTest
{
    @Deprecated
    public void Display()
    {
        System.out.println("Deprecatedtest display()");
    }
}

// Class 2
public class SuppressWarningTest
{
    // If we comment below annotation, program generates warning
    @SuppressWarnings({"checked", "deprecation"})
    public static void main(String args[])
    {
        DeprecatedTest d1 = new DeprecatedTest();
        d1.Display();
    }
}
Deprecatedtest display()
It is a marker annotation that tells a tool that an annotation is to be documented. Annotations are not included in ‘Javadoc’ comments by default. The use of the @Documented annotation in the code enables tools like Javadoc to process it and include the annotation type information in the generated document.
It is designed to be used only as an annotation to another annotation. @Target takes one argument, which must be constant from the ElementType enumeration. This argument specifies the type of declarations to which the annotation can be applied. The constants are shown below along with the type of the declaration to which they correspond.
We can specify one or more of these values in a @Target annotation. To specify multiple values, we must specify them within a braces-delimited list. For example, to specify that an annotation applies only to fields and local variables, you can use this @Target annotation: @Target({ElementType.FIELD, ElementType.LOCAL_VARIABLE})
@Retention: It determines where and how long the annotation is retained. The 3 values that the @Retention annotation can have:
SOURCE: Annotations will be retained at the source level and ignored by the compiler.
CLASS: Annotations will be retained at compile-time and ignored by the JVM.
RUNTIME: These will be retained at runtime.
@Inherited is a marker annotation that can be used only on annotation declaration. It affects only annotations that will be used on class declarations. @Inherited causes the annotation for a superclass to be inherited by a subclass. Therefore, when a request for a specific annotation is made to the subclass, if that annotation is not present in the subclass, then its superclass is checked. If that annotation is present in the superclass, and if it is annotated with @Inherited, then that annotation will be returned.
User-defined annotations can be used to annotate program elements, i.e. variables, constructors, methods, etc. These annotations can be applied just before the declaration of an element (constructor, method, classes, etc).
Syntax: Declaration
[Access Specifier] @interface <AnnotationName>
{
DataType <Method Name>() [default value];
}
Keep the following points in mind as rules for custom annotations before implementing user-defined annotations.
AnnotationName is an interface.
The parameter should not be associated with method declarations and throws clause should not be used with method declaration.
Parameters will not have a null value but can have a default value.
default value is optional.
The return type of method should be either primitive, enum, string, class name, or array of primitive, enum, string, or class name type.
Example:
// Java Program to Demonstrate User-defined Annotations
package source;

import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// User-defined annotation
// will be retained at runtime
@Documented
@Retention(RetentionPolicy.RUNTIME)
@interface TestAnnotation
{
    String Developer() default "Rahul";
    String Expirydate();
}

// Driver class that uses @TestAnnotation
public class Test
{
    @TestAnnotation(Developer="Rahul", Expirydate="01-10-2020")
    void fun1()
    {
        System.out.println("Test method 1");
    }

    @TestAnnotation(Developer="Anil", Expirydate="01-10-2021")
    void fun2()
    {
        System.out.println("Test method 2");
    }

    public static void main(String args[])
    {
        System.out.println("Hello");
    }
}
Output:
Hello
This article is contributed by Rahul Agrawal.
| [
{
"code": null,
"e": 25487,
"s": 25459,
"text": "\n23 Feb, 2022"
},
{
"code": null,
"e": 25562,
"s": 25487,
"text": "Annotations are used to provide supplemental information about a program. "
},
{
"code": null,
"e": 25590,
"s": 25562,
"text": "Annotations sta... |
C++ List Library - sort() Function | The C++ function std::list::sort() sorts the elements of the list. The order of equal elements is preserved. It uses comparison function to compare values.
Following is the declaration for std::list::sort() function form std::list header.
template <class Compare>
void sort (Compare comp);
Parameters

comp − comparison function object which returns a boolean. It has the following prototype.

bool cmp(const Type1 &arg1, const Type2 &arg2);

Return Value

None

Exceptions

This member function never throws an exception.

Time Complexity

Approximately N·log(N) comparisons, i.e. O(n log n)
The following example shows the usage of std::list::sort() function.
#include <iostream>
#include <list>
using namespace std;
bool comp(int a, int b) {
return (a > b);
}
int main(void) {
list<int> l = {1, 4, 2, 5, 3};
cout << "Contents of list before sort operation" << endl;
for (auto it = l.begin(); it != l.end(); ++it)
cout << *it << endl;
/* Descending sort */
l.sort(comp);
cout << "Contents of list after sort operation" << endl;
for (auto it = l.begin(); it != l.end(); ++it)
cout << *it << endl;
return 0;
}
Let us compile and run the above program, this will produce the following result −
Contents of list before sort operation
1
4
2
5
3
Contents of list after sort operation
5
4
3
2
1
How To Train Your Neural Network: The Hidden Code | by Murto Hilali | Towards Data Science

On the flip side, you’re far more likely to have encountered some form of artificial intelligence in your life than you have a dragon — or maybe you have, who knows? I don’t know your life.
Needless to say, AI has become an integral part of our lives, from the Netflix algorithm magically making your weekend disappear to Google Duplex making phone calls for you so you never have to deal with the anxiety of booking an appointment over the phone ever again (human interaction? *gasp*).
But AI is on the path to solving bigger problems than ever before, ranging from tumour detection to self-driving cars. The world is changing rapidly, so it’s important you know how the future is going to work. After all, how the heck did AI get to be so smart in the first place?
The dragons in How to Train Your Dragon and the humans of Earth (that’s us) have a lot in common. We’re both pretty smart, but we don’t start out like that.
Growing up, we’re fed information about the world around us — from our parents, friends, teachers, (in the dragons’ case, trainers) etc. Using this information, we try to make some models for making good choices for ourselves and others (unless you’re some sort of anarchist, in which case Viva la Revolution, I guess?)
Sometimes, the choices we make don’t turn out so well. For instance, not buying Facebook stock in May 2012. Or buying Facebook stock in July 2018. But we learn from our mistakes and adjust our decision-making process accordingly, so we make better decisions in the future. Good for us!
Using some not-too-complex mathematics, computer scientists have developed a process called machine learning that does basically the same thing. Machine learning describes the method of feeding a neural network lots of data, letting it find patterns in it, then having it make choices and predictions based on them. But much like dragons, we have to train these systems first.
So what is a neural network, and how exactly does it learn?
Neural networks are systems of neurons or fancy perceptrons. A perceptron takes in multiple input values and produces a singular output, through a whole process called forward propagation — on a high level, think Power Rangers coming together to form a Megazord.
Let’s say It’s Morphin Time is the function performed on the input values, the Power Rangers, and each of them has a specific value that just corresponds to them, for example:
x1 = 0
x2 = 1
x3 = 0
Now let’s say the Black Power Ranger is more significant than the others. His input will have more importance given to it, or weight. In fact, all of the Rangers have their own respective weights:
w1 = 1
w2 = 2
w3 = 3
Then the neuron is assigned a bias which gives it the ability to change after some iterations to better fit your data. By adding together all the input values multiplied by their respective weight, as well as accounting for b = bias, we get:
(w1 × x1) + (w2 × x2) + (w3 × x3) + (1 × b )
What makes this neuron really useful is an activation function. This basically takes the sum of what we got up there and uses it as the argument (or input) for a transformation function, making it non-linear. This is done with functions like Sigmoid or other complex models of polynomial regression.
Since a lot of data doesn’t always fit into a linear relationship, this helps our neural network model to fit the data better and is the special sauce that turns the Rangers into a Megazord. Or maybe it’s alien technology. I can’t remember.
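To make the forward pass concrete, here is a tiny sketch using the Ranger values above; the bias value 0.5 is an arbitrary choice for illustration:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1 / (1 + np.exp(-z))

x = np.array([0, 1, 0])   # input values (the Rangers)
w = np.array([1, 2, 3])   # their respective weights
b = 0.5                   # bias (arbitrary value for illustration)

z = np.dot(w, x) + b      # (w1 * x1) + (w2 * x2) + (w3 * x3) + b = 2.5
output = sigmoid(z)       # non-linear activation
print(round(output, 3))   # 0.924
```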
One neuron on its own isn’t really all that useful. But when we use a whole bunch of them together, we can form a network of neurons, almost like a neural network... :)
These networks can be thought of in layers, with an input layer where all the data comes, an output layer that gives you your prediction, and some hidden layers in between.
All the neurons (or nodes) in a layer connect back to all the neurons in the previous layer. They eventually produce an output through this fancy forward propagation we’ve been talking about. The real magic, however, comes from backpropagation.
Backpropagation is the process of determining the error, or loss, at the output, then going back into the network to adjust weights and biases based on that error; a common algorithm used to achieve this is gradient descent, which involves finding the derivative of the loss with respect to each weight and multiplying it by your learning rate, which decides how fast you want to optimize.
In essence, we find the derivative of the loss function and try to minimize it, thereby minimizing error. Make sense?
Every round of this process, or training iteration, is called an epoch. Sounds badass, I know. For more difficult datasets, we would require numerous epochs, like all the different versions of Power Rangers (there’s like what, a million varieties?)
We can build a very simple neural network in Python right now, one that does binary classification, or prediction of 1 or 0. Note: I’m using Python 3.6
Of course, we can’t start without our good friend Numpy:
import numpy as np
Let’s set X as our input array and y as our output array:
X = np.array([[0, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 1]])
y = np.array([[0], [1], [1]])
Let’s add our Sigmoid function (this will be part of forward propagation) and its derivative (this will be part of backpropagation):
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def derivatives_sigmoid(x):
    return x * (1 - x)
Now let’s set how many epochs we want, what our learning rate lr will be, the number of input-layer neurons (one per feature, so X.shape[1] = 4), the number of hidden layer neurons (3) and the number of output neurons (1):
epoch = 10000
lr = 0.1
inputlayer_neurons = X.shape[1]
hiddenlayer_neurons = 3
output_neurons = 1
Now we can set our initial weights and biases. Let’s let wh and bh be the weights and biases for the hidden layer neuron, and wout and bout be the same weights and biases for the output neuron:
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))
Alright, the stage is set: we have our data, our initial weights and biases are set, and our forward and backpropagation functions are ready to calculate like bosses. Let’s set the rest of the pieces into place:
Forward Propagation:
First, we’ll get the product of the weight and the input values.
Next, we’ll add the bias to the products.
Then, well use our sigmoid function to transform/activate it.
We’ll repeat this with the result in the output layer neuron.
hidden_layer_input1 = np.dot(X, wh)
hidden_layer_input = hidden_layer_input1 + bh
hiddenlayer_activations = sigmoid(hidden_layer_input)
output_layer_input1 = np.dot(hiddenlayer_activations, wout)
output_layer_input = output_layer_input1 + bout
output = sigmoid(output_layer_input)
Back Propagation:
We’ll calculate the error, the difference between y and the output.
Next, we’ll find gradients of the output and transformation/activation.
Now let’s find the delta, or the change factor at the output layer by multiplying it by the error.
Now the error will propagate back into the network, so we have to get the dot product of the delta and weight parameters between the second and third layer.
We’ll do the same for the hidden layer, followed by updating the weights and biases.
E = y - output
slope_output_layer = derivatives_sigmoid(output)
slope_hidden_layer = derivatives_sigmoid(hiddenlayer_activations)
d_output = E * slope_output_layer
Error_at_hidden_layer = d_output.dot(wout.T)
d_hiddenlayer = Error_at_hidden_layer * slope_hidden_layer
wout += hiddenlayer_activations.T.dot(d_output) * lr
bout += np.sum(d_output, axis=0, keepdims=True) * lr
wh += X.T.dot(d_hiddenlayer) * lr
bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr
We can finish off by printing our output:
print (output)
It should return something like this:
[[0.03391414] [0.97065091] [0.9895072 ]]
Our original outputs were 0, 1, 1, and these outputs are super close. It’s good they’re not exact because it allows for flexibility with other data. You can mess around with the number of epochs to see how it affects the model’s accuracy.
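Note that the snippets above describe a single forward and backward pass; to actually reach outputs like those after 10,000 epochs, they need to be wrapped in a training loop. Assembled into one runnable script (the random seed is an arbitrary choice for repeatability), it looks roughly like this:

```python
import numpy as np

np.random.seed(0)  # arbitrary seed so runs are repeatable

X = np.array([[0, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 1]])
y = np.array([[0], [1], [1]])

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def derivatives_sigmoid(x):
    return x * (1 - x)

epoch = 10000
lr = 0.1
inputlayer_neurons = X.shape[1]
hiddenlayer_neurons = 3
output_neurons = 1

wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for _ in range(epoch):
    # Forward propagation
    hiddenlayer_activations = sigmoid(np.dot(X, wh) + bh)
    output = sigmoid(np.dot(hiddenlayer_activations, wout) + bout)

    # Backpropagation: error times the sigmoid slope gives each delta
    E = y - output
    d_output = E * derivatives_sigmoid(output)
    d_hiddenlayer = d_output.dot(wout.T) * derivatives_sigmoid(hiddenlayer_activations)

    # Update weights and biases, scaled by the learning rate
    wout += hiddenlayer_activations.T.dot(d_output) * lr
    bout += np.sum(d_output, axis=0, keepdims=True) * lr
    wh += X.T.dot(d_hiddenlayer) * lr
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr

print(output)  # each row should be close to 0, 1, 1 respectively
```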
We did it! There are, of course, different ways to code a neural network, and you can find plenty of articles that teach you, this is just one of the ways I’ve learned. Now while everyone else is busy tapping away at their screenplay at Starbucks, you can be happy knowing you’re riding the dragon of the future.
(That’s a lot cheesier in writing than it was in my head.)
You’ve trained your dragon. It took 10 000 epochs, but you did it. But this was just the beginning. You’ve got a whole franchise ahead of you.
Today machine learning is being applied in some really exciting ways. Companies like Cogito are using it to help call center workers gauge client interest over the phone just by analyzing their voice. The medical world is using AI to help diagnose malignant blood cells and tumours. Computer vision is helping to put self-driving cars on the road. There’s even AI writing entire books!
In How to Train Your Dragon, partnering with dragons lets the riders of Berk improve life socially and agriculturally. AI is already improving life for us as we know it; it’s disrupting every single industry in the world right now, so it’s important you get onboard ASAP. Understanding how neural networks work, even slightly, puts you ahead of the curve.
So understanding artificial intelligence may not make you as cool as someone who has a dragon, but it’s revolutionary, exciting, and bound to secure your place in the future all the same.
A neuron takes in multiple inputs and produces an output. Inputs have associated weights, and neurons apply their own biases.
Neurons also include an activation function, which makes the data graph non-linear. When we layer a bunch of these together, we have a neural network.
Backpropagation is the process of adjusting weights and biases to minimize error at the output, often using training algorithms like gradient descent.
Data requires numerous rounds of training iterations, or epochs, to be accurate.
Artificial intelligence is changing how the world works, from the medical industry to literature. It’s a disruptive train you want to be on.
Check out my other article on how we’re going to make emotional AI...Connect with me on LinkedIn! If you want to talk about AI/ML or dragons, I’d love to connect!
How to sort array of strings by their lengths following shortest to longest pattern in Java

At first, let us create an array of strings:
String[] strArr = { "ABCD", "AB", "ABCDEFG", "ABC", "A", "ABCDE", "ABCDEF", "ABCDEFGHIJ" };
Now, for shortest to longest pattern, for example A, AB, ABC, ABCD, etc.; get the length of both the string arrays and work them like this:
Arrays.sort(strArr, (str1, str2) -> str1.length() - str2.length());
The following is an example to sort array of strings by their lengths with shortest to longest pattern:
import java.util.Arrays;
public class Demo {
public static void main(String[] args) {
String[] strArr = { "ABCD", "AB", "ABCDEFG", "ABC", "A", "ABCDE", "ABCDEF","ABCDEFGHIJ" };
System.out.println("Sorting array on the basis of their lengths (shortest to longest) =");
Arrays.sort(strArr, (str1, str2) -> str1.length() - str2.length());
Arrays.asList(strArr).forEach(System.out::println);
}
}
Sorting array on the basis of their lengths (shortest to longest) =
A
AB
ABC
ABCD
ABCDE
ABCDEF
ABCDEFG
ABCDEFGHIJ
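Since Java 8, the subtraction-based comparator above can also be written with Comparator.comparingInt, which reads more clearly and avoids any risk of integer overflow that subtraction-based comparators carry in general. A sketch (the class name is arbitrary):

```java
import java.util.Arrays;
import java.util.Comparator;

public class LengthSortDemo {
    public static void main(String[] args) {
        String[] strArr = { "ABCD", "AB", "ABCDEFG", "ABC", "A" };

        // Key-extractor comparator: sorts by the int returned by length()
        Arrays.sort(strArr, Comparator.comparingInt(String::length));

        System.out.println(Arrays.toString(strArr));
        // [A, AB, ABC, ABCD, ABCDEFG]
    }
}
```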
Sorting an array of objects by property having null value in JavaScript

We are required to write a JavaScript function that takes in an array of objects. The objects may have some of their keys mapped to null.
Our function should sort the array such that all the objects having keys mapped to null are pushed to the end of the array.
The code for this will be −
const arr = [
{key: 'a', value: 100},
{key: 'a', value: null},
{key: 'a', value: 0}
];
const sortNullishValues = (arr = []) => {
const assignValue = val => {
if(val === null){
return Infinity;
}
else{
return val;
};
};
const sorter = (a, b) => {
return assignValue(a.value) - assignValue(b.value);
};
arr.sort(sorter);
}
sortNullishValues(arr);
console.log(arr);
And the output in the console will be −
[
{ key: 'a', value: 0 },
{ key: 'a', value: 100 },
{ key: 'a', value: null }
]
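In modern JavaScript (ES2020+), the same map-null-to-Infinity trick can be written more compactly with the nullish coalescing operator `??`. A sketch:

```javascript
const arr = [
   {key: 'a', value: 100},
   {key: 'a', value: null},
   {key: 'a', value: 0}
];

// `??` substitutes Infinity only when value is null or undefined,
// so 0 (falsy but valid) still sorts normally.
arr.sort((a, b) => (a.value ?? Infinity) - (b.value ?? Infinity));

console.log(arr.map(o => o.value)); // [ 0, 100, null ]
```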
Create a Bordered Button Group with CSS

You can try to run the following code to create a bordered button group using the border property.
Live Demo
<!DOCTYPE html>
<html>
<head>
<style>
.btn {
color: black;
background-color: yellow;
width: 120px;
text-align: center;
font-size: 15px;
padding: 20px;
float: left;
border: 3px solid blue;
}
.mybtn {
background-color: orange;
}
</style>
</head>
<body>
<h2>Result</h2>
<p>Click below for result:</p>
<div class = "mybtn">
<button class = "btn">Result</button>
<button class = "btn">Result</button>
<button class = "btn">Result</button>
<button class = "btn">Result</button>
<button class = "btn">Result</button>
</div>
</body>
</html>
How to display multiple notifications in android?

This example demonstrates how to display multiple notifications in Android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<? xml version = "1.0" encoding= "utf-8" ?>
<android.support.constraint.ConstraintLayout
xmlns: android = "http://schemas.android.com/apk/res/android"
xmlns: app = "http://schemas.android.com/apk/res-auto"
xmlns: tools = "http://schemas.android.com/tools"
android :layout_width = "match_parent"
android :layout_height = "match_parent"
android :padding = "16dp"
tools :context = ".MainActivity" >
<Button
android :id = "@+id/btnCreateNotification"
android :layout_width = "0dp"
android :layout_height = "wrap_content"
android :text = "Create notification"
app :layout_constraintBottom_toBottomOf = "parent"
app :layout_constraintEnd_toEndOf = "parent"
app :layout_constraintStart_toStartOf = "parent"
app :layout_constraintTop_toTopOf = "parent" />
</android.support.constraint.ConstraintLayout>
Step 3 − Add a sound into raw folder
Step 4 − Add the following code to src/MainActivity.java
package app.tutorialspoint.com.notifyme;

import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.ContentResolver;
import android.content.Context;
import android.graphics.Color;
import android.media.AudioAttributes;
import android.net.Uri;
import android.support.v4.app.NotificationCompat;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class MainActivity extends AppCompatActivity {
   public static final String NOTIFICATION_CHANNEL_ID = "10001";
   private final static String default_notification_channel_id = "default";

   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      Button btnCreateNotification = findViewById(R.id.btnCreateNotification);
      btnCreateNotification.setOnClickListener(new View.OnClickListener() {
         @Override
         public void onClick(View v) {
            // Note: android.resource URIs take the raw resource name without the file extension
            Uri sound = Uri.parse(ContentResolver.SCHEME_ANDROID_RESOURCE + "://" + getPackageName() + "/raw/quite_impressed");
            NotificationCompat.Builder mBuilder = new NotificationCompat.Builder(MainActivity.this, default_notification_channel_id)
               .setSmallIcon(R.drawable.ic_launcher_foreground)
               .setContentTitle("Test")
               .setSound(sound)
               .setContentText("Hello! This is my first push notification");
            NotificationManager mNotificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
            if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.O) {
               AudioAttributes audioAttributes = new AudioAttributes.Builder()
                  .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                  .setUsage(AudioAttributes.USAGE_ALARM)
                  .build();
               int importance = NotificationManager.IMPORTANCE_HIGH;
               NotificationChannel notificationChannel = new NotificationChannel(NOTIFICATION_CHANNEL_ID, "NOTIFICATION_CHANNEL_NAME", importance);
               notificationChannel.enableLights(true);
               notificationChannel.setLightColor(Color.RED);
               notificationChannel.enableVibration(true);
               notificationChannel.setVibrationPattern(new long[]{100, 200, 300, 400, 500, 400, 300, 200, 400});
               notificationChannel.setSound(sound, audioAttributes);
               mBuilder.setChannelId(NOTIFICATION_CHANNEL_ID);
               assert mNotificationManager != null;
               mNotificationManager.createNotificationChannel(notificationChannel);
            }
            assert mNotificationManager != null;
            // A unique id per call, so each click adds a new notification instead of replacing the last one
            mNotificationManager.notify((int) System.currentTimeMillis(), mBuilder.build());
         }
      });
   }
}
Step 5 − Add the following code to androidManifest.xml
<? xml version = "1.0" encoding = "utf-8" ?>
<manifest xmlns: android = "http://schemas.android.com/apk/res/android" package = "app.tutorialspoint.com.notifyme" >
<application
android :allowBackup = "true"
android :icon = "@mipmap/ic_launcher"
android :label = "@string/app_name"
android :roundIcon = "@mipmap/ic_launcher_round"
android :supportsRtl = "true"
android :theme = "@style/AppTheme" >
<activity android :name = ".MainActivity" >
<intent-filter>
<action android :name = "android.intent.action.MAIN" />
<category android :name = "android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −
How to save database in local storage of android webview?

This example demonstrates how to save a database in the local storage of an Android WebView.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version = "1.0" encoding = "utf-8"?>
<LinearLayout xmlns:android = "http://schemas.android.com/apk/res/android"
xmlns:app = "http://schemas.android.com/apk/res-auto"
xmlns:tools = "http://schemas.android.com/tools"
android:layout_width = "match_parent"
android:gravity = "center"
android:layout_height = "match_parent"
tools:context = ".MainActivity"
android:orientation = "vertical">
<WebView
android:id = "@+id/web_view"
android:layout_width = "match_parent"
android:layout_height = "match_parent" />
</LinearLayout>
In the above code, we have taken web view to show facebook.com.
Step 3 − Add the following code to src/MainActivity.java
package com.example.myapplication;
import android.app.ProgressDialog;
import android.os.Build;
import android.os.Bundle;
import android.support.annotation.RequiresApi;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.webkit.WebChromeClient;
import android.webkit.WebSettings;
import android.webkit.WebView;
import android.webkit.WebViewClient;
import android.widget.EditText;
public class MainActivity extends AppCompatActivity {
@RequiresApi(api = Build.VERSION_CODES.P)
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
final ProgressDialog progressDialog = new ProgressDialog(this);
progressDialog.setMessage("Loading Data...");
progressDialog.setCancelable(false);
WebView web_view = findViewById(R.id.web_view);
web_view.requestFocus();
web_view.getSettings().setJavaScriptEnabled(true);
web_view.getSettings().setAppCachePath(getApplicationContext().getFilesDir().getAbsolutePath() + "/cache");
web_view.getSettings().setDatabasePath(getApplicationContext().getFilesDir().getAbsolutePath() + "/databases");
web_view.loadUrl("https://touch.facebook.com/");
web_view.setWebViewClient(new WebViewClient() {
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
view.loadUrl(url);
return true;
}
});
web_view.setWebChromeClient(new WebChromeClient() {
public void onProgressChanged(WebView view, int progress) {
if (progress < 100) {
progressDialog.show();
}
               if (progress == 100) {
progressDialog.dismiss();
}
}
});
}
}
Step 4 − Add the following code to AndroidManifest.xml
<?xml version = "1.0" encoding = "utf-8"?>
<manifest xmlns:android = "http://schemas.android.com/apk/res/android"
package = "com.example.myapplication">
<uses-permission android:name = "android.permission.INTERNET"/>
<application
android:allowBackup = "true"
android:icon = "@mipmap/ic_launcher"
android:label = "@string/app_name"
android:roundIcon = "@mipmap/ic_launcher_round"
android:supportsRtl = "true"
android:theme = "@style/AppTheme">
<activity android:name = ".MainActivity">
<intent-filter>
<action android:name = "android.intent.action.MAIN" />
<category android:name = "android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –
Gensim - Quick Guide

This chapter will help you understand the history and features of Gensim along with its uses and advantages.
Gensim = “Generate Similar” is a popular open source natural language processing (NLP) library used for unsupervised topic modeling. It uses top academic models and modern statistical machine learning to perform various complex tasks such as −
Building document or word vectors
Corpora
Performing topic identification
Performing document comparison (retrieving semantically similar documents)
Analysing plain-text documents for semantic structure
Apart from performing the above complex tasks, Gensim, implemented in Python and Cython, is designed to handle large text collections using data streaming as well as incremental online algorithms. This makes it different from those machine learning software packages that target only in-memory processing.
In 2008, Gensim started off as a collection of various Python scripts for the Czech Digital Mathematics Library. There, it served to generate a short list of the most similar articles to a particular given article. But in 2009, RARE Technologies Ltd. released its initial release. Then, later in July 2019, we got its stable release (3.8.0).
Following are some of the features and capabilities offered by Gensim −
Gensim can easily process large and web-scale corpora by using its incremental online training algorithms. It is scalable in nature, as there is no need for the whole input corpus to reside fully in Random Access Memory (RAM) at any one time. In other words, all its algorithms are memory-independent with respect to the corpus size.
Gensim is robust in nature and has been in use in various systems by various people as well as organisations for over 4 years. We can easily plug in our own input corpus or data stream. It is also very easy to extend with other Vector Space Algorithms.
As we know, Python is a very versatile language; being pure Python, Gensim runs on all the platforms (like Windows, Mac OS, Linux) that support Python and NumPy.
In order to speed up processing and retrieval on machine clusters, Gensim provides efficient multicore implementations of various popular algorithms like Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA), Random Projections (RP), Hierarchical Dirichlet Process (HDP).
Gensim is licensed under the OSI-approved GNU LGPL license which allows it to be used for both personal as well as commercial use for free. Any modifications made in Gensim are in turn open-sourced and has abundance of community support too.
Gensim has been used and cited in over thousand commercial and academic applications. It is also cited by various research papers and student theses. It includes streamed parallelised implementations of the following −
fastText, uses a neural network for word embedding, is a library for learning of word embedding and text classification. It is created by Facebook’s AI Research (FAIR) lab. This model, basically, allows us to create a supervised or unsupervised algorithm for obtaining vector representations for words.
Word2vec, used to produce word embedding, is a group of shallow and two-layer neural network models. The models are basically trained to reconstruct linguistic contexts of words.
It is a technique in NLP (Natural Language Processing) that allows us to analyse relationships between a set of documents and their containing terms. It is done by producing a set of concepts related to the documents and terms.
It is a technique in NLP that allows sets of observations to be explained by unobserved groups. These unobserved groups explain, why some parts of the data are similar. That’s the reason, it is a generative statistical model.
tf-idf, a numeric statistic in information retrieval, reflects how important a word is to a document in a corpus. It is often used by search engines to score and rank a document’s relevance given a user query. It can also be used for stop-words filtering in text summarisation and classification.
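The tf-idf weighting just described can be sketched in a few lines of plain Python. This illustrates the idea only, not Gensim's exact implementation, which applies its own normalization:

```python
import math
from collections import Counter

corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat"],
    ["the", "cat", "ran"],
]

n_docs = len(corpus)
# df(t): number of documents containing term t
df = Counter(t for doc in corpus for t in set(doc))

def tfidf(term, doc):
    tf = doc.count(term) / len(doc)      # term frequency within this document
    idf = math.log(n_docs / df[term])    # rarer terms score higher
    return tf * idf

# "the" appears in every document, so idf = log(3/3) = 0 and its weight vanishes
print(tfidf("the", corpus[0]))            # 0.0
print(round(tfidf("cat", corpus[0]), 3))
```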
All of them will be explained in detail in the next sections.
Gensim is a NLP package that does topic modeling. The important advantages of Gensim are as follows −
We may get the facilities of topic modeling and word embedding in other packages like ‘scikit-learn’ and ‘R’, but the facilities provided by Gensim for building topic models and word embedding are unparalleled. It also provides more convenient facilities for text processing.

Another significant advantage of Gensim is that it lets us handle large text files even without loading the whole file in memory.

Gensim doesn’t require costly annotations or hand tagging of documents because it uses unsupervised models.
The chapter enlightens about the prerequisites for installing Gensim, its core dependencies and information about its current version.
In order to install Gensim, we must have Python installed on our computers. You can go to the link www.python.org/downloads/ and select the latest version for your OS i.e. Windows and Linux/Unix. You can refer to the link www.tutorialspoint.com/python3/index.htm for basic tutorial on Python. Gensim is supported for Linux, Windows and Mac OS X.
Gensim should run on any platform that supports Python 2.7 or 3.5+ and NumPy. It actually depends on the following software −
Gensim is tested with Python versions 2.7, 3.5, 3.6, and 3.7.
As we know, NumPy is a package for scientific computing with Python. It can also be used as an efficient multi-dimensional container of generic data. Gensim depends on the NumPy package for number crunching. For a basic tutorial on NumPy, you can refer to the link www.tutorialspoint.com/numpy/index.htm.
smart_open, a Python 2 & Python 3 library, is used for efficient streaming of very large files. It supports streaming from/to storages such as S3, HDFS, WebHDFS, HTTP, HTTPS, SFTP, or local filesystems. Gensim depends upon smart_open Python library for transparently opening files on remote storage as well as compressed files.
The current version of Gensim is 3.8.0 which was released in July 2019.
One of the simplest ways to install Gensim, is to run the following command in your terminal −
pip install --upgrade gensim
An alternative way to download Gensim is, to use conda environment. Run the following command in your conda terminal −
conda install -c conda-forge gensim
Suppose, if you have downloaded and unzipped the source package, then you need to run the following commands −
python setup.py test
python setup.py install
Here, we shall learn about the core concepts of Gensim, with main focus on the documents and the corpus.
Following are the core concepts and terms that are needed to understand and use Gensim −
Document − It refers to some text.
Corpus − It refers to a collection of documents.
Vector − Mathematical representation of a document is called vector.
Model − It refers to an algorithm used for transforming vectors from one representation to another.
As discussed, it refers to some text. If we go in some detail, it is an object of the text sequence type which is known as ‘str’ in Python 3. For example, in Gensim, a document can be anything such as −
Short tweet of 140 characters
Single paragraph, i.e. article or research paper abstract
News article
Book
Novel
Theses
A text sequence type is commonly known as ‘str’ in Python 3. As we know that in Python, textual data is handled with strings or more specifically ‘str’ objects. Strings are basically immutable sequences of Unicode code points and can be written in the following ways −
Single quotes − For example, ‘Hi! How are you?’. It allows us to embed double quotes also. For example, ‘Hi! “How” are you?’
Double quotes − For example, "Hi! How are you?". It allows us to embed single quotes also. For example, "Hi! 'How' are you?"
Triple quotes − It can have either three single quotes like, '''Hi! How are you?'''. or three double quotes like, """Hi! 'How' are you?"""
All the whitespaces will be included in the string literal.
Following is an example of a Document in Gensim −
Document = "Tutorialspoint.com is the biggest online tutorials library and it's all free also"
A corpus may be defined as the large and structured set of machine-readable texts produced in a natural communicative setting. In Gensim, a collection of document object is called corpus. The plural of corpus is corpora.
A corpus in Gensim serves the following two roles −
The very first and important role a corpus plays in Gensim is as an input for training a model. In order to initialise the model's internal parameters, during training the model looks for some common themes and topics in the training corpus. As discussed above, Gensim focuses on unsupervised models, hence it doesn't require any kind of human intervention.
Once the model is trained, it can be used to extract topics from the new documents. Here, the new documents are the ones that are not used in the training phase.
The corpus can include all the tweets by a particular person, list of all the articles of a newspaper or all the research papers on a particular topic etc.
Following is an example of small corpus which contains 5 documents. Here, every document is a string consisting of a single sentence.
t_corpus = [
"A survey of user opinion of computer system response time",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
]
Once we collect the corpus, a few preprocessing steps should be taken to keep corpus simple. We can simply remove some commonly used English words like ‘the’. We can also remove words that occur only once in the corpus.
For example, the following Python script is used to lowercase each document, split it by white space and filter out stop words −
import pprint
t_corpus = [
"A survey of user opinion of computer system response time",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
]
stoplist = set('for a of the and to in'.split(' '))
processed_corpus = [[word for word in document.lower().split() if word not in stoplist]
for document in t_corpus]
pprint.pprint(processed_corpus)
[['survey', 'user', 'opinion', 'computer', 'system', 'response', 'time'],
['relation', 'user', 'perceived', 'response', 'time', 'error', 'measurement'],
['generation', 'random', 'binary', 'unordered', 'trees'],
['intersection', 'graph', 'paths', 'trees'],
['graph', 'minors', 'iv', 'widths', 'trees', 'well', 'quasi', 'ordering']]
Gensim also provides function for more effective preprocessing of the corpus. In such kind of preprocessing, we can convert a document into a list of lowercase tokens. We can also ignore tokens that are too short or too long. Such function is gensim.utils.simple_preprocess(doc, deacc=False, min_len=2, max_len=15).
gensim.utils.simple_preprocess() function
Gensim provides this function to convert a document into a list of lowercase tokens and to ignore tokens that are too short or too long. It has the following parameters −
doc − It refers to the input document on which preprocessing should be applied.
deacc − This parameter is used to remove accent marks from tokens. It uses deaccent() to do this.
min_len − With the help of this parameter, we can set the minimum length of a token. Tokens shorter than the defined length will be discarded.
max_len − With the help of this parameter, we can set the maximum length of a token. Tokens longer than the defined length will be discarded.
The output of this function would be the tokens extracted from input document.
Here, we shall learn about the core concepts of Gensim, with main focus on the vector and the model.
What if we want to infer the latent structure in our corpus? For this, we need to represent the documents in such a way that we can manipulate them mathematically. One popular kind of representation is to represent every document of the corpus as a vector of features. That's why we can say that a vector is a mathematically convenient representation of a document.
To give you an example, let's represent a single feature of a document as a Q-A pair −
Q − How many times does the word Hello appear in the document?
A − Zero(0).
Q − How many paragraphs are there in the document?
A − Two(2)
The question is generally represented by its integer id, hence the representation of this document is a series of pairs like (1, 0.0), (2, 2.0). Such a vector representation is known as a dense vector, because it comprises an explicit answer to every question written above.
The representation can be as simple as (0, 2) if we know all the questions in advance. Such a sequence of answers (provided the questions are known in advance) is the vector for our document.
Another popular kind of representation is the bag-of-word (BoW) model. In this approach, each document is basically represented by a vector containing the frequency count of every word in the dictionary.
To give you an example, suppose we have a dictionary that contains the words [‘Hello’, ‘How’, ‘are’, ‘you’]. A document consisting of the string “How are you how” would then be represented by the vector [0, 2, 1, 1]. Here, the entries of the vector are in order of the occurrences of “Hello”, “How”, “are”, and “you”.
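The counting above can be sketched in plain Python, without Gensim −

```python
# Build the BoW vector for "How are you how" against a fixed dictionary.
# Each entry is the count of the corresponding dictionary word.
dictionary = ['Hello', 'How', 'are', 'you']
document = "How are you how"

tokens = document.lower().split()
bow_vector = [tokens.count(word.lower()) for word in dictionary]
print(bow_vector)  # [0, 2, 1, 1]
```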
From the above explanation of a vector, the distinction between a document and a vector should be almost clear. But to make it clearer: a document is text, and a vector is a mathematically convenient representation of that text. Unfortunately, many people use these terms interchangeably.
For example, given some arbitrary document A, instead of saying "the vector that corresponds to document A", people often say "the vector A" or "the document A". This leads to great ambiguity. One more important thing to note here is that two different documents may have the same vector representation.
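A small pure-Python check makes the last point concrete: because word order is discarded, two different documents can map to the same vector −

```python
# Two different documents with identical word counts share one BoW vector.
docs = ["how are you", "you are how"]
vocab = sorted(set(" ".join(docs).split()))

vectors = [[d.split().count(w) for w in vocab] for d in docs]
print(vectors[0] == vectors[1])  # the BoW representations are identical
```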
Before taking an implementation example of converting corpus into the list of vectors, we need to associate each word in the corpus with a unique integer ID. For this, we will be extending the example taken in above chapter.
from gensim import corpora
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
Dictionary(25 unique tokens: ['computer', 'opinion', 'response', 'survey', 'system']...)
It shows that in our corpus there are 25 different tokens in this gensim.corpora.Dictionary.
Because the dictionary contains 25 unique tokens, the corresponding vectors will be 25-dimensional. We can see the mapping between words and their unique integer ids as follows −
pprint.pprint(dictionary.token2id)
{
'binary': 11,
'computer': 0,
'error': 7,
'generation': 12,
'graph': 16,
'intersection': 17,
'iv': 19,
'measurement': 8,
'minors': 20,
'opinion': 1,
'ordering': 21,
'paths': 18,
'perceived': 9,
'quasi': 22,
'random': 13,
'relation': 10,
'response': 2,
'survey': 3,
'system': 4,
'time': 5,
'trees': 14,
'unordered': 15,
'user': 6,
'well': 23,
'widths': 24
}
And similarly, we can create the bag-of-word representation for a document as follows −
BoW_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
pprint.pprint(BoW_corpus)
[
[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1)],
[(2, 1), (5, 1), (6, 1), (7, 1), (8, 1), (9, 1), (10, 1)],
[(11, 1), (12, 1), (13, 1), (14, 1), (15, 1)],
[(14, 1), (16, 1), (17, 1), (18, 1)],
[(14, 1), (16, 1), (19, 1), (20, 1), (21, 1), (22, 1), (23, 1), (24, 1)]
]
Once we have vectorised the corpus, what next? Now we can transform it using models. A model refers to an algorithm used for transforming one document representation into another.
As we have discussed, documents in Gensim are represented as vectors, hence we can think of a model as a transformation between two vector spaces. There is always a training phase in which the model learns the details of this transformation by reading the training corpus.
Let's initialise the tf-idf model. This model transforms vectors from the BoW (Bag of Words) representation to another vector space where the frequency counts are weighted according to the relative rarity of every word in the corpus.
In the following example, we are going to initialise the tf-idf model. We will train it on our corpus and then transform the string “trees graph”.
from gensim import models
tfidf = models.TfidfModel(BoW_corpus)
words = "trees graph".lower().split()
print(tfidf[dictionary.doc2bow(words)])
[(14, 0.4869354917707381), (16, 0.8734379353188121)]
Now, once we have created the model, we can transform the whole corpus via tfidf and index it, and then query the similarity of our query document (here 'trees system') against each document in the corpus −
from gensim import similarities
index = similarities.SparseMatrixSimilarity(tfidf[BoW_corpus], num_features=25)
query_document = 'trees system'.split()
query_bow = dictionary.doc2bow(query_document)
sims = index[tfidf[query_bow]]
print(list(enumerate(sims)))
[(0, 0.0), (1, 0.0), (2, 1.0), (3, 0.4869355), (4, 0.4869355)]
From the above output, documents 4 and 5 (indices 3 and 4) have a similarity score of about 49%.
Moreover, we can also sort this output for more readability as follows −
for doc_number, score in sorted(enumerate(sims), key=lambda x: x[1], reverse=True):
print(doc_number, score)
2 1.0
3 0.4869355
4 0.4869355
0 0.0
1 0.0
In the last chapter, where we discussed vectors and models, you got an idea of the dictionary. Here, we are going to discuss the Dictionary object in a bit more detail.
Before getting deep dive into the concept of dictionary, let’s understand some simple NLP concepts −
Token − A token means a ‘word’.
Document − A document refers to a sentence or paragraph.
Corpus − It refers to a collection of documents as a bag of words (BoW).
For all the documents, a corpus always contains each word’s token’s id along with its frequency count in the document.
Let's move to the concept of the dictionary in Gensim. For working on text documents, Gensim requires the words, i.e. tokens, to be converted to their unique ids. For achieving this, it gives us the facility of the Dictionary object, which maps each word to its unique integer id. It does this by converting the input text to a list of words and then passing it to the corpora.Dictionary() object.
Now the question arises: what is actually the need of the dictionary object and where can it be used? In Gensim, the dictionary object is used to create a bag of words (BoW) corpus, which is further used as the input to topic modelling and other models as well.
There are three different forms of input text, we can provide to Gensim −
As sentences stored in Python’s native list object (strings, known as str in Python 3)
As one single text file (can be small or large)
Multiple text files
As discussed, in Gensim, the dictionary contains the mapping of all words, a.k.a tokens to their unique integer id. We can create a dictionary from list of sentences, from one or more than one text files (text file containing multiple lines of text). So, first let’s start by creating dictionary using list of sentences.
In the following example we will be creating a dictionary from a list of sentences. When we have a list of sentences, we must convert every sentence to a list of words, and a list comprehension is one of the most common ways to do this.
First, import the required and necessary packages as follows −
import gensim
from gensim import corpora
from pprint import pprint
Next, provide the list of sentences/documents from which we will create the dictionary −
doc = [
"CNTK formerly known as Computational Network Toolkit",
"is a free easy-to-use open-source commercial-grade toolkit",
"that enable us to train deep learning algorithms to learn like the human brain."
]
Next, we need to split the sentences into words. It is called tokenisation.
text_tokens = [[token for token in sentence.split()] for sentence in doc]
Now, with the help of following script, we can create the dictionary −
dict_LoS = corpora.Dictionary(text_tokens)
Now let’s get some more information like number of tokens in the dictionary −
print(dict_LoS)
Dictionary(27 unique tokens: ['CNTK', 'Computational', 'Network', 'Toolkit', 'as']...)
We can also see the word to unique integer mapping as follows −
print(dict_LoS.token2id)
{
'CNTK': 0, 'Computational': 1, 'Network': 2, 'Toolkit': 3, 'as': 4,
'formerly': 5, 'known': 6, 'a': 7, 'commercial-grade': 8, 'easy-to-use': 9,
'free': 10, 'is': 11, 'open-source': 12, 'toolkit': 13, 'algorithms': 14,
'brain.': 15, 'deep': 16, 'enable': 17, 'human': 18, 'learn': 19, 'learning': 20,
'like': 21, 'that': 22, 'the': 23, 'to': 24, 'train': 25, 'us': 26
}
import gensim
from gensim import corpora
from pprint import pprint
doc = [
"CNTK formerly known as Computational Network Toolkit",
"is a free easy-to-use open-source commercial-grade toolkit",
"that enable us to train deep learning algorithms to learn like the human brain."
]
text_tokens = [[token for token in sentence.split()] for sentence in doc]
dict_LoS = corpora.Dictionary(text_tokens)
print(dict_LoS.token2id)
In the following example we will be creating a dictionary from a single text file. In a similar fashion, we can also create a dictionary from more than one text file (i.e. a directory of files).
For this, we have saved the document, used in previous example, in the text file named doc.txt. Gensim will read the file line by line and process one line at a time by using simple_preprocess. In this way, it doesn’t need to load the complete file in memory all at once.
First, import the required and necessary packages as follows −
import gensim
from gensim import corpora
from pprint import pprint
from gensim.utils import simple_preprocess
from smart_open import smart_open
import os
The next lines of code will make the gensim dictionary by using the single text file named doc.txt −
dict_STF = corpora.Dictionary(
   simple_preprocess(line, deacc=True) for line in open('doc.txt', encoding='utf-8')
)
Now let’s get some more information like number of tokens in the dictionary −
print(dict_STF)
Dictionary(27 unique tokens: ['CNTK', 'Computational', 'Network', 'Toolkit', 'as']...)
We can also see the word to unique integer mapping as follows −
print(dict_STF.token2id)
{
'CNTK': 0, 'Computational': 1, 'Network': 2, 'Toolkit': 3, 'as': 4,
'formerly': 5, 'known': 6, 'a': 7, 'commercial-grade': 8, 'easy-to-use': 9,
'free': 10, 'is': 11, 'open-source': 12, 'toolkit': 13, 'algorithms': 14,
'brain.': 15, 'deep': 16, 'enable': 17, 'human': 18, 'learn': 19,
'learning': 20, 'like': 21, 'that': 22, 'the': 23, 'to': 24, 'train': 25, 'us': 26
}
import gensim
from gensim import corpora
from pprint import pprint
from gensim.utils import simple_preprocess
from smart_open import smart_open
import os
dict_STF = corpora.Dictionary(
   simple_preprocess(line, deacc=True) for line in open('doc.txt', encoding='utf-8')
)
print(dict_STF.token2id)
Now let's create a dictionary from multiple files, i.e. more than one text file saved in the same directory. For this example, we have created three different text files namely first.txt, second.txt and third.txt, each containing one of the three lines from the text file (doc.txt) we used in the previous example. All three text files are saved under a directory named ABC.
In order to implement this, we need to define a class with a method that can iterate through all three text files (first.txt, second.txt and third.txt) in the directory (ABC) and yield the processed list of word tokens.
Let's define the class named Read_files having a method named __iter__() as follows −
class Read_files(object):
   def __init__(self, directoryname):
      self.directoryname = directoryname
   def __iter__(self):
      for fname in os.listdir(self.directoryname):
         for line in open(os.path.join(self.directoryname, fname), encoding='latin'):
            yield simple_preprocess(line)
Next, we need to provide the path of the directory as follows −
path = "ABC"
#provide the path as per your computer system where you saved the directory.
The next steps are similar to what we did in the previous examples. The next line of code will make the Gensim dictionary by using the directory containing the three text files −
dict_MUL = corpora.Dictionary(Read_files(path))
print(dict_MUL)
Dictionary(27 unique tokens: ['CNTK', 'Computational', 'Network', 'Toolkit', 'as']...)
Now we can also see the word to unique integer mapping as follows −
print(dict_MUL.token2id)
{
'CNTK': 0, 'Computational': 1, 'Network': 2, 'Toolkit': 3, 'as': 4,
'formerly': 5, 'known': 6, 'a': 7, 'commercial-grade': 8, 'easy-to-use': 9,
'free': 10, 'is': 11, 'open-source': 12, 'toolkit': 13, 'algorithms': 14,
'brain.': 15, 'deep': 16, 'enable': 17, 'human': 18, 'learn': 19,
'learning': 20, 'like': 21, 'that': 22, 'the': 23, 'to': 24, 'train': 25, 'us': 26
}
Gensim supports its own native save() method to save a dictionary to disk and load() method to load a dictionary back from disk.
For example, we can save the dictionary with the help of following script −
dictionary.save(filename)
#provide the path where you want to save the dictionary.
Similarly, we can load the saved dictionary by using the load() method. Following script can do this −
dictionary = corpora.Dictionary.load(filename)
#provide the path where you have saved the dictionary.
We have understood how to create dictionary from a list of documents and from text files (from one as well as from more than one). Now, in this section, we will create a bag-of-words (BoW) corpus. In order to work with Gensim, it is one of the most important objects we need to familiarise with. Basically, it is the corpus that contains the word id and its frequency in each document.
As discussed, in Gensim, the corpus contains the word id and its frequency in every document. We can create a BoW corpus from a simple list of documents and from text files. What we need to do is, to pass the tokenised list of words to the object named Dictionary.doc2bow(). So first, let’s start by creating BoW corpus using a simple list of documents.
In the following example, we will create BoW corpus from a simple list containing three sentences.
First, we need to import all the necessary packages as follows −
import gensim
import pprint
from gensim import corpora
from gensim.utils import simple_preprocess
Now provide the list containing sentences. We have three sentences in our list −
doc_list = [
"Hello, how are you?", "How do you do?",
"Hey what are you doing? yes you What are you doing?"
]
Next, do tokenisation of the sentences as follows −
doc_tokenized = [simple_preprocess(doc) for doc in doc_list]
Create an object of corpora.Dictionary() as follows −
dictionary = corpora.Dictionary()
Now pass these tokenised sentences to the dictionary.doc2bow() object as follows −
BoW_corpus = [dictionary.doc2bow(doc, allow_update=True) for doc in doc_tokenized]
At last we can print Bag of word corpus −
print(BoW_corpus)
[
[(0, 1), (1, 1), (2, 1), (3, 1)],
[(2, 1), (3, 1), (4, 2)], [(0, 2), (3, 3), (5, 2), (6, 1), (7, 2), (8, 1)]
]
The above output shows that the word with id=0 appears once in the first document (because we have got (0,1) in the output) and so on.
The above output is not easy for humans to read. We can convert these ids to words, but for this we need our dictionary to do the conversion as follows −
id_words = [[(dictionary[id], count) for id, count in line] for line in BoW_corpus]
print(id_words)
[
[('are', 1), ('hello', 1), ('how', 1), ('you', 1)],
[('how', 1), ('you', 1), ('do', 2)],
[('are', 2), ('you', 3), ('doing', 2), ('hey', 1), ('what', 2), ('yes', 1)]
]
Now the above output is human readable.
import gensim
import pprint
from gensim import corpora
from gensim.utils import simple_preprocess
doc_list = [
"Hello, how are you?", "How do you do?",
"Hey what are you doing? yes you What are you doing?"
]
doc_tokenized = [simple_preprocess(doc) for doc in doc_list]
dictionary = corpora.Dictionary()
BoW_corpus = [dictionary.doc2bow(doc, allow_update=True) for doc in doc_tokenized]
print(BoW_corpus)
id_words = [[(dictionary[id], count) for id, count in line] for line in BoW_corpus]
print(id_words)
In the following example, we will be creating a BoW corpus from a text file. For this, we have saved the document used in the previous example in the text file named doc.txt.
Gensim will read the file line by line and process one line at a time by using simple_preprocess. In this way, it doesn’t need to load the complete file in memory all at once.
First, import the required and necessary packages as follows −
import gensim
from gensim import corpora
from pprint import pprint
from gensim.utils import simple_preprocess
from smart_open import smart_open
import os
Next, the following lines of code will read the documents from doc.txt and tokenise them −
doc_tokenized = [
   simple_preprocess(line, deacc=True) for line in open('doc.txt', encoding='utf-8')
]
dictionary = corpora.Dictionary()
Now we need to pass these tokenised words into the dictionary.doc2bow() object (as we did in the previous example) −
BoW_corpus = [
dictionary.doc2bow(doc, allow_update=True) for doc in doc_tokenized
]
print(BoW_corpus)
[
[(9, 1), (10, 1), (11, 1), (12, 1), (13, 1), (14, 1), (15, 1)],
[
(15, 1), (16, 1), (17, 1), (18, 1), (19, 1), (20, 1), (21, 1),
(22, 1), (23, 1), (24, 1)
],
[
(23, 2), (25, 1), (26, 1), (27, 1), (28, 1), (29, 1),
(30, 1), (31, 1), (32, 1), (33, 1), (34, 1), (35, 1), (36, 1)
],
[(3, 1), (18, 1), (37, 1), (38, 1), (39, 1), (40, 1), (41, 1), (42, 1), (43, 1)],
[
(18, 1), (27, 1), (31, 2), (32, 1), (38, 1), (41, 1), (43, 1),
(44, 1), (45, 1), (46, 1), (47, 1), (48, 1), (49, 1), (50, 1), (51, 1), (52, 1)
]
]
The doc.txt file has the following content −
CNTK formerly known as Computational Network Toolkit is a free easy-to-use open-source commercial-grade toolkit that enable us to train deep learning algorithms to learn like the human brain.
You can find its free tutorial on tutorialspoint.com also provide best technical tutorials on technologies like AI deep learning machine learning for free.
import gensim
from gensim import corpora
from pprint import pprint
from gensim.utils import simple_preprocess
from smart_open import smart_open
import os
doc_tokenized = [
   simple_preprocess(line, deacc=True) for line in open('doc.txt', encoding='utf-8')
]
dictionary = corpora.Dictionary()
BoW_corpus = [dictionary.doc2bow(doc, allow_update=True) for doc in doc_tokenized]
print(BoW_corpus)
We can save the corpus with the help of following script −
corpora.MmCorpus.serialize('/Users/Desktop/BoW_corpus.mm', BoW_corpus)
#provide the path and the name of the corpus. The name of the corpus is BoW_corpus and we saved it in Matrix Market format.
Similarly, we can load the saved corpus by using the following script −
corpus_load = corpora.MmCorpus('/Users/Desktop/BoW_corpus.mm')
for line in corpus_load:
print(line)
This chapter will help you in learning about the various transformations in Gensim. Let us begin by understanding the transforming documents.
Transforming documents means representing the document in such a way that it can be manipulated mathematically. Apart from deducing the latent structure of the corpus, transforming documents will also serve the following goals −
It discovers the relationship between words.
It brings out the hidden structure in the corpus.
It describes the documents in a new and more semantic way.
It makes the representation of the documents more compact.
It improves efficiency because the new representation consumes fewer resources.
It improves efficacy because in the new representation marginal data trends are ignored.
Noise is also reduced in the new document representation.
Let’s see the implementation steps for transforming the documents from one vector space representation to another.
In order to transform documents, we must follow the following steps −
The very first and basic step is to create the corpus from the documents. We have already created the corpus in previous examples. Let’s create another one with some enhancements (removing common words and the words that appear only once) −
import gensim
import pprint
from collections import defaultdict
from gensim import corpora
Now provide the documents for creating the corpus −
t_corpus = ["CNTK formerly known as Computational Network Toolkit", "is a free easy-to-use open-source commercial-grade toolkit", "that enable us to train deep learning algorithms to learn like the human brain.", "You can find its free tutorial on tutorialspoint.com", "Tutorialspoint.com also provide best technical tutorials on technologies like AI deep learning machine learning for free"]
Next, we need to tokenise the documents and, along with it, remove the common words −
stoplist = set('for a of the and to in'.split(' '))
processed_corpus = [
[
word for word in document.lower().split() if word not in stoplist
]
for document in t_corpus
]
The following script will remove those words that appear only once −
frequency = defaultdict(int)
for text in processed_corpus:
for token in text:
frequency[token] += 1
processed_corpus = [
[token for token in text if frequency[token] > 1]
for text in processed_corpus
]
pprint.pprint(processed_corpus)
[
['toolkit'],
['free', 'toolkit'],
['deep', 'learning', 'like'],
['free', 'on', 'tutorialspoint.com'],
['tutorialspoint.com', 'on', 'like', 'deep', 'learning', 'learning', 'free']
]
Now pass it to the corpora.Dictionary() object to get the unique tokens in our corpus −
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
Dictionary(7 unique tokens: ['toolkit', 'free', 'deep', 'learning', 'like']...)
Next, the following line of codes will create the Bag of Word model for our corpus −
BoW_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
pprint.pprint(BoW_corpus)
[
[(0, 1)],
[(0, 1), (1, 1)],
[(2, 1), (3, 1), (4, 1)],
[(1, 1), (5, 1), (6, 1)],
[(1, 1), (2, 1), (3, 2), (4, 1), (5, 1), (6, 1)]
]
The transformations are standard Python objects. We initialise these transformations, i.e. Python objects, by using a trained corpus. Here we are going to use the tf-idf model to create a transformation of our trained corpus, i.e. BoW_corpus.
First, we need to import the models package from gensim.
from gensim import models
Now, we need to initialise the model as follows −
tfidf = models.TfidfModel(BoW_corpus)
Now, in this last step, the vectors will be converted from old representation to new representation. As we have initialised the tfidf model in above step, the tfidf will now be treated as a read only object. Here, by using this tfidf object we will convert our vector from bag of word representation (old representation) to Tfidf real-valued weights (new representation).
doc_BoW = [(1,1),(3,1)]
print(tfidf[doc_BoW])
[(1, 0.4869354917707381), (3, 0.8734379353188121)]
We applied the transformation on two values of corpus, but we can also apply it to the whole corpus as follows −
corpus_tfidf = tfidf[BoW_corpus]
for doc in corpus_tfidf:
print(doc)
[(0, 1.0)]
[(0, 0.8734379353188121), (1, 0.4869354917707381)]
[(2, 0.5773502691896257), (3, 0.5773502691896257), (4, 0.5773502691896257)]
[(1, 0.3667400603126873), (5, 0.657838022678017), (6, 0.657838022678017)]
[
(1, 0.19338287240886842), (2, 0.34687949360312714), (3, 0.6937589872062543),
(4, 0.34687949360312714), (5, 0.34687949360312714), (6, 0.34687949360312714)
]
import gensim
import pprint
from collections import defaultdict
from gensim import corpora
t_corpus = [
"CNTK formerly known as Computational Network Toolkit",
"is a free easy-to-use open-source commercial-grade toolkit",
"that enable us to train deep learning algorithms to learn like the human brain.",
"You can find its free tutorial on tutorialspoint.com",
   "Tutorialspoint.com also provide best technical tutorials on technologies like AI deep learning machine learning for free"
]
stoplist = set('for a of the and to in'.split(' '))
processed_corpus = [
[word for word in document.lower().split() if word not in stoplist]
for document in t_corpus
]
frequency = defaultdict(int)
for text in processed_corpus:
for token in text:
frequency[token] += 1
processed_corpus = [
[token for token in text if frequency[token] > 1]
for text in processed_corpus
]
pprint.pprint(processed_corpus)
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
BoW_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
pprint.pprint(BoW_corpus)
from gensim import models
tfidf = models.TfidfModel(BoW_corpus)
doc_BoW = [(1,1),(3,1)]
print(tfidf[doc_BoW])
corpus_tfidf = tfidf[BoW_corpus]
for doc in corpus_tfidf:
print(doc)
Using Gensim, we can implement various popular transformations, i.e. Vector Space Model algorithms. Some of them are as follows −
During initialisation, this tf-idf model algorithm expects a training corpus having integer values (such as Bag-of-Words model). Then after that, at the time of transformation, it takes a vector representation and returns another vector representation.
The output vector will have the same dimensionality but the value of the rare features (at the time of training) will be increased. It basically converts integer-valued vectors into real-valued vectors. Following is the syntax of Tf-idf transformation −
Model=models.TfidfModel(corpus, normalize=True)
LSI model algorithm can transform documents from either an integer-valued vector model (such as the Bag-of-Words model) or Tf-Idf weighted space into latent space. The output vector will be of lower dimensionality. Following is the syntax of LSI transformation −
Model=models.LsiModel(tfidf_corpus, id2word=dictionary, num_topics=300)
LDA model algorithm is another algorithm that transforms documents from Bag-of-Words model space into a topic space. The output vector will be of lower dimensionality. Following is the syntax of LDA transformation −
Model=models.LdaModel(corpus, id2word=dictionary, num_topics=100)
RP, a very efficient approach, aims to reduce the dimensionality of the vector space. This approach basically approximates the Tf-Idf distances between the documents by throwing in a little randomness.
Model=models.RpModel(tfidf_corpus, num_topics=500)
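The intuition behind Random Projections can be sketched with NumPy. This is only a toy illustration of a distance-preserving random projection, not RpModel's actual internals; the matrix sizes and the seed are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# 100 documents represented as 1000-dimensional tf-idf-like vectors
X = rng.random((100, 1000))

# Project down to 50 dimensions with a random Gaussian matrix
k = 50
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(1000, k))
X_low = X @ R

# Pairwise distances are roughly preserved after projection
d_high = np.linalg.norm(X[0] - X[1])
d_low = np.linalg.norm(X_low[0] - X_low[1])
```

Even though the dimensionality drops by a factor of 20, `d_low` stays close to `d_high`, which is why this kind of projection approximates the Tf-Idf distances between documents well.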
HDP is a non-parametric Bayesian method and a newer addition to Gensim. We have to take care while using it.
Model=models.HdpModel(corpus, id2word=dictionary)
Here, we will learn about creating Term Frequency-Inverse Document Frequency (TF-IDF) Matrix with the help of Gensim.
It is the Term Frequency-Inverse Document Frequency model which is also a bag-of-words model. It is different from the regular corpus because it down weights the tokens i.e. words appearing frequently across documents. During initialisation, this tf-idf model algorithm expects a training corpus having integer values (such as Bag-of-Words model).
Then, at the time of transformation, it takes a vector representation and returns another vector representation. The output vector will have the same dimensionality, but the value of the rare features (at the time of training) will be increased. It basically converts integer-valued vectors into real-valued vectors.
TF-IDF model computes tfidf with the help of following two simple steps −
In this first step, the model will multiply a local component such as TF (Term Frequency) with a global component such as IDF (Inverse Document Frequency).
Once done with multiplication, in the next step TFIDF model will normalize the result to the unit length.
As a result of these above two steps frequently occurred words across the documents will get down-weighted.
Here, we are going to implement an example to see how we can get TF-IDF weights. Basically, in order to get TF-IDF weights, first we need to train the corpus and then apply that corpus within the tfidf model.
As said above, to get the TF-IDF weights we first need to train our corpus. First, we need to import all the necessary packages as follows −
import gensim
import pprint
from gensim import corpora, models
from gensim.utils import simple_preprocess
Now provide the list containing sentences. We have three sentences in our list −
doc_list = [
"Hello, how are you?", "How do you do?",
"Hey what are you doing? yes you What are you doing?"
]
Next, do tokenisation of the sentences as follows −
doc_tokenized = [simple_preprocess(doc) for doc in doc_list]
Create an object of corpora.Dictionary() as follows −
dictionary = corpora.Dictionary()
Now pass these tokenised sentences to dictionary.doc2bow() object as follows −
BoW_corpus = [dictionary.doc2bow(doc, allow_update=True) for doc in doc_tokenized]
Next, we will get the word ids and their frequencies in our documents.
for doc in BoW_corpus:
print([[dictionary[id], freq] for id, freq in doc])
[['are', 1], ['hello', 1], ['how', 1], ['you', 1]]
[['how', 1], ['you', 1], ['do', 2]]
[['are', 2], ['you', 3], ['doing', 2], ['hey', 1], ['what', 2], ['yes', 1]]
In this way we have trained our corpus (Bag-of-Word corpus).
Next, we need to apply this trained corpus within the tfidf model models.TfidfModel().
First, import the numpy package −
import numpy as np
Now, apply our trained corpus (BoW_corpus) within models.TfidfModel() as follows −
tfidf = models.TfidfModel(BoW_corpus, smartirs='ntc')
Next, we will get the word ids and their frequencies in our tfidf modeled corpus −
for doc in tfidf[BoW_corpus]:
print([[dictionary[id], np.around(freq, decimals=2)] for id, freq in doc])
[['are', 0.33], ['hello', 0.89], ['how', 0.33]]
[['how', 0.18], ['do', 0.98]]
[['are', 0.23], ['doing', 0.62], ['hey', 0.31], ['what', 0.62], ['yes', 0.31]]
[['are', 1], ['hello', 1], ['how', 1], ['you', 1]]
[['how', 1], ['you', 1], ['do', 2]]
[['are', 2], ['you', 3], ['doing', 2], ['hey', 1], ['what', 2], ['yes', 1]]
[['are', 0.33], ['hello', 0.89], ['how', 0.33]]
[['how', 0.18], ['do', 0.98]]
[['are', 0.23], ['doing', 0.62], ['hey', 0.31], ['what', 0.62], ['yes', 0.31]]
From the above outputs, we see the difference in the frequencies of the words in our documents.
import gensim
import pprint
from gensim import corpora, models
from gensim.utils import simple_preprocess
doc_list = [
"Hello, how are you?", "How do you do?",
"Hey what are you doing? yes you What are you doing?"
]
doc_tokenized = [simple_preprocess(doc) for doc in doc_list]
dictionary = corpora.Dictionary()
BoW_corpus = [dictionary.doc2bow(doc, allow_update=True) for doc in doc_tokenized]
for doc in BoW_corpus:
print([[dictionary[id], freq] for id, freq in doc])
import numpy as np
tfidf = models.TfidfModel(BoW_corpus, smartirs='ntc')
for doc in tfidf[BoW_corpus]:
print([[dictionary[id], np.around(freq, decimals=2)] for id, freq in doc])
As discussed above, the words that occur more frequently across the documents get smaller weights. Let’s understand the difference in the weights of words from the above two outputs. The word ‘are’ occurs in two documents and has been weighted down. Similarly, the word ‘you’ appears in all the documents and has been removed altogether.
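The weights above can be reproduced by hand. Below is a minimal pure-Python sketch, assuming Gensim's default idf of log2(total_docs / docs_containing_term) followed by unit-length (cosine) normalisation, which is what smartirs='ntc' corresponds to:

```python
import math

# The three tokenised documents from the example above
docs = [
    ["hello", "how", "are", "you"],
    ["how", "do", "you", "do"],
    ["hey", "what", "are", "you", "doing", "yes", "you", "what", "are", "you", "doing"],
]
D = len(docs)

def doc_freq(term):
    # Number of documents that contain the term at least once
    return sum(term in d for d in docs)

def tfidf(doc):
    # Step 1: local term frequency times global inverse document frequency
    raw = {t: doc.count(t) * math.log2(D / doc_freq(t)) for t in set(doc)}
    # Step 2: normalise the vector to unit (cosine) length
    norm = math.sqrt(sum(w * w for w in raw.values()))
    return {t: round(w / norm, 2) for t, w in raw.items() if w > 0}

weights = tfidf(docs[0])  # matches the first output line above
```

The word ‘you’ appears in every document, so its idf is log2(3/3) = 0 and it vanishes from the weights, exactly as seen in the output above.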
This chapter deals with topic modeling with regards to Gensim.
To annotate our data and understand sentence structure, one of the best methods is to use computational linguistic algorithms. No doubt, with the help of these computational linguistic algorithms we can understand some finer details about our data but,
Can we know what kind of words appear more often than others in our corpus?
Can we group our data?
Can we find underlying themes in our data?
We’d be able to achieve all these with the help of topic modeling. So let’s deep dive into the concept of topic models.
A Topic model may be defined as the probabilistic model containing information about topics in our text. But here, two important questions arise which are as follows −
First, what exactly is a topic?
A topic, as the name implies, is an underlying idea or theme represented in our text. To give you an example, a corpus containing newspaper articles would have topics related to finance, weather, politics, sports, various states' news and so on.
Second, what is the importance of topic models in text processing?
As we know, in order to identify similarity in text, we can use information retrieval and searching techniques based on words. But, with the help of topic models, we can now search and arrange our text files using topics rather than words.
In this sense we can say that topics are the probabilistic distribution of words. That’s why, by using topic models, we can describe our documents as the probabilistic distributions of topics.
As discussed above, the focus of topic modeling is about underlying ideas and themes. Its main goals are as follows −
Topic models can be used for text summarisation.
They can be used to organise the documents. For example, we can use topic modeling to group news articles together into an organised/interconnected section, such as organising all the news articles related to cricket.
They can improve search results. For a search query, we can use topic models to reveal documents having a mix of different keywords but about the same idea.
The concept of recommendations is very useful for marketing. It’s used by various online shopping websites, news websites and many more. Topic models help in making recommendations about what to buy, what to read next etc. by finding materials having a common topic in a list.
Undoubtedly, Gensim is the most popular topic modeling toolkit. Its free availability and being in Python make it more popular. In this section, we will be discussing some most popular topic modeling algorithms. Here, we will focus on ‘what’ rather than ‘how’ because Gensim abstract them very well for us.
Latent Dirichlet allocation (LDA) is the most common and popular technique currently in use for topic modeling. It is the one that the Facebook researchers used in their research paper published in 2013. It was first proposed by David Blei, Andrew Ng, and Michael Jordan in 2003. They proposed LDA in their paper that was entitled simply Latent Dirichlet allocation.
Let’s know more about this wonderful technique through its characteristics −
Probabilistic topic modeling technique
LDA is a probabilistic topic modeling technique. As we discussed above, in topic modeling we assume that in any collection of interrelated documents (could be academic papers, newspaper articles, Facebook posts, Tweets, e-mails and so-on), there are some combinations of topics included in each document.
The main goal of probabilistic topic modeling is to discover the hidden topic structure for collection of interrelated documents. Following three things are generally included in a topic structure −
Topics
Statistical distribution of topics among the documents
Words across a document comprising the topic
Work in an unsupervised way
LDA works in an unsupervised way. This is because LDA uses conditional probabilities to discover the hidden topic structure. It assumes that the topics are unevenly distributed throughout the collection of interrelated documents.
Very easy to create it in Gensim
In Gensim, it is very easy to create an LDA model. We just need to specify the corpus, the dictionary mapping, and the number of topics we would like to use in our model.
Model=models.LdaModel(corpus, id2word=dictionary, num_topics=100)
May face computationally intractable problem
Calculating the probability of every possible topic structure is a computational challenge faced by LDA. It is challenging because it needs to calculate the probability of every observed word under every possible topic structure. If we have a large number of topics and words, LDA may face a computationally intractable problem.
The topic modeling algorithm that was first implemented in Gensim along with Latent Dirichlet Allocation (LDA) is Latent Semantic Indexing (LSI). It is also called Latent Semantic Analysis (LSA).
It got patented in 1988 by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum, and Lynn Streeter. In this section we are going to set up our LSI model. It can be done in the same way as setting up the LDA model. We need to import the LSI model from gensim.models.
Actually, LSI is a technique in NLP, especially in distributional semantics. It analyzes the relationship between a set of documents and the terms these documents contain. It works by constructing a matrix that contains word counts per document from a large piece of text.
Once constructed, to reduce the number of rows, the LSI model uses a mathematical technique called singular value decomposition (SVD). Along with reducing the number of rows, it also preserves the similarity structure among columns. In the matrix, the rows represent unique words and the columns represent each document. It works based on the distributional hypothesis, i.e. it assumes that words that are close in meaning will occur in the same kind of text.
Model=models.LsiModel(corpus, id2word=dictionary, num_topics=100)
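The SVD step can be illustrated with NumPy. This is a toy sketch of the idea behind LSI on a made-up term-document matrix; Gensim's implementation is incremental and far more scalable:

```python
import numpy as np

# Toy term-document count matrix: 4 terms (rows) x 3 documents (columns)
A = np.array([
    [2.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
    [0.0, 2.0, 1.0],
    [0.0, 1.0, 2.0],
])

# Singular value decomposition of the matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the top-k singular values: the k "latent" dimensions
k = 2
doc_vectors = np.diag(s[:k]) @ Vt[:k, :]   # each column = one document in latent space
A_k = U[:, :k] @ doc_vectors               # best rank-k approximation of A
```

Each document is now described by 2 latent dimensions instead of 4 term counts, while `A_k` stays close to `A`. This mirrors how LsiModel maps Bag-of-Words or Tf-Idf vectors into a lower-dimensional latent space.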
Topic models such as LDA and LSI help in summarizing and organizing large archives of texts that are not possible to analyze by hand. Apart from LDA and LSI, one other powerful topic model in Gensim is HDP (Hierarchical Dirichlet Process). It’s basically a mixed-membership model for unsupervised analysis of grouped data. Unlike LDA (its finite counterpart), HDP infers the number of topics from the data.
Model=models.HdpModel(corpus, id2word=dictionary)
This chapter will help you learn how to create Latent Dirichlet allocation (LDA) topic model in Gensim.
Automatically extracting information about topics from large volumes of text is one of the primary applications of NLP (natural language processing). Large volumes of text could be feeds from hotel reviews, tweets, Facebook posts, feeds from any other social media channel, movie reviews, news stories, user feedback, e-mails etc.
In this digital era, knowing what people/customers are talking about and understanding their opinions and problems can be highly valuable for businesses, political campaigns and administrators. But is it possible to manually read through such large volumes of text and then extract the topics discussed?
No, it’s not. It requires an automatic algorithm that can read through these large volumes of text documents and automatically extract the required information/topics discussed in them.
LDA’s approach to topic modeling is to classify text in a document to a particular topic. Modeled as Dirichlet distributions, LDA builds −
A topic per document model and
Words per topic model
After providing the LDA topic model algorithm, in order to obtain a good composition of the topic-keyword distribution, it re-arranges −
The topics distributions within the document and
Keywords distribution within the topics
While processing, some of the assumptions made by LDA are −
Every document is modeled as a multinomial distribution of topics.
Every topic is modeled as a multinomial distribution of words.
We have to choose the right corpus of data because LDA assumes that each chunk of text contains related words.
LDA also assumes that the documents are produced from a mixture of topics.
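These assumptions describe LDA's generative story, which can be sketched as a toy simulation in plain Python. The topic names, word lists, and the two-topic mixture below are made up for illustration; real LDA draws the mixtures from Dirichlet distributions and then works backwards from the observed words:

```python
import random

random.seed(0)

# Two hypothetical topics, each with words it tends to generate
topics = {
    "sports": ["game", "team", "win", "player"],
    "tech":   ["computer", "chip", "software", "system"],
}

def generate_document(n_words=8):
    # Each document is modeled as a mixture of topics...
    mix_sports = random.random()  # proportion of the "sports" topic
    words = []
    for _ in range(n_words):
        # ...and every word is drawn from one topic of that mixture
        topic = "sports" if random.random() < mix_sports else "tech"
        words.append(random.choice(topics[topic]))
    return words

doc = generate_document()
```

Topic modeling is the inverse problem: given only documents like `doc`, recover the topics and each document's mixture.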
Here, we are going to use LDA (Latent Dirichlet Allocation) to extract the naturally discussed topics from dataset.
The dataset which we are going to use is the ’20 Newsgroups’ dataset, having thousands of news articles from various sections of a news report. It is available in scikit-learn's datasets. We can easily download it with the help of the following Python script −
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
Let’s look at some of the sample news with the help of following script −
newsgroups_train.data[:4]
["From: lerxst@wam.umd.edu (where's my thing)\nSubject:
WHAT car is this!?\nNntp-Posting-Host: rac3.wam.umd.edu\nOrganization:
University of Maryland, College Park\nLines:
15\n\n I was wondering if anyone out there could enlighten me on this car
I saw\nthe other day. It was a 2-door sports car, looked to be from the
late 60s/\nearly 70s. It was called a Bricklin. The doors were really small.
In addition,\nthe front bumper was separate from the rest of the body.
This is \nall I know. If anyone can tellme a model name,
engine specs, years\nof production, where this car is made, history, or
whatever info you\nhave on this funky looking car, please e-mail.\n\nThanks,
\n- IL\n ---- brought to you by your neighborhood Lerxst ----\n\n\n\n\n",
"From: guykuo@carson.u.washington.edu (Guy Kuo)\nSubject: SI Clock Poll - Final
Call\nSummary: Final call for SI clock reports\nKeywords:
SI,acceleration,clock,upgrade\nArticle-I.D.: shelley.1qvfo9INNc3s\nOrganization:
University of Washington\nLines: 11\nNNTP-Posting-Host: carson.u.washington.edu\n\nA
fair number of brave souls who upgraded their SI clock oscillator have\nshared their
experiences for this poll. Please send a brief message detailing\nyour experiences with
the procedure. Top speed attained, CPU rated speed,\nadd on cards and adapters, heat
sinks, hour of usage per day, floppy disk\nfunctionality with 800 and 1.4 m floppies
are especially requested.\n\nI will be summarizing in the next two days, so please add
to the network\nknowledge base if you have done the clock upgrade and haven't answered
this\npoll. Thanks.\n\nGuy Kuo <;guykuo@u.washington.edu>\n",
'From: twillis@ec.ecn.purdue.edu (Thomas E Willis)\nSubject:
PB questions...\nOrganization: Purdue University Engineering
Computer Network\nDistribution: usa\nLines: 36\n\nwell folks,
my mac plus finally gave up the ghost this weekend after\nstarting
life as a 512k way back in 1985. sooo, i\'m in the market for
a\nnew machine a bit sooner than i intended to be...\n\ni\'m looking
into picking up a powerbook 160 or maybe 180 and have a bunch\nof
questions that (hopefully) somebody can answer:\n\n* does anybody
know any dirt on when the next round of powerbook\nintroductions
are expected? i\'d heard the 185c was supposed to make an\nappearence
"this summer" but haven\'t heard anymore on it - and since i\ndon\'t
have access to macleak, i was wondering if anybody out there had\nmore
info...\n\n* has anybody heard rumors about price drops to the powerbook
line like the\nones the duo\'s just went through recently?\n\n* what\'s
the impression of the display on the 180? i could probably swing\na 180
if i got the 80Mb disk rather than the 120, but i don\'t really have\na
feel for how much "better" the display is (yea, it looks great in the\nstore,
but is that all "wow" or is it really that good?). could i solicit\nsome
opinions of people who use the 160 and 180 day-to-day on if its
worth\ntaking the disk size and money hit to get the active display?
(i realize\nthis is a real subjective question, but i\'ve only played around
with the\nmachines in a computer store breifly and figured the opinions
of somebody\nwho actually uses the machine daily might prove helpful).\n\n*
how well does hellcats perform? ;)\n\nthanks a bunch in advance for any info -
if you could email, i\'ll post a\nsummary (news reading time is at a premium
with finals just around the\ncorner... :
( )\n--\nTom Willis \\ twillis@ecn.purdue.edu \\ Purdue Electrical
Engineering\n---------------------------------------------------------------------------\
n"Convictions are more dangerous enemies of truth than lies." - F. W.\nNietzsche\n',
'From: jgreen@amber (Joe Green)\nSubject: Re: Weitek P9000 ?\nOrganization:
Harris Computer Systems Division\nLines: 14\nDistribution: world\nNNTP-Posting-Host:
amber.ssd.csd.harris.com\nX-Newsreader: TIN [version 1.1 PL9]\n\nRobert
J.C. Kyanko (rob@rjck.UUCP) wrote:\n >abraxis@iastate.edu writes in article
<abraxis.734340159@class1.iastate.edu >:\n> > Anyone know about the
Weitek P9000 graphics chip?\n > As far as the low-level stuff goes, it looks
pretty nice. It\'s got this\n> quadrilateral fill command that requires just
the four points.\n\nDo you have Weitek\'s address/phone number? I\'d like to get
some information\nabout this chip.\n\n--\nJoe Green\t\t\t\tHarris
Corporation\njgreen@csd.harris.com\t\t\tComputer Systems Division\n"The only
thing that really scares me is a person with no sense of humor.
"\n\t\t\t\t\t\t-- Jonathan Winters\n']
We need stopwords from NLTK and the English model from spaCy. Both can be downloaded as follows −
import nltk
nltk.download('stopwords')
import spacy
nlp = spacy.load('en_core_web_md', disable=['parser', 'ner'])
In order to build LDA model we need to import following necessary package −
import re
import numpy as np
import pandas as pd
from pprint import pprint
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
import spacy
import pyLDAvis
import pyLDAvis.gensim
import matplotlib.pyplot as plt
Now, we need to import the Stopwords and use them −
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
stop_words.extend(['from', 'subject', 're', 'edu', 'use'])
Now, with the help of Gensim’s simple_preprocess() we need to tokenise each sentence into a list of words. We should also remove the punctuations and unnecessary characters. In order to do this, we will create a function named sent_to_words() −
def sent_to_words(sentences):
for sentence in sentences:
yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))
data = newsgroups_train.data
data_words = list(sent_to_words(data))
As we know, bigrams are two words that frequently occur together in the document and trigrams are three words that frequently occur together in the document. With the help of Gensim’s Phrases model, we can do this −
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100)
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
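What the Phrases model does can be pictured with a simplified pure-Python version. This is only a toy sketch of the idea (join adjacent words that co-occur often enough); Gensim's actual scorer combines min_count and threshold in a more elaborate formula:

```python
from collections import Counter

sentences = [
    ["machine", "learning", "is", "fun"],
    ["machine", "learning", "rocks"],
    ["deep", "learning", "too"],
]

# Count how often each adjacent word pair occurs across all sentences
pair_counts = Counter(
    (a, b) for sent in sentences for a, b in zip(sent, sent[1:])
)

def merge_bigrams(sent, min_count=2):
    # Greedily join adjacent pairs seen at least min_count times, as "a_b"
    out, i = [], 0
    while i < len(sent):
        if i + 1 < len(sent) and pair_counts[(sent[i], sent[i + 1])] >= min_count:
            out.append(sent[i] + "_" + sent[i + 1])
            i += 2
        else:
            out.append(sent[i])
            i += 1
    return out

merged = [merge_bigrams(s) for s in sentences]
```

Here "machine learning" occurs twice and becomes the single token "machine_learning", while "deep learning" occurs only once and is left alone.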
Next, we need to filter out the Stopwords. Along with that, we will also create functions to make bigrams, trigrams and for lemmatisation −
def remove_stopwords(texts):
return [[word for word in simple_preprocess(str(doc))
if word not in stop_words] for doc in texts]
def make_bigrams(texts):
return [bigram_mod[doc] for doc in texts]
def make_trigrams(texts):
return [trigram_mod[bigram_mod[doc]] for doc in texts]
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
texts_out = []
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
return texts_out
We now need to build the dictionary & corpus. We did it in the previous examples as well −
id2word = corpora.Dictionary(data_lemmatized)
texts = data_lemmatized
corpus = [id2word.doc2bow(text) for text in texts]
We already implemented everything that is required to train the LDA model. Now, it is the time to build the LDA topic model. For our implementation example, it can be done with the help of following line of codes −
lda_model = gensim.models.ldamodel.LdaModel(
corpus=corpus, id2word=id2word, num_topics=20, random_state=100,
update_every=1, chunksize=100, passes=10, alpha='auto', per_word_topics=True
)
Let’s see the complete implementation example to build LDA topic model −
import re
import numpy as np
import pandas as pd
from pprint import pprint
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
import spacy
import pyLDAvis
import pyLDAvis.gensim
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
stop_words.extend(['from', 'subject', 're', 'edu', 'use'])
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
data = newsgroups_train.data
data = [re.sub(r'\S*@\S*\s?', '', sent) for sent in data]
data = [re.sub(r'\s+', ' ', sent) for sent in data]
data = [re.sub(r"\'", "", sent) for sent in data]
def sent_to_words(sentences):
   for sentence in sentences:
      yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))
data_words = list(sent_to_words(data))
print(data_words[:4]) #it will print the tokenised data.
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100)
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
def remove_stopwords(texts):
return [[word for word in simple_preprocess(str(doc))
if word not in stop_words] for doc in texts]
def make_bigrams(texts):
return [bigram_mod[doc] for doc in texts]
def make_trigrams(texts):
   return [trigram_mod[bigram_mod[doc]] for doc in texts]
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
texts_out = []
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
return texts_out
data_words_nostops = remove_stopwords(data_words)
data_words_bigrams = make_bigrams(data_words_nostops)
nlp = spacy.load('en_core_web_md', disable=['parser', 'ner'])
data_lemmatized = lemmatization(data_words_bigrams, allowed_postags=[
'NOUN', 'ADJ', 'VERB', 'ADV'
])
print(data_lemmatized[:4]) #it will print the lemmatized data.
id2word = corpora.Dictionary(data_lemmatized)
texts = data_lemmatized
corpus = [id2word.doc2bow(text) for text in texts]
print(corpus[:4]) #it will print the corpus we created above.
[[(id2word[id], freq) for id, freq in cp] for cp in corpus[:4]]
#it will print the words with their frequencies.
lda_model = gensim.models.ldamodel.LdaModel(
corpus=corpus, id2word=id2word, num_topics=20, random_state=100,
update_every=1, chunksize=100, passes=10, alpha='auto', per_word_topics=True
)
We can now use the above created LDA model to get the topics, to compute Model Perplexity.
In this chapter, we will understand how to use Latent Dirichlet Allocation (LDA) topic model.
The LDA model (lda_model) we have created above can be used to view the topics from the documents. It can be done with the help of following script −
pprint(lda_model.print_topics())
doc_lda = lda_model[corpus]
[
(0,
'0.036*"go" + 0.027*"get" + 0.021*"time" + 0.017*"back" + 0.015*"good" + '
'0.014*"much" + 0.014*"be" + 0.013*"car" + 0.013*"well" + 0.013*"year"'),
(1,
'0.078*"screen" + 0.067*"video" + 0.052*"character" + 0.046*"normal" + '
'0.045*"mouse" + 0.034*"manager" + 0.034*"disease" + 0.031*"processor" + '
'0.028*"excuse" + 0.028*"choice"'),
(2,
'0.776*"ax" + 0.079*"_" + 0.011*"boy" + 0.008*"ticket" + 0.006*"red" + '
'0.004*"conservative" + 0.004*"cult" + 0.004*"amazing" + 0.003*"runner" + '
'0.003*"roughly"'),
(3,
'0.086*"season" + 0.078*"fan" + 0.072*"reality" + 0.065*"trade" + '
'0.045*"concept" + 0.040*"pen" + 0.028*"blow" + 0.025*"improve" + '
'0.025*"cap" + 0.021*"penguin"'),
(4,
'0.027*"group" + 0.023*"issue" + 0.016*"case" + 0.016*"cause" + '
'0.014*"state" + 0.012*"whole" + 0.012*"support" + 0.011*"government" + '
'0.010*"year" + 0.010*"rate"'),
(5,
'0.133*"evidence" + 0.047*"believe" + 0.044*"religion" + 0.042*"belief" + '
'0.041*"sense" + 0.041*"discussion" + 0.034*"atheist" + 0.030*"conclusion" + '
'0.029*"explain" + 0.029*"claim"'),
(6,
'0.083*"space" + 0.059*"science" + 0.031*"launch" + 0.030*"earth" + '
'0.026*"route" + 0.024*"orbit" + 0.024*"scientific" + 0.021*"mission" + '
'0.018*"plane" + 0.017*"satellite"'),
(7,
'0.065*"file" + 0.064*"program" + 0.048*"card" + 0.041*"window" + '
'0.038*"driver" + 0.037*"software" + 0.034*"run" + 0.029*"machine" + '
'0.029*"entry" + 0.028*"version"'),
(8,
'0.078*"publish" + 0.059*"mount" + 0.050*"turkish" + 0.043*"armenian" + '
'0.027*"western" + 0.026*"russian" + 0.025*"locate" + 0.024*"proceed" + '
'0.024*"electrical" + 0.022*"terrorism"'),
(9,
'0.023*"people" + 0.023*"child" + 0.021*"kill" + 0.020*"man" + 0.019*"death" '
'+ 0.015*"die" + 0.015*"live" + 0.014*"attack" + 0.013*"age" + '
'0.011*"church"'),
(10,
'0.092*"cpu" + 0.085*"black" + 0.071*"controller" + 0.039*"white" + '
'0.028*"water" + 0.027*"cold" + 0.025*"solid" + 0.024*"cool" + 0.024*"heat" '
'+ 0.023*"nuclear"'),
(11,
'0.071*"monitor" + 0.044*"box" + 0.042*"option" + 0.041*"generate" + '
'0.038*"vote" + 0.032*"battery" + 0.029*"wave" + 0.026*"tradition" + '
'0.026*"fairly" + 0.025*"task"'),
(12,
'0.048*"send" + 0.045*"mail" + 0.036*"list" + 0.033*"include" + '
'0.032*"price" + 0.031*"address" + 0.027*"email" + 0.026*"receive" + '
'0.024*"book" + 0.024*"sell"'),
(13,
'0.515*"drive" + 0.052*"laboratory" + 0.042*"blind" + 0.020*"investment" + '
'0.011*"creature" + 0.010*"loop" + 0.005*"dialog" + 0.000*"slave" + '
'0.000*"jumper" + 0.000*"sector"'),
(14,
'0.153*"patient" + 0.066*"treatment" + 0.062*"printer" + 0.059*"doctor" + '
'0.036*"medical" + 0.031*"energy" + 0.029*"study" + 0.029*"probe" + '
'0.024*"mph" + 0.020*"physician"'),
(15,
'0.068*"law" + 0.055*"gun" + 0.039*"government" + 0.036*"right" + '
'0.029*"state" + 0.026*"drug" + 0.022*"crime" + 0.019*"person" + '
'0.019*"citizen" + 0.019*"weapon"'),
(16,
'0.107*"team" + 0.102*"game" + 0.078*"play" + 0.055*"win" + 0.052*"player" + '
'0.051*"year" + 0.030*"score" + 0.025*"goal" + 0.023*"wing" + 0.023*"run"'),
(17,
'0.031*"say" + 0.026*"think" + 0.022*"people" + 0.020*"make" + 0.017*"see" + '
'0.016*"know" + 0.013*"come" + 0.013*"even" + 0.013*"thing" + 0.013*"give"'),
(18,
'0.039*"system" + 0.034*"use" + 0.023*"key" + 0.016*"bit" + 0.016*"also" + '
'0.015*"information" + 0.014*"source" + 0.013*"chip" + 0.013*"available" + '
'0.010*"provide"'),
(19,
'0.085*"line" + 0.073*"write" + 0.053*"article" + 0.046*"organization" + '
'0.034*"host" + 0.023*"be" + 0.023*"know" + 0.017*"thank" + 0.016*"want" + '
'0.014*"help"')
]
The LDA model (lda_model) we have created above can be used to compute the model’s perplexity, i.e. how good the model is. The lower the perplexity, the better the model. It can be done with the help of the following script; note that log_perplexity() returns the per-word log-likelihood bound rather than the perplexity itself −
print('\nPerplexity: ', lda_model.log_perplexity(corpus))
Perplexity: -12.338664984332151
The LDA model (lda_model) we have created above can be used to compute the model’s coherence score, i.e. the average/median of the pairwise word-similarity scores of the words in the topic. It can be done with the help of the following script −
coherence_model_lda = CoherenceModel(
model=lda_model, texts=data_lemmatized, dictionary=id2word, coherence='c_v'
)
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score: ', coherence_lda)
Coherence Score: 0.510264381411751
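The intuition behind a coherence score can be sketched with the simpler UMass measure in plain Python. This is a toy illustration on a made-up corpus; the c_v measure used above relies on a sliding window and normalised pointwise mutual information and is more involved:

```python
import math

# Toy corpus: each document as a set of words
docs = [
    {"space", "launch", "orbit"},
    {"space", "orbit", "satellite"},
    {"game", "team", "player"},
]

def umass_coherence(topic_words):
    # Sum log((co-document-frequency + 1) / document-frequency) over word pairs
    score = 0.0
    for i in range(1, len(topic_words)):
        for j in range(i):
            wi, wj = topic_words[i], topic_words[j]
            co_df = sum((wi in d) and (wj in d) for d in docs)
            df = sum(wj in d for d in docs)
            score += math.log((co_df + 1) / df)
    return score

coherent = umass_coherence(["space", "orbit"])  # words that co-occur
mixed = umass_coherence(["space", "team"])      # words that never co-occur
```

Words belonging to the same real topic co-occur in documents, so `coherent` scores higher than `mixed`; a good topic model produces topics whose top words behave like the first case.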
The LDA model (lda_model) we have created above can be used to examine the produced topics and the associated keywords. It can be visualised by using the pyLDAvis package as follows −
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim.prepare(lda_model, corpus, id2word)
vis
In the above output, the bubbles on the left side represent topics; the larger the bubble, the more prevalent that topic is. The topic model is good if it has big, non-overlapping bubbles scattered throughout the chart.
This chapter will explain what is a Latent Dirichlet Allocation (LDA) Mallet Model and how to create the same in Gensim.
In the previous section we implemented the LDA model and got the topics from the documents of the 20Newsgroup dataset. That was Gensim’s inbuilt version of the LDA algorithm. There is also a Mallet version accessible through Gensim, which often provides a better quality of topics. Here, we are going to apply Mallet’s LDA on the previous example we have already implemented.
Mallet, an open source toolkit, was written by Andrew McCallum. It is basically a Java-based package used for NLP, document classification, clustering, topic modeling, and many other machine learning applications to text. It provides the Mallet Topic Modeling toolkit, which contains efficient, sampling-based implementations of LDA as well as Hierarchical LDA.
Mallet2.0 is the current release from MALLET, the Java topic modeling toolkit. Before we start using it with Gensim for LDA, we must download the mallet-2.0.8.zip package on our system and unzip it. Once installed and unzipped, set the environment variable %MALLET_HOME% to point to the MALLET directory, either manually or by the code we will be providing while implementing the LDA with Mallet next.
Python provides a Gensim wrapper for Latent Dirichlet Allocation (LDA). The syntax of that wrapper is gensim.models.wrappers.LdaMallet. This module, based on MALLET’s collapsed Gibbs sampling, allows LDA model estimation from a training corpus and inference of topic distribution on new, unseen documents as well.
We will be using LDA Mallet on previously built LDA model and will check the difference in performance by calculating Coherence score.
Before applying the Mallet LDA model to the corpus built in the previous example, we must update the environment variables and provide the path to the Mallet file. It can be done with the help of following code −
import os
from gensim.models.wrappers import LdaMallet
os.environ.update({'MALLET_HOME':r'C:/mallet-2.0.8/'})
#You should update this path as per the path of Mallet directory on your system.
mallet_path = r'C:/mallet-2.0.8/bin/mallet'
Once we have provided the path to the Mallet file, we can use it on the corpus. It can be done with the help of the ldamallet.show_topics() function as follows −
ldamallet = gensim.models.wrappers.LdaMallet(
mallet_path, corpus=corpus, num_topics=20, id2word=id2word
)
pprint(ldamallet.show_topics(formatted=False))
[
(4,
[('gun', 0.024546225966016102),
('law', 0.02181426826996709),
('state', 0.017633545129043606),
('people', 0.017612848479831116),
('case', 0.011341763768445888),
('crime', 0.010596684396796159),
('weapon', 0.00985160502514643),
('person', 0.008671896020034356),
('firearm', 0.00838214293105946),
('police', 0.008257963035784506)]),
(9,
[('make', 0.02147966482730431),
('people', 0.021377478029838543),
('work', 0.018557122419783363),
('money', 0.016676885346413244),
('year', 0.015982015123646026),
('job', 0.012221540976905783),
('pay', 0.010239117106069897),
('time', 0.008910688739014919),
('school', 0.0079092581238504),
('support', 0.007357449417535254)]),
(14,
[('power', 0.018428398507941996),
('line', 0.013784244460364121),
('high', 0.01183271164249895),
('work', 0.011560979224821522),
('ground', 0.010770484918850819),
('current', 0.010745781971789235),
('wire', 0.008399002000938712),
('low', 0.008053160742076529),
('water', 0.006966231071366814),
('run', 0.006892122230182061)]),
(0,
[('people', 0.025218349201353372),
('kill', 0.01500904870564167),
('child', 0.013612400660948935),
('armenian', 0.010307655991816822),
('woman', 0.010287984892595798),
('start', 0.01003226060272248),
('day', 0.00967818081674404),
('happen', 0.009383114328428673),
('leave', 0.009383114328428673),
('fire', 0.009009363443229208)]),
(1,
[('file', 0.030686386604212003),
('program', 0.02227713642901929),
('window', 0.01945561169918489),
('set', 0.015914874783314277),
('line', 0.013831003577619592),
('display', 0.013794120901412606),
('application', 0.012576992586582082),
('entry', 0.009275993066056873),
('change', 0.00872275292295209),
('color', 0.008612104894331132)]),
(12,
[('line', 0.07153810971508515),
('buy', 0.02975597944523662),
('organization', 0.026877236406682988),
('host', 0.025451316957679788),
('price', 0.025182275552207485),
('sell', 0.02461728860071565),
('mail', 0.02192687454599263),
('good', 0.018967419085797303),
('sale', 0.017998870026097017),
('send', 0.013694207538540181)]),
(11,
[('thing', 0.04901329901329901),
('good', 0.0376018876018876),
('make', 0.03393393393393394),
('time', 0.03326898326898327),
('bad', 0.02664092664092664),
('happen', 0.017696267696267698),
('hear', 0.015615615615615615),
('problem', 0.015465465465465466),
('back', 0.015143715143715144),
('lot', 0.01495066495066495)]),
(18,
[('space', 0.020626317374284855),
('launch', 0.00965716006366413),
('system', 0.008560244332602057),
('project', 0.008173097603991913),
('time', 0.008108573149223556),
('cost', 0.007764442723792318),
('year', 0.0076784101174345075),
('earth', 0.007484836753129436),
('base', 0.0067535595990880545),
('large', 0.006689035144319697)]),
(5,
[('government', 0.01918437232469453),
('people', 0.01461203206475212),
('state', 0.011207097828624796),
('country', 0.010214802708381975),
('israeli', 0.010039691804809714),
('war', 0.009436532025838587),
('force', 0.00858043427504086),
('attack', 0.008424780138532182),
('land', 0.0076659662230523775),
('world', 0.0075103120865437)]),
(2,
[('car', 0.041091194044470564),
('bike', 0.015598981291017729),
('ride', 0.011019688510138114),
('drive', 0.010627877363110981),
('engine', 0.009403467528651191),
('speed', 0.008081104907434616),
('turn', 0.007738270153785875),
('back', 0.007738270153785875),
('front', 0.007468899990204721),
('big', 0.007370947203447938)])
]
Now we can also evaluate its performance by calculating the coherence score as follows −
coherence_model_ldamallet = CoherenceModel(
    model=ldamallet, texts=data_lemmatized, dictionary=id2word, coherence='c_v'
)
coherence_ldamallet = coherence_model_ldamallet.get_coherence()
print('Coherence Score: ', coherence_ldamallet)
Coherence Score: 0.5842762900901401
This chapter discusses the documents and LDA model in Gensim.
We can find the optimal number of topics for LDA by creating many LDA models with various numbers of topics. Among those models, we can pick the one having the highest coherence value.
Following function named coherence_values_computation() will train multiple LDA models. It will also provide the models as well as their corresponding coherence score −
def coherence_values_computation(dictionary, corpus, texts, limit, start=2, step=3):
    coherence_values = []
    model_list = []
    for num_topics in range(start, limit, step):
        model = gensim.models.wrappers.LdaMallet(
            mallet_path, corpus=corpus, num_topics=num_topics, id2word=id2word
        )
        model_list.append(model)
        coherencemodel = CoherenceModel(
            model=model, texts=texts, dictionary=dictionary, coherence='c_v'
        )
        coherence_values.append(coherencemodel.get_coherence())
    return model_list, coherence_values
Now with the help of following code, we can get the optimal number of topics which we can show with the help of a graph as well −
model_list, coherence_values = coherence_values_computation(
dictionary=id2word, corpus=corpus, texts=data_lemmatized,
start=1, limit=50, step=8
)
limit=50; start=1; step=8;
x = range(start, limit, step)
plt.plot(x, coherence_values)
plt.xlabel("Num Topics")
plt.ylabel("Coherence score")
plt.legend(["coherence_values"], loc='best')
plt.show()
Next, we can also print the coherence values for various topics as follows −
for m, cv in zip(x, coherence_values):
    print("Num Topics =", m, " is having Coherence Value of", round(cv, 4))
Num Topics = 1 is having Coherence Value of 0.4866
Num Topics = 9 is having Coherence Value of 0.5083
Num Topics = 17 is having Coherence Value of 0.5584
Num Topics = 25 is having Coherence Value of 0.5793
Num Topics = 33 is having Coherence Value of 0.587
Num Topics = 41 is having Coherence Value of 0.5842
Num Topics = 49 is having Coherence Value of 0.5735
Now the question arises: which model should we pick? One good practice is to pick the model giving the highest coherence value before the curve flattens out. That is why we will be choosing the model with 25 topics, which is at number 4 in the above list.
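That “highest coherence before flattening out” rule can be written as a tiny helper. The numbers below are the scores printed above; the min_gain cutoff is an arbitrary choice for illustration −

```python
def pick_num_topics(topic_counts, coherences, min_gain=0.01):
    # Walk the curve left to right and stop once the coherence gain
    # drops below min_gain, i.e. the curve has flattened out.
    best = 0
    for i in range(1, len(coherences)):
        if coherences[i] - coherences[best] < min_gain:
            break
        best = i
    return topic_counts[best]

x = [1, 9, 17, 25, 33, 41, 49]
cv = [0.4866, 0.5083, 0.5584, 0.5793, 0.587, 0.5842, 0.5735]
print(pick_num_topics(x, cv))  # 25, matching the choice made above
```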
optimal_model = model_list[3]
model_topics = optimal_model.show_topics(formatted=False)
pprint(optimal_model.print_topics(num_words=10))
[
(0,
'0.018*"power" + 0.011*"high" + 0.010*"ground" + 0.009*"current" + '
'0.008*"low" + 0.008*"wire" + 0.007*"water" + 0.007*"work" + 0.007*"design" '
'+ 0.007*"light"'),
(1,
'0.036*"game" + 0.029*"team" + 0.029*"year" + 0.028*"play" + 0.020*"player" '
'+ 0.019*"win" + 0.018*"good" + 0.013*"season" + 0.012*"run" + 0.011*"hit"'),
(2,
'0.020*"image" + 0.019*"information" + 0.017*"include" + 0.017*"mail" + '
'0.016*"send" + 0.015*"list" + 0.013*"post" + 0.012*"address" + '
'0.012*"internet" + 0.012*"system"'),
(3,
'0.986*"ax" + 0.002*"_" + 0.001*"tm" + 0.000*"part" + 0.000*"biz" + '
'0.000*"mb" + 0.000*"mbs" + 0.000*"pne" + 0.000*"end" + 0.000*"di"'),
(4,
'0.020*"make" + 0.014*"work" + 0.013*"money" + 0.013*"year" + 0.012*"people" '
'+ 0.011*"job" + 0.010*"group" + 0.009*"government" + 0.008*"support" + '
'0.008*"question"'),
(5,
'0.011*"study" + 0.011*"drug" + 0.009*"science" + 0.008*"food" + '
'0.008*"problem" + 0.008*"result" + 0.008*"effect" + 0.007*"doctor" + '
'0.007*"research" + 0.007*"patient"'),
(6,
'0.024*"gun" + 0.024*"law" + 0.019*"state" + 0.015*"case" + 0.013*"people" + '
'0.010*"crime" + 0.010*"weapon" + 0.010*"person" + 0.008*"firearm" + '
'0.008*"police"'),
(7,
'0.012*"word" + 0.011*"question" + 0.011*"exist" + 0.011*"true" + '
'0.010*"religion" + 0.010*"claim" + 0.008*"argument" + 0.008*"truth" + '
'0.008*"life" + 0.008*"faith"'),
(8,
'0.077*"time" + 0.029*"day" + 0.029*"call" + 0.025*"back" + 0.021*"work" + '
'0.019*"long" + 0.015*"end" + 0.015*"give" + 0.014*"year" + 0.014*"week"'),
(9,
'0.048*"thing" + 0.041*"make" + 0.038*"good" + 0.037*"people" + '
'0.028*"write" + 0.019*"bad" + 0.019*"point" + 0.018*"read" + 0.018*"post" + '
'0.016*"idea"'),
(10,
'0.022*"book" + 0.020*"_" + 0.013*"man" + 0.012*"people" + 0.011*"write" + '
'0.011*"find" + 0.010*"history" + 0.010*"armenian" + 0.009*"turkish" + '
'0.009*"number"'),
(11,
'0.064*"line" + 0.030*"buy" + 0.028*"organization" + 0.025*"price" + '
'0.025*"sell" + 0.023*"good" + 0.021*"host" + 0.018*"sale" + 0.017*"mail" + '
'0.016*"cost"'),
(12,
'0.041*"car" + 0.015*"bike" + 0.011*"ride" + 0.010*"engine" + 0.009*"drive" '
'+ 0.008*"side" + 0.008*"article" + 0.007*"turn" + 0.007*"front" + '
'0.007*"speed"'),
(13,
'0.018*"people" + 0.011*"attack" + 0.011*"state" + 0.011*"israeli" + '
'0.010*"war" + 0.010*"country" + 0.010*"government" + 0.009*"live" + '
'0.009*"give" + 0.009*"land"'),
(14,
'0.037*"file" + 0.026*"line" + 0.021*"read" + 0.019*"follow" + '
'0.018*"number" + 0.015*"program" + 0.014*"write" + 0.012*"entry" + '
'0.012*"give" + 0.011*"check"'),
(15,
'0.196*"write" + 0.172*"line" + 0.165*"article" + 0.117*"organization" + '
'0.086*"host" + 0.030*"reply" + 0.010*"university" + 0.008*"hear" + '
'0.007*"post" + 0.007*"news"'),
(16,
'0.021*"people" + 0.014*"happen" + 0.014*"child" + 0.012*"kill" + '
'0.011*"start" + 0.011*"live" + 0.010*"fire" + 0.010*"leave" + 0.009*"hear" '
'+ 0.009*"home"'),
(17,
'0.038*"key" + 0.018*"system" + 0.015*"space" + 0.015*"technology" + '
'0.014*"encryption" + 0.010*"chip" + 0.010*"bit" + 0.009*"launch" + '
'0.009*"public" + 0.009*"government"'),
(18,
'0.035*"drive" + 0.031*"system" + 0.027*"problem" + 0.027*"card" + '
'0.020*"driver" + 0.017*"bit" + 0.017*"work" + 0.016*"disk" + '
'0.014*"monitor" + 0.014*"machine"'),
(19,
'0.031*"window" + 0.020*"run" + 0.018*"color" + 0.018*"program" + '
'0.017*"application" + 0.016*"display" + 0.015*"set" + 0.015*"version" + '
'0.012*"screen" + 0.012*"problem"')
]
Finding the dominant topic in each sentence is one of the most useful practical applications of topic modeling: it determines what topic a given document is about. Here, we will find the topic number which has the highest percentage contribution in each particular document. In order to aggregate the information in a table, we will be creating a function named dominant_topics() −
def dominant_topics(ldamodel=lda_model, corpus=corpus, texts=data):
    sent_topics_df = pd.DataFrame()
Next, we will get the main topics in every document −
    for i, row in enumerate(ldamodel[corpus]):
        row = sorted(row, key=lambda x: (x[1]), reverse=True)
Next, we will get the Dominant topic, Perc Contribution and Keywords for every document −
        for j, (topic_num, prop_topic) in enumerate(row):
            if j == 0:  # => dominant topic
                wp = ldamodel.show_topic(topic_num)
                topic_keywords = ", ".join([word for word, prop in wp])
                sent_topics_df = sent_topics_df.append(
                    pd.Series([int(topic_num), round(prop_topic, 4), topic_keywords]),
                    ignore_index=True
                )
            else:
                break
    sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
With the help of following code, we will add the original text to the end of the output −
    contents = pd.Series(texts)
    sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
    return sent_topics_df
df_topic_sents_keywords = dominant_topics(
ldamodel=optimal_model, corpus=corpus, texts=data
)
Now, do the formatting of topics in the sentences as follows −
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = [
'Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text'
]
Finally, we can show the dominant topics as follows −
df_dominant_topic.head(15)
In order to understand more about a topic, we can also find the documents to which that topic has contributed the most, and infer the topic by reading those documents.
sent_topics_sorteddf_mallet = pd.DataFrame()
sent_topics_outdf_grpd = df_topic_sents_keywords.groupby('Dominant_Topic')
for i, grp in sent_topics_outdf_grpd:
    sent_topics_sorteddf_mallet = pd.concat(
        [sent_topics_sorteddf_mallet,
         grp.sort_values(['Perc_Contribution'], ascending=[0]).head(1)], axis=0
    )
sent_topics_sorteddf_mallet.reset_index(drop=True, inplace=True)
sent_topics_sorteddf_mallet.columns = [
'Topic_Number', "Contribution_Perc", "Keywords", "Text"
]
sent_topics_sorteddf_mallet.head()
Sometimes we also want to judge how widely the topic is discussed in documents. For this we need to understand the volume and distribution of topics across the documents.
First calculate the number of documents for every Topic as follows −
topic_counts = df_topic_sents_keywords['Dominant_Topic'].value_counts()
Next, calculate the percentage of documents for every topic as follows −
topic_contribution = round(topic_counts/topic_counts.sum(), 4)
Now find the topic Number and Keywords as follows −
topic_num_keywords = df_topic_sents_keywords[['Dominant_Topic', 'Topic_Keywords']]
Now, concatenate them column-wise as follows −
df_dominant_topics = pd.concat(
[topic_num_keywords, topic_counts, topic_contribution], axis=1
)
Next, we will change the Column names as follows −
df_dominant_topics.columns = [
'Dominant-Topic', 'Topic-Keywords', 'Num_Documents', 'Perc_Documents'
]
df_dominant_topics
This chapter deals with creating Latent Semantic Indexing (LSI) and Hierarchical Dirichlet Process (HDP) topic model with regards to Gensim.
The topic modeling algorithm that was first implemented in Gensim, alongside Latent Dirichlet Allocation (LDA), is Latent Semantic Indexing (LSI). It is also called Latent Semantic Analysis (LSA). It was patented in 1988 by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum, and Lynn Streeter.
In this section we are going to set up our LSI model. It can be done in the same way as setting up an LDA model; we need to import the LSI model from gensim.models.
Actually, LSI is an NLP technique, used especially in distributional semantics. It analyses the relationship between a set of documents and the terms those documents contain. Speaking of how it works, it constructs a matrix that contains word counts per document from a large piece of text.
Once the matrix is constructed, to reduce the number of rows, the LSI model uses a mathematical technique called singular value decomposition (SVD). Along with reducing the number of rows, it preserves the similarity structure among columns.
In the matrix, rows represent unique words and columns represent documents. LSI works on the distributional hypothesis, i.e. it assumes that words that are close in meaning will occur in the same kind of text.
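The SVD reduction described above can be sketched with plain NumPy on a toy term-document count matrix. The terms and counts below are invented purely for illustration; they are not taken from the dataset used in this chapter −

```python
import numpy as np

# Toy term-document count matrix: rows are terms, columns are documents.
A = np.array([
    [2, 0, 1, 0],   # "gun"
    [1, 3, 0, 0],   # "law"
    [0, 0, 2, 3],   # "space"
    [0, 1, 1, 2],   # "launch"
], dtype=float)

# SVD factorises A into U * diag(s) * Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keeping only the k largest singular values gives the rank-k
# approximation that LSI uses as its reduced "topic" space.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(A_k.shape)  # (4, 4)
```

Gensim’s LsiModel performs essentially this truncation at scale, with num_topics playing the role of k.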
Here, we are going to use LSI (Latent Semantic Indexing) to extract the naturally discussed topics from dataset.
The dataset which we are going to use is the dataset of ’20 Newsgroups’ having thousands of news articles from various sections of a news report. It is available under Sklearn data sets. We can easily download with the help of following Python script −
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
Let’s look at some of the sample news with the help of following script −
newsgroups_train.data[:4]
["From: lerxst@wam.umd.edu (where's my thing)\nSubject:
WHAT car is this!?\nNntp-Posting-Host: rac3.wam.umd.edu\nOrganization:
University of Maryland, College Park\nLines: 15\n\n
I was wondering if anyone out there could enlighten me on this car
I saw\nthe other day. It was a 2-door sports car,
looked to be from the late 60s/\nearly 70s. It was called a Bricklin.
The doors were really small. In addition,\nthe front bumper was separate from
the rest of the body. This is \nall I know. If anyone can tellme a model name,
engine specs, years\nof production, where this car is made, history, or
whatever info you\nhave on this funky looking car,
please e-mail.\n\nThanks,\n- IL\n ---- brought to you by your neighborhood
Lerxst ----\n\n\n\n\n",
"From: guykuo@carson.u.washington.edu (Guy Kuo)\nSubject:
SI Clock Poll - Final Call\nSummary: Final call for SI clock reports\nKeywords:
SI,acceleration,clock,upgrade\nArticle-I.D.: shelley.1qvfo9INNc3s\nOrganization:
University of Washington\nLines: 11\nNNTP-Posting-Host: carson.u.washington.edu\n\nA
fair number of brave souls who upgraded their SI clock oscillator have\nshared their
experiences for this poll. Please send a brief message detailing\nyour experiences with
the procedure. Top speed attained, CPU rated speed,\nadd on cards and adapters, heat
sinks, hour of usage per day, floppy disk\nfunctionality with 800 and 1.4 m floppies
are especially requested.\n\nI will be summarizing in the next two days, so please add
to the network\nknowledge base if you have done the clock upgrade and haven't answered
this\npoll. Thanks.\n\nGuy Kuo <guykuo@u.washington.edu>\n",
'From: twillis@ec.ecn.purdue.edu (Thomas E Willis)\nSubject:
PB questions...\nOrganization: Purdue University Engineering Computer
Network\nDistribution: usa\nLines: 36\n\nwell folks, my mac plus finally gave up the
ghost this weekend after\nstarting life as a 512k way back in 1985. sooo, i\'m in the
market for a\nnew machine a bit sooner than i intended to be...\n\ni\'m looking into
picking up a powerbook 160 or maybe 180 and have a bunch\nof questions that (hopefully)
somebody can answer:\n\n* does anybody know any dirt on when the next round of
powerbook\nintroductions are expected? i\'d heard the 185c was supposed to make
an\nappearence "this summer" but haven\'t heard anymore on it - and since i\ndon\'t
have access to macleak, i was wondering if anybody out there had\nmore info...\n\n* has
anybody heard rumors about price drops to the powerbook line like the\nones the duo\'s
just went through recently?\n\n* what\'s the impression of the display on the 180? i
could probably swing\na 180 if i got the 80Mb disk rather than the 120, but i don\'t
really have\na feel for how much "better" the display is (yea, it looks great in
the\nstore, but is that all "wow" or is it really that good?). could i solicit\nsome
opinions of people who use the 160 and 180 day-to-day on if its worth\ntaking the disk
size and money hit to get the active display? (i realize\nthis is a real subjective
question, but i\'ve only played around with the\nmachines in a computer store breifly
and figured the opinions of somebody\nwho actually uses the machine daily might prove
helpful).\n\n* how well does hellcats perform? ;)\n\nthanks a bunch in advance for any
info - if you could email, i\'ll post a\nsummary (news reading time is at a premium
with finals just around the\ncorner... :( )\n--\nTom Willis \\ twillis@ecn.purdue.edu
\\ Purdue Electrical
Engineering\n---------------------------------------------------------------------------\
n"Convictions are more dangerous enemies of truth than lies." - F. W.\nNietzsche\n',
'From: jgreen@amber (Joe Green)\nSubject: Re: Weitek P9000 ?\nOrganization: Harris
Computer Systems Division\nLines: 14\nDistribution: world\nNNTP-Posting-Host:
amber.ssd.csd.harris.com\nX-Newsreader: TIN [version 1.1 PL9]\n\nRobert J.C. Kyanko
(rob@rjck.UUCP) wrote:\n > abraxis@iastate.edu writes in article <
abraxis.734340159@class1.iastate.edu>:\n> > Anyone know about the Weitek P9000
graphics chip?\n > As far as the low-level stuff goes, it looks pretty nice. It\'s
got this\n > quadrilateral fill command that requires just the four
points.\n\nDo you have Weitek\'s address/phone number? I\'d like to get some
information\nabout this chip.\n\n--\nJoe Green\t\t\t\tHarris
Corporation\njgreen@csd.harris.com\t\t\tComputer Systems Division\n"The only thing that
really scares me is a person with no sense of humor."\n\t\t\t\t\t\t-- Jonathan
Winters\n']
We need stopwords from NLTK and an English model from spaCy. Both can be downloaded as follows −
import nltk
nltk.download('stopwords')
nlp = spacy.load('en_core_web_md', disable=['parser', 'ner'])
In order to build LSI model we need to import following necessary package −
import re
import numpy as np
import pandas as pd
from pprint import pprint
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
import spacy
import matplotlib.pyplot as plt
Now we need to import the Stopwords and use them −
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
stop_words.extend(['from', 'subject', 're', 'edu', 'use'])
Now, with the help of Gensim’s simple_preprocess() we need to tokenise each sentence into a list of words. We should also remove the punctuations and unnecessary characters. In order to do this, we will create a function named sent_to_words() −
def sent_to_words(sentences):
    for sentence in sentences:
        yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))
data_words = list(sent_to_words(data))
As we know that bigrams are two words that are frequently occurring together in the document and trigram are three words that are frequently occurring together in the document. With the help of Gensim’s Phrases model, we can do this −
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100)
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
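Conceptually, Phrases promotes a word pair to a bigram when a co-occurrence score clears the threshold. The helper below is a simplified sketch of the default scoring formula, not Gensim’s actual implementation −

```python
from collections import Counter

def bigram_score(wa, wb, word_counts, pair_counts, vocab_size, min_count=5):
    # Simplified form of the default Phrases scorer:
    # (count(a, b) - min_count) / (count(a) * count(b)) * vocab_size
    pair = pair_counts.get((wa, wb), 0)
    if pair < min_count:
        return 0.0
    return (pair - min_count) / (word_counts[wa] * word_counts[wb]) * vocab_size

# Toy corpus in which "new york" co-occurs often
sentences = [['new', 'york', 'trip'], ['new', 'york', 'city'],
             ['new', 'york', 'subway'], ['brand', 'new', 'car'],
             ['new', 'york', 'pizza'], ['new', 'york', 'times'],
             ['new', 'york', 'weather']]
word_counts = Counter(w for s in sentences for w in s)
pair_counts = Counter(p for s in sentences for p in zip(s, s[1:]))
score = bigram_score('new', 'york', word_counts, pair_counts, len(word_counts))
print(score > 0)  # True: "new york" appears 6 times, clearing min_count
```

In Gensim, a pair whose score clears the threshold argument is then joined by the Phraser into a single token such as new_york.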
Next, we need to filter out the Stopwords. Along with that, we will also create functions to make bigrams, trigrams and for lemmatisation −
def remove_stopwords(texts):
    return [[word for word in simple_preprocess(str(doc))
             if word not in stop_words] for doc in texts]

def make_bigrams(texts):
    return [bigram_mod[doc] for doc in texts]

def make_trigrams(texts):
    return [trigram_mod[bigram_mod[doc]] for doc in texts]

def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
    texts_out = []
    for sent in texts:
        doc = nlp(" ".join(sent))
        texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
    return texts_out
We now need to build the dictionary & corpus. We did it in the previous examples as well −
id2word = corpora.Dictionary(data_lemmatized)
texts = data_lemmatized
corpus = [id2word.doc2bow(text) for text in texts]
We already implemented everything that is required to train the LSI model. Now, it is the time to build the LSI topic model. For our implementation example, it can be done with the help of following line of codes −
lsi_model = gensim.models.lsimodel.LsiModel(
corpus=corpus, id2word=id2word, num_topics=20,chunksize=100
)
Let’s see the complete implementation example to build the LSI topic model −
import re
import numpy as np
import pandas as pd
from pprint import pprint
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
import spacy
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
stop_words.extend(['from', 'subject', 're', 'edu', 'use'])
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
data = newsgroups_train.data
data = [re.sub('\S*@\S*\s?', '', sent) for sent in data]
data = [re.sub('\s+', ' ', sent) for sent in data]
data = [re.sub("\'", "", sent) for sent in data]
def sent_to_words(sentences):
    for sentence in sentences:
        yield(gensim.utils.simple_preprocess(str(sentence), deacc=True))
data_words = list(sent_to_words(data))
print(data_words[:4]) #it will print the tokenized data
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100)
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
def remove_stopwords(texts):
    return [[word for word in simple_preprocess(str(doc))
             if word not in stop_words] for doc in texts]

def make_bigrams(texts):
    return [bigram_mod[doc] for doc in texts]

def make_trigrams(texts):
    return [trigram_mod[bigram_mod[doc]] for doc in texts]

def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
    texts_out = []
    for sent in texts:
        doc = nlp(" ".join(sent))
        texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
    return texts_out
data_words_nostops = remove_stopwords(data_words)
data_words_bigrams = make_bigrams(data_words_nostops)
nlp = spacy.load('en_core_web_md', disable=['parser', 'ner'])
data_lemmatized = lemmatization(
data_words_bigrams, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']
)
print(data_lemmatized[:4]) #it will print the lemmatized data.
id2word = corpora.Dictionary(data_lemmatized)
texts = data_lemmatized
corpus = [id2word.doc2bow(text) for text in texts]
print(corpus[:4]) #it will print the corpus we created above.
[[(id2word[id], freq) for id, freq in cp] for cp in corpus[:4]]
#it will print the words with their frequencies.
lsi_model = gensim.models.lsimodel.LsiModel(
corpus=corpus, id2word=id2word, num_topics=20,chunksize=100
)
We can now use the above created LSI model to get the topics.
The LSI model (lsi_model) we have created above can be used to view the topics from the documents. It can be done with the help of following script −
pprint(lsi_model.print_topics())
doc_lsi = lsi_model[corpus]
[
(0,
'1.000*"ax" + 0.001*"_" + 0.000*"tm" + 0.000*"part" + 0.000*"pne" + '
'0.000*"biz" + 0.000*"mbs" + 0.000*"end" + 0.000*"fax" + 0.000*"mb"'),
(1,
'0.239*"say" + 0.222*"file" + 0.189*"go" + 0.171*"know" + 0.169*"people" + '
'0.147*"make" + 0.140*"use" + 0.135*"also" + 0.133*"see" + 0.123*"think"')
]
Topic models such as LDA and LSI help in summarising and organising large archives of texts that are not possible to analyse by hand. Apart from LDA and LSI, another powerful topic model in Gensim is HDP (Hierarchical Dirichlet Process). It is basically a mixed-membership model for unsupervised analysis of grouped data. Unlike LDA (its finite counterpart), HDP infers the number of topics from the data.
For implementing HDP in Gensim, we need a trained corpus and dictionary (as built in the above examples while implementing the LDA and LSI topic models). The HDP topic model can be imported from gensim.models.HdpModel. Here too we will apply the HDP topic model to the 20Newsgroups data, and the steps are the same.
For the corpus and dictionary (created in the above examples for the LSI and LDA models), we can import HdpModel as follows −
Hdp_model = gensim.models.hdpmodel.HdpModel(corpus=corpus, id2word=id2word)
The HDP model (Hdp_model) can be used to view the topics from the documents. It can be done with the help of following script −
pprint(Hdp_model.print_topics())
[
(0,
'0.009*line + 0.009*write + 0.006*say + 0.006*article + 0.006*know + '
'0.006*people + 0.005*make + 0.005*go + 0.005*think + 0.005*be'),
(1,
'0.016*line + 0.011*write + 0.008*article + 0.008*organization + 0.006*know '
'+ 0.006*host + 0.006*be + 0.005*get + 0.005*use + 0.005*say'),
(2,
'0.810*ax + 0.001*_ + 0.000*tm + 0.000*part + 0.000*mb + 0.000*pne + '
'0.000*biz + 0.000*end + 0.000*wwiz + 0.000*fax'),
(3,
'0.015*line + 0.008*write + 0.007*organization + 0.006*host + 0.006*know + '
'0.006*article + 0.005*use + 0.005*thank + 0.004*get + 0.004*problem'),
(4,
'0.004*line + 0.003*write + 0.002*believe + 0.002*think + 0.002*article + '
'0.002*belief + 0.002*say + 0.002*see + 0.002*look + 0.002*organization'),
(5,
'0.005*line + 0.003*write + 0.003*organization + 0.002*article + 0.002*time '
'+ 0.002*host + 0.002*get + 0.002*look + 0.002*say + 0.001*number'),
(6,
'0.003*line + 0.002*say + 0.002*write + 0.002*go + 0.002*gun + 0.002*get + '
'0.002*organization + 0.002*bill + 0.002*article + 0.002*state'),
(7,
'0.003*line + 0.002*write + 0.002*article + 0.002*organization + 0.001*none '
'+ 0.001*know + 0.001*say + 0.001*people + 0.001*host + 0.001*new'),
(8,
'0.004*line + 0.002*write + 0.002*get + 0.002*team + 0.002*organization + '
'0.002*go + 0.002*think + 0.002*know + 0.002*article + 0.001*well'),
(9,
'0.004*line + 0.002*organization + 0.002*write + 0.001*be + 0.001*host + '
'0.001*article + 0.001*thank + 0.001*use + 0.001*work + 0.001*run'),
(10,
'0.002*line + 0.001*game + 0.001*write + 0.001*get + 0.001*know + '
'0.001*thing + 0.001*think + 0.001*article + 0.001*help + 0.001*turn'),
(11,
'0.002*line + 0.001*write + 0.001*game + 0.001*organization + 0.001*say + '
'0.001*host + 0.001*give + 0.001*run + 0.001*article + 0.001*get'),
(12,
'0.002*line + 0.001*write + 0.001*know + 0.001*time + 0.001*article + '
'0.001*get + 0.001*think + 0.001*organization + 0.001*scope + 0.001*make'),
(13,
'0.002*line + 0.002*write + 0.001*article + 0.001*organization + 0.001*make '
'+ 0.001*know + 0.001*see + 0.001*get + 0.001*host + 0.001*really'),
(14,
'0.002*write + 0.002*line + 0.002*know + 0.001*think + 0.001*say + '
'0.001*article + 0.001*argument + 0.001*even + 0.001*card + 0.001*be'),
(15,
'0.001*article + 0.001*line + 0.001*make + 0.001*write + 0.001*know + '
'0.001*say + 0.001*exist + 0.001*get + 0.001*purpose + 0.001*organization'),
(16,
'0.002*line + 0.001*write + 0.001*article + 0.001*insurance + 0.001*go + '
'0.001*be + 0.001*host + 0.001*say + 0.001*organization + 0.001*part'),
(17,
'0.001*line + 0.001*get + 0.001*hit + 0.001*go + 0.001*write + 0.001*say + '
'0.001*know + 0.001*drug + 0.001*see + 0.001*need'),
(18,
'0.002*option + 0.001*line + 0.001*flight + 0.001*power + 0.001*software + '
'0.001*write + 0.001*add + 0.001*people + 0.001*organization + 0.001*module'),
(19,
'0.001*shuttle + 0.001*line + 0.001*roll + 0.001*attitude + 0.001*maneuver + '
'0.001*mission + 0.001*also + 0.001*orbit + 0.001*produce + 0.001*frequency')
]
This chapter will help us understand developing word embeddings in Gensim.
Word embedding, an approach to represent words and documents, is a dense vector representation for text where words having the same meaning have a similar representation. Following are some characteristics of word embedding −
It is a class of technique which represents the individual words as real-valued vectors in a pre-defined vector space.
This technique is often lumped into the field of DL (deep learning) because every word is mapped to one vector and the vector values are learned in the same way a NN (Neural Networks) does.
The key approach of word embedding technique is a dense distributed representation for every word.
As discussed above, word embedding methods/algorithms learn a real-valued vector representation from a corpus of text. The learning process can either be joint with a NN model on a task like document classification, or be an unsupervised process such as document statistics. Here we are going to discuss two methods/algorithms that can be used to learn a word embedding from text −
Word2Vec, developed by Tomas Mikolov et al. at Google in 2013, is a statistical method for efficiently learning a word embedding from a text corpus. It was developed as a response to make NN-based training of word embeddings more efficient, and it has become the de facto standard for word embedding.
Word embedding by Word2Vec involves analysis of the learned vectors as well as exploration of vector math on representation of words. Following are the two different learning methods which can be used as the part of Word2Vec method −
CBoW (Continuous Bag of Words) Model
Continuous Skip-Gram Model
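The two regimes differ in how training pairs are drawn from a window around each position. The sketch below only generates the (input, output) pairs, which is the part that distinguishes them; it is an illustration, not Gensim’s internals −

```python
def training_pairs(tokens, window=1, skip_gram=True):
    # Skip-gram predicts each context word from the center word;
    # CBOW predicts the center word from its whole context.
    pairs = []
    for i, center in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window), min(len(tokens), i + window + 1))
                   if j != i]
        if skip_gram:
            pairs.extend((center, c) for c in context)
        else:
            pairs.append((tuple(context), center))
    return pairs

sent = ['we', 'are', 'implementing', 'word2vec']
print(training_pairs(sent)[:2])  # [('we', 'are'), ('are', 'we')]
```

In Gensim’s Word2Vec class, the sg parameter switches between the two regimes (sg=0 for CBOW, sg=1 for skip-gram).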
GloVe (Global Vectors for Word Representation) is an extension to the Word2Vec method. It was developed by Pennington et al. at Stanford. The GloVe algorithm is a mix of both −
Global statistics of matrix factorization techniques like LSA (Latent Semantic Analysis)
Local context-based learning in Word2Vec.
In terms of how it works: instead of using a window to define local context, GloVe constructs an explicit word co-occurrence matrix using statistics across the whole text corpus.
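As an illustrative sketch (not the GloVe algorithm itself, which also weights the counts and factorises the matrix), a sentence-level word co-occurrence matrix can be counted in a few lines of Python:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each sentence is a list of tokens (purely illustrative).
corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat"],
]

# Count how often each unordered pair of words appears in the same sentence.
cooc = Counter()
for sentence in corpus:
    for w1, w2 in combinations(sentence, 2):
        cooc[tuple(sorted((w1, w2)))] += 1

print(cooc[("cat", "the")])  # 1: "the" and "cat" co-occur once
print(cooc[("sat", "the")])  # 2: "the" and "sat" co-occur twice
```

GloVe then fits word vectors so that their dot products reproduce the statistics of such a matrix taken over the whole corpus.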
Here, we will develop a Word2Vec embedding by using Gensim. In order to work with a Word2Vec model, Gensim provides us the Word2Vec class, which can be imported from models.word2vec. In practice, word2vec requires a lot of text, e.g. the entire Amazon review corpus, but here we will apply the principle to a small, in-memory text.
First we need to import the Word2Vec class from gensim.models as follows −
from gensim.models import Word2Vec
Next, we need to define the training data. Rather than taking a big text file, we are using some sentences to implement this principle.
sentences = [
['this', 'is', 'gensim', 'tutorial', 'for', 'free'],
['this', 'is', 'the', 'tutorials' 'point', 'website'],
['you', 'can', 'read', 'technical','tutorials', 'for','free'],
['we', 'are', 'implementing','word2vec'],
['learn', 'full', 'gensim', 'tutorial']
]
Once the training data is provided, we need to train the model. It can be done as follows −
model = Word2Vec(sentences, min_count=1)
We can summarise the model as follows −
print(model)
We can summarise the vocabulary as follows −
words = list(model.wv.vocab)
print(words)
Next, let’s access the vector for one word. We are doing it for the word ‘tutorial’.
print(model['tutorial'])
Next, we need to save the model −
model.save('model.bin')
Next, we need to load the model −
new_model = Word2Vec.load('model.bin')
Finally, print the saved model as follows −
print(new_model)
from gensim.models import Word2Vec
sentences = [
['this', 'is', 'gensim', 'tutorial', 'for', 'free'],
['this', 'is', 'the', 'tutorials' 'point', 'website'],
['you', 'can', 'read', 'technical','tutorials', 'for','free'],
['we', 'are', 'implementing','word2vec'],
['learn', 'full', 'gensim', 'tutorial']
]
model = Word2Vec(sentences, min_count=1)
print(model)
words = list(model.wv.vocab)
print(words)
print(model['tutorial'])
model.save('model.bin')
new_model = Word2Vec.load('model.bin')
print(new_model)
Word2Vec(vocab=20, size=100, alpha=0.025)
[
'this', 'is', 'gensim', 'tutorial', 'for', 'free', 'the', 'tutorialspoint',
'website', 'you', 'can', 'read', 'technical', 'tutorials', 'we', 'are',
'implementing', 'word2vec', 'learn', 'full'
]
[
-2.5256255e-03 -4.5352755e-03 3.9024993e-03 -4.9509313e-03
-1.4255195e-03 -4.0217536e-03 4.9407515e-03 -3.5925603e-03
-1.1933431e-03 -4.6682903e-03 1.5440651e-03 -1.4101702e-03
3.5070938e-03 1.0914479e-03 2.3334436e-03 2.4452661e-03
-2.5336299e-04 -3.9676363e-03 -8.5054158e-04 1.6443320e-03
-4.9968651e-03 1.0974540e-03 -1.1123562e-03 1.5393364e-03
9.8941079e-04 -1.2656028e-03 -4.4471184e-03 1.8309267e-03
4.9302122e-03 -1.0032534e-03 4.6892050e-03 2.9563988e-03
1.8730218e-03 1.5343715e-03 -1.2685956e-03 8.3664013e-04
4.1721235e-03 1.9445885e-03 2.4097660e-03 3.7517555e-03
4.9687522e-03 -1.3598346e-03 7.1032363e-04 -3.6595813e-03
6.0000515e-04 3.0872561e-03 -3.2115565e-03 3.2270295e-03
-2.6354722e-03 -3.4988276e-04 1.8574356e-04 -3.5757164e-03
7.5391348e-04 -3.5205986e-03 -1.9795434e-03 -2.8321696e-03
4.7155009e-03 -4.3349937e-04 -1.5320212e-03 2.7013756e-03
-3.7055744e-03 -4.1658725e-03 4.8034848e-03 4.8594419e-03
3.7129463e-03 4.2385766e-03 2.4612297e-03 5.4920948e-04
-3.8912550e-03 -4.8226118e-03 -2.2763973e-04 4.5571579e-03
-3.4609400e-03 2.7903817e-03 -3.2709218e-03 -1.1036445e-03
2.1492650e-03 -3.0384419e-04 1.7709908e-03 1.8429896e-03
-3.4038599e-03 -2.4872608e-03 2.7693063e-03 -1.6352943e-03
1.9182395e-03 3.7772327e-03 2.2769428e-03 -4.4629495e-03
3.3151123e-03 4.6509290e-03 -4.8521687e-03 6.7615538e-04
3.1034781e-03 2.6369948e-05 4.1454583e-03 -3.6932561e-03
-1.8769916e-03 -2.1958587e-04 6.3395966e-04 -2.4969708e-03
]
Word2Vec(vocab=20, size=100, alpha=0.025)
We can also explore the word embedding with visualisation. It can be done by using a classical projection method (like PCA) to reduce the high-dimensional word vectors to two dimensions. Once reduced, we can then plot them on a graph.
First, we need to retrieve all the vectors from a trained model as follows −
Z = model[model.wv.vocab]
Next, we need to create a 2-D PCA model of word vectors by using PCA class as follows −
pca = PCA(n_components=2)
result = pca.fit_transform(Z)
Now, we can plot the resulting projection by using the matplotlib as follows −
pyplot.scatter(result[:, 0], result[:, 1])
We can also annotate the points on the graph with the words themselves. Plot the resulting projection by using matplotlib as follows −
words = list(model.wv.vocab)
for i, word in enumerate(words):
pyplot.annotate(word, xy=(result[i, 0], result[i, 1]))
from gensim.models import Word2Vec
from sklearn.decomposition import PCA
from matplotlib import pyplot
sentences = [
['this', 'is', 'gensim', 'tutorial', 'for', 'free'],
['this', 'is', 'the', 'tutorials' 'point', 'website'],
['you', 'can', 'read', 'technical','tutorials', 'for','free'],
['we', 'are', 'implementing','word2vec'],
['learn', 'full', 'gensim', 'tutorial']
]
model = Word2Vec(sentences, min_count=1)
X = model[model.wv.vocab]
pca = PCA(n_components=2)
result = pca.fit_transform(X)
pyplot.scatter(result[:, 0], result[:, 1])
words = list(model.wv.vocab)
for i, word in enumerate(words):
pyplot.annotate(word, xy=(result[i, 0], result[i, 1]))
pyplot.show()
The Doc2Vec model, in contrast to the Word2Vec model, is used to create a vectorised representation of a group of words taken collectively as a single unit. It does not simply average the vectors of the words in the document.
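For contrast, the naive baseline of averaging word vectors can be sketched with NumPy. The three-dimensional vectors below are made up purely for illustration; in a real setting they would come from a trained model:

```python
import numpy as np

# Made-up word vectors (a trained model would supply these).
word_vectors = {
    "violent": np.array([0.2, -0.1, 0.4]),
    "means":   np.array([0.0,  0.3, 0.1]),
    "destroy": np.array([0.5,  0.2, -0.3]),
}

# Averaging ignores word order and context; Doc2Vec instead learns
# a dedicated document vector jointly with the word vectors.
doc = ["violent", "means", "destroy"]
avg_vector = np.mean([word_vectors[w] for w in doc], axis=0)
print(avg_vector)
```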
Here to create document vectors using Doc2Vec, we will be using text8 dataset which can be downloaded from gensim.downloader.
We can download the text8 dataset by using the following commands −
import gensim
import gensim.downloader as api
dataset = api.load("text8")
data = [d for d in dataset]
It will take some time to download the text8 dataset.
In order to train the model, we need the tagged documents, which can be created by using models.doc2vec.TaggedDocument() as follows −
def tagged_document(list_of_list_of_words):
for i, list_of_words in enumerate(list_of_list_of_words):
yield gensim.models.doc2vec.TaggedDocument(list_of_words, [i])
data_for_training = list(tagged_document(data))
We can print the prepared dataset as follows −
print(data_for_training[:1])
[TaggedDocument(words=['anarchism', 'originated', 'as', 'a', 'term', 'of',
'abuse', 'first', 'used', 'against', 'early', 'working', 'class', 'radicals',
'including', 'the', 'diggers', 'of', 'the', 'english', 'revolution',
'and', 'the', 'sans', 'culottes', 'of', 'the', 'french', 'revolution',
'whilst', 'the', 'term', 'is', 'still', 'used', 'in', 'a', 'pejorative',
'way', 'to', 'describe', 'any', 'act', 'that', 'used', 'violent',
'means', 'to', 'destroy',
'the', 'organization', 'of', 'society', 'it', 'has', 'also', 'been'
, 'taken', 'up', 'as', 'a', 'positive', 'label', 'by', 'self', 'defined',
'anarchists', 'the', 'word', 'anarchism', 'is', 'derived', 'from', 'the',
'greek', 'without', 'archons', 'ruler', 'chief', 'king', 'anarchism',
'as', 'a', 'political', 'philosophy', 'is', 'the', 'belief', 'that',
'rulers', 'are', 'unnecessary', 'and', 'should', 'be', 'abolished',
'although', 'there', 'are', 'differing', 'interpretations', 'of',
'what', 'this', 'means', 'anarchism', 'also', 'refers', 'to',
'related', 'social', 'movements', 'that', 'advocate', 'the',
'elimination', 'of', 'authoritarian', 'institutions', 'particularly',
'the', 'state', 'the', 'word', 'anarchy', 'as', 'most', 'anarchists',
'use', 'it', 'does', 'not', 'imply', 'chaos', 'nihilism', 'or', 'anomie',
'but', 'rather', 'a', 'harmonious', 'anti', 'authoritarian', 'society',
'in', 'place', 'of', 'what', 'are', 'regarded', 'as', 'authoritarian',
'political', 'structures', 'and', 'coercive', 'economic', 'institutions',
'anarchists', 'advocate', 'social', 'relations', 'based', 'upon', 'voluntary',
'association', 'of', 'autonomous', 'individuals', 'mutual', 'aid', 'and',
'self', 'governance', 'while', 'anarchism', 'is', 'most', 'easily', 'defined',
'by', 'what', 'it', 'is', 'against', 'anarchists', 'also', 'offer',
'positive', 'visions', 'of', 'what', 'they', 'believe', 'to', 'be', 'a',
'truly', 'free', 'society', 'however', 'ideas', 'about', 'how', 'an', 'anarchist',
'society', 'might', 'work', 'vary', 'considerably', 'especially', 'with',
'respect', 'to', 'economics', 'there', 'is', 'also', 'disagreement', 'about',
'how', 'a', 'free', 'society', 'might', 'be', 'brought', 'about', 'origins',
'and', 'predecessors', 'kropotkin', 'and', 'others', 'argue', 'that', 'before',
'recorded', 'history', 'human', 'society', 'was', 'organized', 'on', 'anarchist',
'principles', 'most', 'anthropologists', 'follow', 'kropotkin', 'and', 'engels',
'in', 'believing', 'that', 'hunter', 'gatherer', 'bands', 'were', 'egalitarian',
'and', 'lacked', 'division', 'of', 'labour', 'accumulated', 'wealth', 'or', 'decreed',
'law', 'and', 'had', 'equal', 'access', 'to', 'resources', 'william', 'godwin',
'anarchists', 'including', 'the', 'the', 'anarchy', 'organisation', 'and', 'rothbard',
'find', 'anarchist', 'attitudes', 'in', 'taoism', 'from', 'ancient', 'china',
'kropotkin', 'found', 'similar', 'ideas', 'in', 'stoic', 'zeno', 'of', 'citium',
'according', 'to', 'kropotkin', 'zeno', 'repudiated', 'the', 'omnipotence', 'of',
'the', 'state', 'its', 'intervention', 'and', 'regimentation', 'and', 'proclaimed',
'the', 'sovereignty', 'of', 'the', 'moral', 'law', 'of', 'the', 'individual', 'the',
'anabaptists', 'of', 'one', 'six', 'th', 'century', 'europe', 'are', 'sometimes',
'considered', 'to', 'be', 'religious', 'forerunners', 'of', 'modern', 'anarchism',
'bertrand', 'russell', 'in', 'his', 'history', 'of', 'western', 'philosophy',
'writes', 'that', 'the', 'anabaptists', 'repudiated', 'all', 'law', 'since',
'they', 'held', 'that', 'the', 'good', 'man', 'will', 'be', 'guided', 'at',
'every', 'moment', 'by', 'the', 'holy', 'spirit', 'from', 'this', 'premise',
'they', 'arrive', 'at', 'communism', 'the', 'diggers', 'or', 'true', 'levellers',
'were', 'an', 'early', 'communistic', 'movement',
(truncated...)
Once the tagged data is ready, we need to initialise the model. It can be done as follows −
model = gensim.models.doc2vec.Doc2Vec(vector_size=40, min_count=2, epochs=30)
Now, build the vocabulary as follows −
model.build_vocab(data_for_training)
Now, let’s train the Doc2Vec model as follows −
model.train(data_for_training, total_examples=model.corpus_count, epochs=model.epochs)
Finally, we can analyse the output by using model.infer_vector() as follows −
print(model.infer_vector(['violent', 'means', 'to', 'destroy', 'the','organization']))
import gensim
import gensim.downloader as api
dataset = api.load("text8")
data = [d for d in dataset]
def tagged_document(list_of_list_of_words):
for i, list_of_words in enumerate(list_of_list_of_words):
yield gensim.models.doc2vec.TaggedDocument(list_of_words, [i])
data_for_training = list(tagged_document(data))
print(data_for_training[:1])
model = gensim.models.doc2vec.Doc2Vec(vector_size=40, min_count=2, epochs=30)
model.build_vocab(data_for_training)
model.train(data_for_training, total_examples=model.corpus_count, epochs=model.epochs)
print(model.infer_vector(['violent', 'means', 'to', 'destroy', 'the','organization']))
[
-0.2556166 0.4829361 0.17081228 0.10879577 0.12525807 0.10077011
-0.21383236 0.19294572 0.11864349 -0.03227958 -0.02207291 -0.7108424
0.07165232 0.24221905 -0.2924459 -0.03543589 0.21840079 -0.1274817
0.05455418 -0.28968817 -0.29146606 0.32885507 0.14689675 -0.06913587
-0.35173815 0.09340707 -0.3803535 -0.04030455 -0.10004586 0.22192696
0.2384828 -0.29779273 0.19236489 -0.25727913 0.09140676 0.01265439
0.08077634 -0.06902497 -0.07175519 -0.22583418 -0.21653089 0.00347822
-0.34096122 -0.06176808 0.22885063 -0.37295452 -0.08222228 -0.03148199
-0.06487323 0.11387568
]
Creating a Distributed Computer Cluster with Python and Dask | by Matthew Grint | Towards Data Science

Calculating a correlation matrix can very quickly consume a vast amount of computational resources. Fortunately, correlation (and covariance) calculations can be intelligently split into multiple processes and distributed across a number of computers.
In this article, we will use Dask for Python to manage the parallel computation of a large correlation matrix across a number of computers on a Local Area Network.
A correlation matrix shows the linear statistical relationship between two variables. Famously, correlation does not imply causation, but we still regularly make use of it as part of our efforts to understand the datasets we work with.
If you just want to go ahead and compute a correlation matrix on your own home cluster of computers then skip this section, but if you are interested in how to actually calculate correlation mathematically then read on.
One easy way to visualise the steps is to recreate this in Excel, so I will show you in Excel how to calculate the correlation and covariance between three sets of Foreign Exchange data.
First, we take our three sets of time-series data, arrange it by columns for each currency pair, and then take the mean (sum of values, divided by number of values) of each set of FX rates:
Then for each time series data point, we calculate the difference of that point from the mean:
We then square each of those differences which will give us our variance for each of the datasets.
We then calculate the sum of the range of squared values, divided by our sample size minus one, giving us our Variance, 0.0013% for GBPUSD for example as seen in row 3. We subtract one from the count of our data points as this is a sample of our dataset (we do not have every single historical datapoint) rather than the entire population. This is known as the “sample variance”. If we had the entire population we would not subtract one.
We can also calculate the Standard Deviation at this stage, which is simply the square root of the variance:
We can calculate the Covariance between two given datasets by multiplying the two “Difference from mean” values at each point in time. The example below shows the single point-in-time covariance calculation for GBPUSD and JPYUSD:
We have called GBPUSD dataset A, JPYUSD dataset B, and so on. With four individual datasets we then have 6 combinations for our covariance calculation: AB, AC, AD, BC, BD, and CD. All of these are calculated in the same way, by multiplying the differences from the means of each dataset at each point in time.
We then take the mean of these values, in the same way as we did for the variance — subtracting one from the denominator — to get the covariance:
And finally, we can calculate the correlation of these two datasets by dividing the covariance by the product of the two standard deviations:
Mathematically, we can represent this as :
This may look daunting but it is just what we calculated already. Looking at the very right-hand equation, the top of the fraction is saying “take the difference of each data point in dataset X from the mean of dataset X, and multiply that by the equivalent calculation in dataset Y” and then sum all of those values together. The x with the subscript i just means a given point of dataset x, whilst x̅ means the average for the dataset. The same goes for the y-values.
The denominator is saying, again, for every point in X, subtract the mean value of X from that point and then square the value you get. Sum all of those together and do the same for Y. Now multiply those two values together and take the square root of that number. The result is the product of the standard deviations of the two datasets.
This is exactly what we have just calculated for ourselves and it’s exactly what we are doing today, except we are using a home computer cluster to calculate correlations like this across a large number of financial instruments for a significant number of points in time.
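The whole walkthrough can be reproduced in a few lines of NumPy. The two series below are made-up values; note ddof=1, which gives the sample (n - 1) denominator used above:

```python
import numpy as np

# Two short made-up price series (values purely illustrative).
x = np.array([1.30, 1.32, 1.31, 1.35])
y = np.array([0.90, 0.93, 0.91, 0.95])

n = len(x)
# Sample covariance: divide by n - 1, exactly as in the walkthrough above.
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)
# Correlation: covariance divided by the product of sample standard deviations.
corr = cov_xy / (x.std(ddof=1) * y.std(ddof=1))

# Matches NumPy's built-in estimator.
print(np.isclose(corr, np.corrcoef(x, y)[0, 1]))  # True
```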
The Network
The first step is to establish your computation environment on every computer that will form part of the cluster. You will need to make sure the computers can see each other on the Local Network (through firewall exceptions). You will also need to note the IP address of each computer.
On Windows you can go to a Command Prompt and type:
ipconfig
This will give you your IP address on the Local Network. On a Linux machine you can type:
hostname -I
Or on OSX the below should work:
ipconfig getifaddr en0
You should note each of the IP addresses on each of your machines as you will need this to set-up the cluster. You can also check that each computer can see the others by running the command:
ping <Target IP Address>
If this responds with data then the two computers can see each other; if not, you will need to double-check the IP address and ensure that a firewall is not blocking communication.
We are going to use three computers as part of our small cluster: 192.168.1.1, 192.168.1.2 and 192.168.1.3.
Python and Dask
One very important point to note is that each Python environment must be identical on each computer. If there are inconsistencies in library versions this can cause problems.
If it is easier, use Anaconda or a similar tool to create a new Python environment with matching versions of the libraries you will be using such as Numpy and Pandas.
You will also need to install Dask itself on each of the computers. You can do this using Conda:
conda install dask distributed -c conda-forge
Or you can use Pip:
python -m pip install dask distributed --upgrade
You should now have consistent Python installations, with matching library versions, across all of the computers in your cluster.
In this example, I am using a large dataset of daily equities data. This contains the daily close prices of ~7,000 US equities from 2000 to 2020. This would be extremely time-consuming to compute on any one of the personal computers that form part of my cluster. When I tried the calculation initially the system crashed after a few hours due to hitting a memory limit when writing to the page file.
These data are stored in a large SQL table which we are going to access sequentially for each stock and then merge. This proved significantly faster than performing the manipulations in Python.
We will also be using a different implementation of a MySQL Database Connector. The standard MySQL connector library is relatively slow when dealing with large datasets because it is itself written in Python. The C implementation of this connector is much faster and you can find the installation details here:
mysqlclient.readthedocs.io
Once MySQLdb is installed you can begin to query the data. I am firstly extracting a list of all unique, chronologically-ordered dates from the table. This will be the golden source of each timestep in the overall database.
I am then extracting, stock-by-stock, the time series data, and left-joining this to the golden time series source. I then add this individual dataframe to a list of dataframes, one for each stock that has been extracted. I defer the merge operation to combine each stock time series into a single dataset until later.
This query approach can itself be parallelised, where a single query to extract all data from the table could not be.
This leaves us with a list of datasets with a common time-series index which we can then work with.
Here we are extracting our list of stocks, building our golden source for the time-series, and then iterating through the list extracting the time series data for each stock in this common format.
The query where we extract the stock data:
SELECT date, close as {} FROM history where stock = '{}'
takes the closing price of the stock and names that column with the ticker for that stock. This means that instead of having 7,000 columns each with the name “close” we will have 7,000 columns each named with the relevant ticker code and mapped to a common time-series. This will be much easier to deal with.
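The join pattern itself can be sketched with made-up frames (the tickers and prices are hypothetical). Each per-stock frame is left-joined onto the golden list of dates, so any gap in a stock's history simply becomes NaN:

```python
import pandas as pd

# Golden source of dates (in the article this comes from SQL).
dates = pd.DataFrame({"date": pd.date_range("2020-01-01", periods=3)})

# Per-stock frames, each price column named after its ticker.
aapl = pd.DataFrame({"date": dates["date"][:2], "AAPL": [300.0, 302.5]})
amzn = pd.DataFrame({"date": dates["date"], "AMZN": [1890.0, 1902.0, 1910.0]})

merged = dates
for frame in [aapl, amzn]:
    merged = merged.merge(frame, on="date", how="left")  # left join keeps all dates

print(merged.shape)  # (3, 3); AAPL has a NaN on the date it is missing
```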
Now for the good bit. We are going to get the data in its final format and begin to calculate the correlation matrix.
Here we are setting the date index for each of the dataframes in our list and merging them together. Because we named each column after the relevant stock ticker, we now have one large table which has a common list of dates as an index and a labeled column for each stock. We’re going to simplify the table where we can by dropping and column with all “NaN” values — this is where there is no relevant data.
This is the exact format we need to calculate our correlation matrix so we are going to:
import dask.dataframe as dd
and create a Dask dataframe
merged = dd.from_pandas(merged, npartitions=20)
This is the time when you will need to make an important design decision that will significantly impact the speed of processing the correlation matrix.
Here we are converting the Pandas dataframe into a Dask dataframe and we also have to specify a number, here “20”, for the number of partitions in the dataset.
Each partition will be treated as an independent unit for the purposes of parallel calculation. If you select “1” for example, the correlation matrix will be calculated by a single thread on a single CPU and you will achieve no parallelisation benefit.
On the other end of the spectrum if you set this number too high your performance will suffer as there is an overhead to loading and processing each task.
Initially, I set this equal to the number of threads across the computers in my mini-cluster. However, I soon found that one of the machines was much slower and the two faster machines were sat idle waiting for this one to complete. To mitigate this I increased the number of processes so that the faster computers could continue to work on later tasks whilst the slower one continued to work on the initially allocated processes.
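A rough heuristic for that decision, with hypothetical thread counts for a cluster like the one described, is to oversubscribe relative to the total number of threads so the faster machines keep pulling new partitions:

```python
# Hypothetical thread counts for each worker in the cluster.
worker_threads = [8, 8, 4]

# Oversubscribe so fast workers keep pulling new partitions
# while the slow worker finishes its initial allocation.
oversubscription = 2
npartitions = sum(worker_threads) * oversubscription
print(npartitions)  # 40
```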
After making this decision, we convert the merged dataframe from Float32 to Float16, as the extra precision is unnecessary and will slow our next calculations.
We then run the critical line:
merged = merged.corr().compute()
There are two functions here, .corr() which calculates our correlation matrix, and .compute() which sends it to Dask to calculate.
After this has processed, we will receive a correlation matrix showing the correlation of each stock with each other. Below is a simplified illustration of what this could look like (with made-up data, as we haven't calculated the matrix yet).
You will notice there is a lot of repetition and unnecessary data here. The correlation of AAPL with APPL is 1. The correlation of AMZN with AMZN is 1. This is going to be the case across the diagonal of your entire matrix.
Similarly, there is duplication. The correlation of AMZN with AAPL is clearly the same as the correlation of AAPL with AMZN (it’s the same thing!).
There is no point in duplicating all of this data, particularly for such a large dataset, when you come to save this back in your dataset. So let’s get rid of this duplication.
You will notice that the unnecessary data values form half of the dataset, across a diagonal shape. So let’s mask out that diagonal before we save down this dataset.
corrs = merged.mask(np.tril(np.ones(merged.shape)).astype(bool))

This line does exactly that. np.ones() creates a matrix of ones in the shape of our correlation matrix, and np.tril() keeps only its lower triangle (including the diagonal), zeroing out everything above it. Casting the result to bool turns that lower triangle into a True mask. Finally, Pandas' mask() replaces every value where the mask is True with NaN, leaving only the unique upper-triangle correlations as our new correlation matrix. (Note: np.bool is deprecated in recent NumPy versions; the built-in bool works in its place.)
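On a tiny made-up matrix, the effect of the mask is easy to verify: only the three unique pairwise correlations survive.

```python
import numpy as np
import pandas as pd

# A tiny symmetric "correlation matrix" with 1s on the diagonal.
corr = pd.DataFrame(
    [[1.0, 0.8, 0.3],
     [0.8, 1.0, 0.5],
     [0.3, 0.5, 1.0]],
    index=["A", "B", "C"], columns=["A", "B", "C"],
)

# np.tril keeps the lower triangle (including the diagonal);
# masking it leaves only the unique upper-triangle correlations.
mask = np.tril(np.ones(corr.shape)).astype(bool)
upper = corr.mask(mask)

print(int(upper.count().sum()))  # 3 values survive: AB, AC, BC
```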
We can then convert this simplified matrix into a table which we can upload back into our database.
levels = merged.columns.nlevels

df1 = corrs.stack(list(range(levels))).reset_index()
del df1["level_0"]
del df1["level_2"]
df1.columns = ["Stock 1", "Stock 2", "Correlation"]

print("Uploading SQL")
df1.to_sql("correlations", con=engine, if_exists='append', chunksize=1000, index=False)

print("Process Complete")
Here we convert this back into a table for our database by using the stack() function. This gives us a table that looks something like this:
You can see there is no duplication of values or redundant 1s showing a stock’s correlation with itself. This can then be uploaded back into the database and the process is complete.
There is one final, crucial line we must add to our code but first we need to create our cluster.
On each computer you will need to open a terminal with your Python environment active. If you are using Anaconda you can use the GUI to launch a terminal, or you can go to Anaconda Prompt and select your environment using:
conda activate <environment>
You will now need to decide which computer will act as a scheduler and manage the distribution of the tasks and which will act as workers. It is perfectly possible for a computer to act as both scheduler and worker.
On the scheduler, go ahead and enter the following into the terminal:
dask-scheduler
This will launch the scheduler and manage the workflow around your cluster. Make a note of the local network IP address that you have done this on, in our case, we will use 192.168.1.1.
We will want this machine to also act as a worker, so we will open another terminal and type:
dask-worker 192.168.1.1:8786
This should show that we have established a connection between the worker and the scheduler.
Now we need to repeat this command on each of the other computers in our cluster, with each pointing towards the scheduler hosted at 192.168.1.1.
I mentioned earlier that one of the computers was slower than the others. It did, however, have twice as much RAM, which can be handy for assembling the correlation matrix at the end. Instead of removing it from the cluster entirely, I decided to limit the number of processes it could run by restricting the number of threads available to Dask. You can do this by appending the following to your dask-worker instruction:
dask-worker 192.168.1.1:8786 --nprocs 1 --nthreads 1
Now your cluster is up and running, you can add the crucial line to your code. Add the following near the top of your Python file (or in any case, above your call to .corr().compute()):
from distributed.client import _get_global_client, Client

client = _get_global_client() or Client('192.168.1.1:8786')
Make sure you replace the IP address with that of your scheduler. This will point Dask towards the scheduler to manage the computation.
Now, you can run your program.
Dask provides an excellent Dashboard for monitoring the progress of your computation.
Go to 192.168.1.1:8787 (or whichever IP address you have established your scheduler on using the port 8787).
This will show you the Dask Dashboard so you can track progress and monitor the utilisation of the cores and memory.
The “Workers” page above shows you the nodes in your cluster and their current levels of utilisation. You can use this to track which are doing the most work and make sure this remains in an acceptable range. It is worth noting the CPU utilisation percentage is based on each core, so you can have 200% CPU utilisation if two cores are used, or 800% for eight cores etc.
The status screen shows the breakdown of the tasks being worked on. The screenshot above shows that we have converted 4 of the 20 partitions from Pandas dataframes into Dask dataframes, with the others being worked on across the cluster. We can also see that we have already stored over 4 GB of data, with plenty more to come.
The Graph screen shows graphically how the task has been divided up. We can see that each partition is being handled separately, so there are 20 partitions as expected. Each partition goes through three stages: stage 0, where it is converted from Pandas to Dask; stage 1, where the correlation matrix for that particular partition is calculated; and stage 2, where all the partitions are combined and the cross-correlations are calculated.
The System screen shows the current system utilisation of resources, including CPU, Memory, and Bandwidth. This shows you how much computational power is being drawn by the cluster at any given time and is useful for seeing the current strain and load on the network.
You can use these screens to watch your calculation progress and you will gradually see the various Tasks turn green as they finish calculating. The results will then be propagated across the network to a single node which will then compile the results and return them to your Python program.
As you have already coded it to do, the resultant matrix will then be slimmed down to its relevant diagonal part, tabulated, and uploaded to your database.
You have now successfully set up a computer cluster and used it to calculate a correlation matrix, simplify your results, and store them away for further analysis.
You can use these core components to do many more interesting things with your cluster, from machine learning to statistical analysis.
Let me know how you get on.
You can follow me on Twitter @mgrint or see more from me at https://grint.tech. Email me at matt@grint.tech.
Compound assignment operators in Java
The Assignment Operators
Following are the assignment operators supported by the Java language −
public class Test {
public static void main(String args[]) {
int a = 10;
int b = 20;
int c = 0;
c = a + b;
System.out.println("c = a + b = " + c );
c += a ;
System.out.println("c += a = " + c );
c -= a ;
System.out.println("c -= a = " + c );
c *= a ;
System.out.println("c *= a = " + c );
a = 10;
c = 15;
c /= a ;
System.out.println("c /= a = " + c );
a = 10;
c = 15;
c %= a ;
System.out.println("c %= a = " + c );
c <<= 2 ;
System.out.println("c <<= 2 = " + c );
c >>= 2 ;
System.out.println("c >>= 2 = " + c );
c >>= 2 ;
System.out.println("c >>= 2 = " + c );
c &= a ;
System.out.println("c &= a = " + c );
c ^= a ;
System.out.println("c ^= a = " + c );
c |= a ;
System.out.println("c |= a = " + c );
}
}
This will produce the following result −
c = a + b = 30
c += a = 40
c -= a = 30
c *= a = 300
c /= a = 1
c %= a = 5
c <<= 2 = 20
c >>= 2 = 5
c >>= 2 = 1
c &= a = 0
c ^= a = 10
c |= a = 10
Python program to left rotate the elements of an array

When it is required to left rotate the elements of an array, the array can be iterated over and, for each of the required rotations, every element shifted one position to the left.
Below is a demonstration of the same −
my_list = [11, 12, 23, 34, 65]
n = 3
print("The list is : ")
for i in range(0, len(my_list)):
print(my_list[i])
for i in range(0, n):
first_elem = my_list[0]
for j in range(0, len(my_list)-1):
my_list[j] = my_list[j+1]
my_list[len(my_list)-1] = first_elem
print()
print("Array after left rotating is : ")
for i in range(0, len(my_list)):
print(my_list[i])
The list is :
11
12
23
34
65
Array after left rotating is :
34
65
11
12
23
A list is defined and displayed on the console.
The number of left rotations is defined.
For each rotation, the list is iterated over and every element is shifted one position to the left (the element at index j+1 is assigned to index j).
After each shifting pass, the first element (saved from the 0th index) is assigned to the last position.
This is the output that is displayed on the console.
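As an aside, the same left rotation can be expressed with slicing, avoiding the nested shifting loops entirely. This is an alternative sketch, not the program above:

```python
# Left rotation by n positions via slicing (same result as the loop version).
# Assumes a non-empty list.
def left_rotate(my_list, n):
    n %= len(my_list)  # wrap around when n exceeds the list length
    return my_list[n:] + my_list[:n]

print(left_rotate([11, 12, 23, 34, 65], 3))  # [34, 65, 11, 12, 23]
```

Slicing builds a new list rather than rotating in place, which is usually the more idiomatic choice in Python.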
C++ Program to Multiply two Matrices by Passing Matrix to Function | A matrix is a rectangular array of numbers that is arranged in the form of rows and columns.
An example of a matrix is as follows.
A 3*4 matrix has 3 rows and 4 columns as shown below.
8 6 3 5
7 1 9 2
5 1 9 8
A program that multiplies two matrices by passing the matrices to functions is as follows.
Live Demo
#include<iostream>
using namespace std;
void MatrixMultiplication(int a[2][3],int b[3][3]) {
int product[10][10], r1=2, c1=3, r2=3, c2=3, i, j, k;
if (c1 != r2) {
cout<<"Column of first matrix should be equal to row of second matrix";
} else {
cout<<"The first matrix is:"<<endl;
for(i=0; i<r1; ++i) {
for(j=0; j<c1; ++j)
cout<<a[i][j]<<" ";
cout<<endl;
}
cout<<endl;
cout<<"The second matrix is:"<<endl;
for(i=0; i<r2; ++i) {
for(j=0; j<c2; ++j)
cout<<b[i][j]<<" ";
cout<<endl;
}
cout<<endl;
for(i=0; i<r1; ++i)
for(j=0; j<c2; ++j) {
product[i][j] = 0;
}
for(i=0; i<r1; ++i)
for(j=0; j<c2; ++j)
for(k=0; k<c1; ++k) {
product[i][j]+=a[i][k]*b[k][j];
}
cout<<"Product of the two matrices is:"<<endl;
for(i=0; i<r1; ++i) {
for(j=0; j<c2; ++j)
cout<<product[i][j]<<" ";
cout<<endl;
}
}
}
int main() {
int a[2][3] = { {2, 4, 1} , {2, 3, 9} };
int b[3][3] = { {1, 2, 3} , {3, 6, 1} , {2, 9, 7} };
MatrixMultiplication(a,b);
return 0;
}
The first matrix is:
2 4 1
2 3 9
The second matrix is:
1 2 3
3 6 1
2 9 7
Product of the two matrices is:
16 37 17
29 103 72
In the above program, the two matrices a and b are initialized in the main() function as follows.
int a[2][3] = { {2, 4, 1} , {2, 3, 9} };
int b[3][3] = { {1, 2, 3} , {3, 6, 1} , {2, 9, 7} };
The function MatrixMultiplication() is called with the values of a and b. This is seen below.
MatrixMultiplication(a,b);
In the function MatrixMultiplication(), if the number of columns in the first matrix are not equal to the number of rows in the second matrix then multiplication cannot be performed. In this case an error message is printed. It is given as follows.
if (c1 != r2) {
cout<<"Column of first matrix should be equal to row of second matrix";
}
Both the matrices a and b are displayed using a nested for loop. This is demonstrated by the following code snippet.
cout<<"The first matrix is:"<<endl;
for(i=0; i<r1; ++i) {
for(j=0; j<c1; ++j)
cout<<a[i][j]<<" ";
cout<<endl;
}
cout<<endl;
cout<<"The second matrix is:"<<endl;
for(i=0; i<r2; ++i) {
for(j=0; j<c2; ++j)
cout<<b[i][j]<<" ";
cout<<endl;
}
cout<<endl;
After this, the product[][] matrix is initialized to 0. Then a nested for loop is used to find the product of the 2 matrices a and b. This is demonstrated in the below code snippet.
for(i=0; i<r1; ++i)
for(j=0; j<c2; ++j) {
product[i][j] = 0;
}
for(i=0; i<r1; ++i)
for(j=0; j<c2; ++j)
for(k=0; k<c1; ++k) {
product[i][j]+=a[i][k]*b[k][j];
}
After the product is obtained, it is printed. This is shown as follows.
cout<<"Product of the two matrices is:"<<endl;
for(i=0; i<r1; ++i) {
for(j=0; j<c2; ++j)
cout<<product[i][j]<<" ";
cout<<endl;
}
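The triple nested loop at the heart of MatrixMultiplication() is language-agnostic; the following is a minimal sketch of the same logic, shown in Python for brevity, reproducing the product printed above:

```python
# Sketch of the same triple-loop matrix product as the C++ function above.
def matmul(a, b):
    r1, c1, c2 = len(a), len(a[0]), len(b[0])
    assert c1 == len(b), "columns of first matrix must equal rows of second"
    product = [[0] * c2 for _ in range(r1)]
    for i in range(r1):
        for j in range(c2):
            for k in range(c1):
                product[i][j] += a[i][k] * b[k][j]
    return product

print(matmul([[2, 4, 1], [2, 3, 9]],
             [[1, 2, 3], [3, 6, 1], [2, 9, 7]]))
# [[16, 37, 17], [29, 103, 72]]
```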
Virtual Destructor in C++ | When deleting a derived class object through a pointer to a base class, the base class should define a virtual destructor; otherwise the behavior is undefined and the derived class destructor is typically never called.
#include<iostream>
using namespace std;
class b {
public:
b() {
cout<<"Constructing base \n";
}
virtual ~b() {
cout<<"Destructing base \n";
}
};
class d: public b {
public:
d() {
cout<<"Constructing derived \n";
}
~d() {
cout<<"Destructing derived \n";
}
};
int main(void) {
d *derived = new d();
b *bptr = derived;
delete bptr;
return 0;
}
Constructing base
Constructing derived
Destructing derived
Destructing base
Building Apps with Apache Cordova - GeeksforGeeks | 30 Jun, 2020
Apache Cordova is a hybrid mobile development framework used to create mobile apps out of progressive web applications. However, Apache Cordova builds mobile apps around a web view, so it cannot be used for native Android app development. The downside to web-view applications is that they perform more slowly than native applications, although the difference is rarely large enough to be noticeable.
First up, install Node.js for your respective computer specifications and set up npm environment variables.
Node Package Manager (npm) is used to easily install, upgrade, or uninstall packages on your computer. We have to install the Cordova package by typing the following command:
npm install -g cordova
-g indicates that the package is installed globally, meaning you can set up a Cordova project anywhere on your computer.
Then download the Android SDK and install it in your computer.
You have successfully installed Cordova and Android SDK in your computer. Now let’s create our project. In this article we will build a simple clock application using HTML, CSS and JavaScript.
Go to the folder in which you want to create your project in. Create your first project using the following command:
cordova create projectDirectory com.example.name ProjectName
com.example.name is the project ID, ProjectName is the name of the project and projectDirectory is the directory that is now created for building our Cordova app. Change the working directory to the project you just created.
cd projectDirectory
Now let’s add our platforms. It is to be noted that Cordova is a hybrid app development framework, which means that the same codebase can be deployed to multiple platforms such as Windows desktops, Android phones, and iOS phones. In this example, we are going to deploy to Android.
cordova platform add android
Note: If you want to develop for Apple iOS, you need to have XCode which can be installed only in MacOS desktops.
Now we have successfully created a project and added all the modules required for android.
All of the code files that we will be using live in a folder named www. Here we can see an index.html file, a js/index.js file, and a css/index.css file. Our entry point is index.html. Apache Cordova will already have created a simple starter template; we won’t be needing that since we are going to start coding from scratch. So, open a code editor and remove all the code from index.html, js/index.js and css/index.css.
Now let’s make a simple HTML page that has a div container where the clock is displayed, plus a heading. Link the stylesheet and JavaScript to the HTML. This gives us the basic view structure of the app, but it does nothing yet, so let’s add functionality with some JavaScript code.
HTML
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Geeks For Geeks Clock</title>
    <link rel="stylesheet" href="css/index.css">
</head>
<body>
    <div>
        <h2 id="Text">Geeks For Geeks</h2>
        <div id="ClockDisplay"></div>
    </div>
    <script src="js/index.js"></script>
</body>
</html>
Let’s add some functionality in js/index.js. Create a function named showTime which constructs a Date object, formats the current time, and sets the text of the ClockDisplay container. Then set the interval at which the function repeats. Here, the function repeats every 1000 ms, i.e. once per second.
Javascript
function showTime() {
    var date = new Date();
    var h = date.getHours();
    var m = date.getMinutes();
    var s = date.getSeconds();
    var time = h + ":" + m + ":" + s;
    // textContent alone is enough; setting innerText as well is redundant
    document.getElementById("ClockDisplay").textContent = time;
}
setInterval(showTime, 1000);
Now we have a very dull looking clock. So let’s add some CSS in css/index.css in to make it look better.
CSS
html {
    height: 100%;
}
body {
    height: 100%;
    display: flex;
    align-items: center;
    justify-content: center;
    background-color: black;
    font-family: 'Open Sans', sans-serif;
}
#ClockDisplay {
    width: 80%;
    text-align: center;
    font-size: 19vw; /* the earlier duplicate font-size: 20px was overridden, so it is removed */
    color: #acfac1;
}
#Text {
    color: white;
    font-family: 'Open Sans', sans-serif;
    text-align: center;
    font-size: 30px;
}
Open index.html in a browser to see it working. Now we move to the next stage: building the Android Application Package (the .apk file).
Apache Cordova makes it really simple to build applications. Open terminal, and change the directory to the Cordova project directory. Just type the following commands to build:
cordova build android
The build process will take some time and save the output file in “(projectfolder)\project\platforms\android\app\build\outputs\apk\debug” as “app-debug.apk”.
Transfer this file to your mobile and install it.
This article aims in teaching the basics of Cordova, however the same can be extended to build much more complex applications.
How to use selenium to check if element contains specific class attribute? | We can check if an element contains a specific class attribute value. The getAttribute() method is used to get the value of the class attribute. We need to pass class as a parameter to the getAttribute() method.
First of all we need to identify the element with help of any of the locators like id, class, name, xpath or css. Then obtain the class value of the attribute. Finally, we need to check if the class attribute contains a specific value.
Let us take an element whose class attribute contains the value tp-logo, such as the site logo image targeted in the code below.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;
public class SpecificClassVal{
public static void main(String[] args) {
System.setProperty("webdriver.chrome.driver","C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
driver.get("https://www.tutorialspoint.com/about/about_careers.htm");
// identify element
WebElement t=driver.findElement(By.xpath("//img[@class='tp-logo']"));
// get class attribute with getAttribute()
String clsVal = t.getAttribute("class");
      //check whether any token of the class value equals the specific value
      boolean found = false;
      for (String i : clsVal.split(" ")) {
         if (i.equals("tp-logo")) {
            found = true;
         }
      }
      if (found) {
         System.out.println("class attribute contains: " + clsVal);
      } else {
         System.out.println("class attribute does not contain: " + clsVal);
      }
driver.close();
}
}
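The core of the check, splitting the class attribute on spaces and comparing each token, does not depend on the browser at all. A quick sketch in Python, where cls_val is a made-up stand-in for what get_attribute("class") would return:

```python
# Stand-in for the string get_attribute("class") would return in a real run.
cls_val = "tp-logo header-image"

# An element "has" a class only if one whole space-separated token matches,
# which is why we split instead of doing a substring search.
has_class = "tp-logo" in cls_val.split()
print(has_class)  # True
```

Token comparison avoids false positives such as matching "logo" inside "tp-logo".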
Lint and its Usage in Android Studio - GeeksforGeeks | 08 Sep, 2021
As Android Developers, we all utilize Android Studio to create our apps. There are numerous alternative editors that can be used for Android app development, but what draws us to Android Studio is the assistance that it offers to Android developers. The assistance may take the shape of auto-suggestions or the display of faults in our code (in all the files present in our project). So, in this article, we’ll look at Lint, one of the greatest aspects of Android Studio that helps us enhance our ability to write error-free code.
This article will cover the following topics:
What exactly is lint?
Lint configuration
Using Baseline to Assist Lint Removal
Lint is a code scanning tool supplied by Android Studio that identifies, suggests, and corrects incorrect or dangerous code in a project.
GeekTip #1: Lint functions similarly to a full-fledged stack analysis framework.
We’ve all been using Lint since we first started using Android Studio, because every project ships with Lint support by default. Lint notifies you of any errors found in your code, along with a warning level and suggestions you can use to fix the code. Lint’s biggest strength is its flexibility: if you configure it to check only certain types of error, it will report only those. Android Studio automatically runs the inspection process when you build your project, but you can also inspect your code manually or from the command line using Lint.
Lint removal may be broken down into three steps:
Making a lint.xml file: In the lint.xml file, you may modify the Lint checks. You can write the checks you want to include in this file and disregard the checks you don’t want to include. For example, if you wish to check for unused variables but not for naming problems, you may do so in the lint.xml file. Aside from that, you may manually configure the Lint checks. In the following section of this article, we will look at how to perform manual lint checks.
Lint Inspection: The next step is to choose the source files that will be subjected to the Lint inspection. It may be your project’s.java file, .kt file, or any XML file.
The Lint Remover: Finally, the lint tool examines the source and lint.xml files for structural code issues and, if any, suggests code changes. It is recommended that we apply the lint recommendation before releasing our program.
When Should You Use Lint?
If you wish to publish your app on the Play Store or any other app store, it must be error-free. You must conduct a great deal of manual testing on your app for this purpose.
GeekTip #2: However, if you wish to eliminate part of the manual testing, you may incorporate Lint into your project.
Lint will detect the issues and propose solutions if you check each and every file in your code for faults. Errors or warnings can be of the following types:
Variables that have not been utilized
Unjustifiable exceptions
Imports that aren’t needed for the project, and much more
So, before you publish your app, use Lint to thoroughly check your code.
You may also set up lint checking at multiple layers of your project:
Across the board (entire project)
Module for the project
The production module
The testing module
Open files
A class hierarchy
Version Control System (VCS) scopes
To utilize Lint or just run inspections in your project, add Lint inspection to the lint.xml file or manually pick the list of issues to be configured by Lint in your project using Android Studio.
Configure the lint file:
Add the list of issues to be configured in the lint.xml file to define manual inspections in your app. If you create a new lint.xml file, place it in the root directory of your Android project.
Here’s an example of a lint.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<lint>
<issue id="GeeksIconMissing" severity="error" />
<issue id="OldDimens">
<ignore path="res/layout/merger.xml" />
<ignore path="res/layout-xlarge/merger.xml" />
</issue>
<issue id="HellOWorld">
<ignore path="res/layout/main.xml" />
</issue>
<issue id="someText" severity="ignore" />
</lint>
Manually configure the lint:
By default, lint checks for a handful of problems but not all of them. This is not done since running all of the problem checks that lint may check for will slow down the speed of Android Studio. As a result, Android Studio only employs a restricted amount of lint checks by default. However, you may add and remove checks from the lint by following the steps below:
Go to Files > Settings > Editor > Inspections, and then tick the problem checks you want the lint to execute.
Lint Removal
There are instances when you are writing dangerous or error-prone code, yet the lint does not report any errors or warnings. For instance, in Android Studio, enter the following code:
Kotlin
fun someUIUpdate() {
    // your UI code goes here
    processChanges()
}

fun processChanges() {
    // Geeks for geeks
}
The preceding lines of code will not display any errors, although they should logically display some errors because network requests should not be made during UI updates. So, what you can do is assist the lint.
Geek Tip #3: Yes, if you assist the lint, the lint will assist you.
Always attempt to utilize annotations in your project to help Lint understand the code more precisely. Now write the same code that you did before, this time with annotations, and then check for errors:
Kotlin
@UiThread
fun someUIUpdate() {
    // your UI code goes here
    processChanges()
}

@WorkerThread
fun processChanges() {
    // Geeks for Geeks
}
If you are working on a large project and want to identify future mistakes that may arise while adding additional codes to your project, you can set a baseline to your project and the lint will create the errors that happened after that baseline. As a result, lint will disregard prior code problems and only alert you about new lines of code introduced after the baseline.
To include a baseline in your project, add the following line to the build.gradle file:
android {
lintOptions {
baseline file("lint-geeksforgeeks-example.xml")
}
}
This will generate the baseline file named in the configuration (lint-geeksforgeeks-example.xml in this example), which will serve as the baseline for your project. To create a fresh baseline, delete the file and run lint once more.
We learned how to use the Lint in Android Studio in this article. We discovered that if we want to examine our code, we don’t have to do it manually. Lint, a code checking tool provided by Android Studio, assists us in cleaning our code and using the essential and correct code for application development. So, utilize Lint to remove various sorts of errors from your project while also assisting Lint to assist you.
Node.js __filename object - GeeksforGeeks | 06 Apr, 2021
The __filename variable in Node.js returns the filename of the code being executed, as an absolute path. The following approach covers how to use __filename in a Node.js project.
Syntax:
console.log(__filename)
Prerequisites:
Basic knowledge of Node.js
Node.js installed (version 12+)
NPM installed (version 6+)
Return Value: It returns the absolute filename of the current module.
Example 1: Create a JavaScript file index.js and write down the following code:
index.js
// Node.js code to demonstrate the absolute
// file name of the current module.
console.log("Filename of the current file is: ", __filename);
Run the index.js file using the following command:
node index.js
Output:
Filename of the current file is:
C:\Users\Pallavi\Desktop\node_func\app.js
Example 2:
index.js
// Node.js code to demonstrate the absolute
// file name of the current module.
console.log(__filename);

// Show the individual parts of the path using the filename.
const parts = __filename.split(/[\\/]/);
console.log("These are all the parts present in the file path:", parts);
Output:
C:\Users\Pallavi\Desktop\node_func\app.js
These are all the parts present in the file path:
[ 'C:', 'Users', 'Pallavi', 'Desktop',
  'node_func', 'app.js' ]
Reference: https://nodejs.org/api/globals.html#globals_filename
Create a CICD Pipeline with GitHub Actions | Machine Learning Pipeline | Build CICD Pipeline | Towards Data Science | What if I told you “You can automate the process of building, testing, delivering, or deploying your Machine Learning models into production”?
GitHub, the world’s most popular hosted repository service, provides an integrated way to design and automate workflows through GitHub Actions. With Actions, events that take place in your GitHub repository, such as pushes, pull requests, and releases, are used as triggers to kick off workflows. These workflows are coded in YAML format.
Workflows are nothing but the steps we follow while bringing an application into production which includes unit testing, integration testing, building artifacts, sanity check, and deploying. In this article, I am going to introduce you to GitHub Actions and show you how to build your workflow to deploy a Machine Learning Application.
Let’s Get Started!!
Create a repository on GitHub, if you don’t have an account I recommend you to create one. Every repository on GitHub now supports GitHub Actions features.
Click on “Actions”. If we already have any files in our repository, GitHub will recommend some predefined workflows. Here we will create our workflow. Click on “set up a workflow yourself”.
As I mentioned earlier, GitHub workflows are coded in YAML format, here we define the jobs, steps, and the pipeline. Before getting started let’s have a glimpse of the attributes we use in the YAML file. Below is a simple template to print “Hello, World!!”
The basic attributes we use in any workflow are:
name — The name of your workflow (optional)
on — GitHub event that triggers the flow. It can be repository events (push, pull requests, release) or webhooks, branch creation, issues, or members joining the repository.
jobs — Workflows must have at least one job. Each job must have an identifier, it’s like a name for the task we perform say “build”, “test” etc.
runs-on — The type of machine needed to run the job. These environments are called Runners. Some of them are windows server 2019, ubuntu 18.04, macOS Catalina 10.15, etc.
steps — a list of all the commands or actions. Each step runs its process.
uses — identifies an action to use, defines the location of that action. An action can be uploading and downloading artifacts or checkout or configure any cloud account. You can find various actions at GitHub MarketPlace, with categories including Continuous Integration, Deployment, Project management, Testing, Security, etc. I really suggest you to explore various actions, also we can publish our custom actions on the Marketplace.
run — runs the command in the virtual environment shell.
name — an optional name identifier for the step.
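Putting those attributes together, a minimal workflow of the kind described, printing "Hello, World!!" on every push, might look like the following sketch (the article's embedded template is not reproduced in this text, so all names here are illustrative):

```yaml
# .github/workflows/hello.yml - a minimal illustrative workflow
name: hello-world
on: push
jobs:
  build:
    runs-on: ubuntu-latest   # any supported runner works here
    steps:
      - name: Say hello
        run: echo "Hello, World!!"
```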
After configuring your workflow with the steps and jobs as per your wish, commit the pipeline — YAML file. The workflow will start running now. Now that we know the terminology of GitHub Actions, let’s start building the workflow for a Machine Learning Application.
I have developed a machine learning application that will predict the profit of a startup by taking some inputs from the user.
Firstly, I trained the model with the required dataset, tested it, and validated it. Then I created a Flask server that serves the requests from users. I pushed the project directory to GitHub. The GitHub repository should look something like this:
I want to create a workflow with the following four stages. You can also define your workflow accordingly.
Linting
Build a Docker Image and push to Google’s Container Registry.
Testing — Download the Docker image from Container Registry and Test it.
Deploy the Docker image on Cloud Run.
To reduce errors and improve the overall quality of your code, linting is necessary. Used effectively, lint tools speed up development by reducing the cost of finding errors. In this stage, we are doing unit testing and checking the code style.
Now we will create the workflow, as shown above, and we will do it step by step.
In this stage, we need to define the environment (Ubuntu in my case), check out the repository in that environment, and install all the dependencies or requirements required for the model to run. Also, I used Pylint and flake8 for checking the code base against coding style (PEP8), programming errors, and to check cyclomatic complexity.
Now that we have tested our code against certain coding styles, let’s build a Docker Image. Create a Dockerfile as shown below
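The Dockerfile itself is not shown in this text. A plausible minimal version for a Flask app might look like the sketch below; the base image, entry point (app.py), and port are assumptions, not the article's actual file:

```dockerfile
# Illustrative Dockerfile for a small Flask service (names are assumptions).
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```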
We build a docker image for the application using a Dockerfile. This artifact (docker image) needs to be stored somewhere, where all the different versions of our application are present. This will ensure that you are delivering your software in successive versions continuously. We have various tools to store artifacts few of which are — JFrog Artifactory, Amazon ECR, DockerHub, Google’s Container Registry.
Now, I am going to push the docker image to Google’s Container Registry. So we need to set up the GCP environment which requires sensitive information like passwords and API keys. GitHub allows us to store secrets and we can access those secrets as variables.
${{ secrets.GCP_SERVICE_ACCT_KEY }}
To create a secret go to settings and select secrets.
You will be redirected to the secrets section. Click on “New secret”. Give the secret a name and add value. Click on “Add secret”
I have updated all of the required secrets.
Once the environment is set up, our job will start to build a docker image and upload it to Google’s container Registry (make sure you enable the API for the Container registry on the Google Cloud Platform).
After executing the above stages, we will have a docker image at our service stored on the container registry. In this stage, we are going to download the image and run it against various test cases. In the real scenarios, we may use different versions of docker images and we specify the versions we want to release. So we download the image from the registry rather than use an image built from the previous stage.
Testing is required for the effective performance of software applications or products. Here we can test our Flask server by checking if it is returning 200 status code i.e., a proof of successful run. Also, as we can test and validate our machine learning model, calculate the accuracy and give a threshold level for accuracy and allow the job to continue to deploy the model only if the accuracy is greater than 95% (or any other threshold).
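The "server returns 200" idea can be sketched end to end with only the standard library; here a throwaway HTTP server stands in for the Flask container (in the real pipeline you would hit the service started from the downloaded image):

```python
# Self-contained sketch of the 200-status smoke test, standard library only.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), OkHandler)  # port 0 picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_port
status = urllib.request.urlopen(url).status
print("status:", status)  # status: 200
server.shutdown()
```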
Once the testing stage is finished with all test cases passed, our next step is to deploy the artifact or docker image. For deploying the docker image, I have used Cloud Run.
That’s all! Soon after executing this stage, the application will be in production.
Now we have seen the entire workflow, let’s start creating our action. You can copy the workflow file from here.
Click on Start commit and give a commit message. Commit the file. Go to the Actions section. You can see that the workflow has started to execute.
After running all jobs successfully, you will see the following output. We can check the console output for each stage and we can also check the logs for debugging.
Now if we go to the Container Registry, we can view the artifacts that got pushed.
Go to Cloud Run under COMPUTE Section on GCP Navigation Menu. A service with the given name is created. The machine learning model is in production. You can access the service through the endpoint now!
The final application (in my case) looks something like this:
This workflow will get triggered every time you push the changes to code. This is a basic pipeline. You can build a pipeline with various stages — Getting Data through API calls, Perform Feature Engineering, Model Training, Building or storing models (artifacts), Testing, and Deploying to production.
The full code can be found on GitHub here.
We have seen the process of setting up a workflow that automates the deployment of a Machine Learning Application using GitHub Actions. It enables us to create our actions and combine them to create a workflow for any software application.
I’ll be happy to hear suggestions if you have any. I will come back with another intriguing topic very soon. Till then, Stay Home, Stay Safe, and keep exploring!
If you would like to get in touch, connect with me on LinkedIn. | [
Apache Flume - Environment | We already discussed the architecture of Flume in the previous chapter. In this chapter, let us see how to download and setup Apache Flume.
Before proceeding further, you need to have a Java environment in your system. So first of all, make sure you have Java installed in your system. For some examples in this tutorial, we have used Hadoop HDFS (as sink). Therefore, we recommend that you install Hadoop along with Java. To collect more information, follow the link − http://www.tutorialspoint.com/hadoop/hadoop_enviornment_setup.htm
First of all, download the latest version of Apache Flume software from the website https://flume.apache.org/.
Open the website. Click on the download link on the left-hand side of the home page. It will take you to the download page of Apache Flume.
In the Download page, you can see the links for binary and source files of Apache Flume. Click on the link apache-flume-1.6.0-bin.tar.gz
You will be redirected to a list of mirrors where you can start your download by clicking any of these mirrors. In the same way, you can download the source code of Apache Flume by clicking on apache-flume-1.6.0-src.tar.gz.
Create a directory with the name Flume in the same directory where the installation directories of Hadoop, HBase, and other software were installed (if you have already installed any) as shown below.
$ mkdir Flume
Extract the downloaded tar files as shown below.
$ cd Downloads/
$ tar zxvf apache-flume-1.6.0-bin.tar.gz
$ tar zxvf apache-flume-1.6.0-src.tar.gz
Move the content of apache-flume-1.6.0-bin.tar file to the Flume directory created earlier as shown below. (Assume we have created the Flume directory in the local user named Hadoop.)
$ mv apache-flume-1.6.0-bin.tar/* /home/Hadoop/Flume/
To configure Flume, we have to modify three files, namely flume-env.sh, flume-conf.properties, and .bashrc.
In the .bashrc file, set the home folder, the path, and the classpath for Flume as shown below.
If you open the conf folder of Apache Flume, you will have the following four files −
flume-conf.properties.template,
flume-env.sh.template,
flume-env.ps1.template, and
log4j.properties.
Now rename

flume-conf.properties.template as flume-conf.properties, and

flume-env.sh.template as flume-env.sh
Open flume-env.sh file and set the JAVA_Home to the folder where Java was installed in your system.
Verify the installation of Apache Flume by browsing through the bin folder and typing the following command.
$ ./flume-ng
If you have successfully installed Flume, you will get a help prompt of Flume as shown below.
Addition and Concatenation Using + Operator in Java - GeeksforGeeks | 07 Dec, 2021
So far in Java we have used the + operator only with integral values, where it behaves as expected: the numbers are added at the binary level and the result is printed at the console in the usual decimal form. But what happens if we use the + operator between an integer and a String?
Example
Java
// Java Program to Illustrate Addition and Concatenation

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Print statements to illustrate addition and
        // concatenation using the + operator over string
        // and integer combinations
        System.out.println(2 + 0 + 1 + 6 + "GeeksforGeeks");
        System.out.println("GeeksforGeeks" + 2 + 0 + 1 + 6);
        System.out.println(2 + 0 + 1 + 5 + "GeeksforGeeks" + 2 + 0 + 1 + 6);
        System.out.println(2 + 0 + 1 + 5 + "GeeksforGeeks" + (2 + 0 + 1 + 6));
    }
}
9GeeksforGeeks
GeeksforGeeks2016
8GeeksforGeeks2016
8GeeksforGeeks9
Output Explanation: This unpredictable output is due to the fact that the compiler evaluates the given expression from left to right given that the operators have the same precedence. Once it encounters the String, it considers the rest of the expression as of a String (again based on the precedence order of the expression).
System.out.println(2 + 0 + 1 + 6 + "GeeksforGeeks");
It prints the addition of 2,0,1 and 6 which is equal to 9
System.out.println("GeeksforGeeks" + 2 + 0 + 1 + 6);
It prints the concatenation of "GeeksforGeeks" with 2, 0, 1 and 6, giving GeeksforGeeks2016: since the String operand is encountered first, every subsequent + is treated as string concatenation and the remaining integers are converted to text.
System.out.println(2 + 0 + 1 + 5 + "GeeksforGeeks" + 2 + 0 + 1 + 6);
It prints the addition of 2, 0, 1 and 5 (giving 8), followed by the concatenation of 2, 0, 1 and 6, as explained in the examples above.
System.out.println(2 + 0 + 1 + 5 + "GeeksforGeeks" + (2 + 0 + 1 + 6));
It prints the addition of 2, 0, 1 and 5 and also the addition of 2, 0, 1 and 6, due to the precedence of ( ) over +. The expression inside the parentheses is evaluated first (giving 9), and only then does the concatenation take place.
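The left-to-right rule above can be mimicked outside Java. The helper below (a hypothetical name, written in Python for illustration) applies + pairwise and switches to concatenation as soon as a string operand appears, reproducing the outputs shown:

```python
# Mimic Java's left-to-right evaluation of a mixed int/String "+" chain:
# integers are added until a string operand is met, after which every
# following "+" concatenates text.
def java_plus_chain(*operands):
    result = operands[0]
    for op in operands[1:]:
        if isinstance(result, str) or isinstance(op, str):
            result = str(result) + str(op)
        else:
            result = result + op
    return result

print(java_plus_chain(2, 0, 1, 6, "GeeksforGeeks"))  # 9GeeksforGeeks
print(java_plus_chain("GeeksforGeeks", 2, 0, 1, 6))  # GeeksforGeeks2016
```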
This article is contributed by Pranjal Mathur. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.
How to convert a string in an R data frame to NA? | We often see mistakes in data collection processes and these mistakes might lead to incorrect results of the research. When the data is collected with mistakes, it makes the job of analyst difficult. One of the situations, that shows the data has mistakes is getting strings in place of numerical values. Therefore, we need to convert these strings to NA in R so that we can proceed with our intended analysis.
Consider the below data frame −
> x1<-rep(c(1,3,6,7,5,2,"XYZ",12,4,5),times=2)
> x2<-rep(c(67,"XYZ",45,32,52),each=4)
> df<-data.frame(x1,x2)
> df
x1 x2
1 1 67
2 3 67
3 6 67
4 7 67
5 5 XYZ
6 2 XYZ
7 XYZ XYZ
8 12 XYZ
9 4 45
10 5 45
11 1 45
12 3 45
13 6 32
14 7 32
15 5 32
16 2 32
17 XYZ 52
18 12 52
19 4 52
20 5 52
Converting all XYZ’s to NA −
> df[df=="XYZ"]<-NA
> df
x1 x2
1 1 67
2 3 67
3 6 67
4 7 67
5 5 <NA>
6 2 <NA>
7 <NA> <NA>
8 12 <NA>
9 4 45
10 5 45
11 1 45
12 3 45
13 6 32
14 7 32
15 5 32
16 2 32
17 <NA> 52
18 12 52
19 4 52
20 5 52
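For comparison only (this is not R), the same sentinel-to-missing replacement can be sketched in Python over a small column-oriented table; None plays the role of NA, and the column names simply mirror the frame above:

```python
# Replace the sentinel string "XYZ" with None (Python's analogue of NA)
# in every column of a column-oriented table.
def strings_to_na(table, sentinel="XYZ"):
    return {col: [None if v == sentinel else v for v in values]
            for col, values in table.items()}

df = {"x1": [1, 3, 6, "XYZ", 12], "x2": [67, "XYZ", 45, 32, 52]}
clean = strings_to_na(df)
print(clean["x1"])  # [1, 3, 6, None, 12]
```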
Let’s have a look at one more example −
> ID<-c("Class",1:19)
> Group<-rep(c("Class",2,3,4,5),times=4)
> df1<-data.frame(ID,Group)
> df1
ID Group
1 Class Class
2 1 2
3 2 3
4 3 4
5 4 5
6 5 Class
7 6 2
8 7 3
9 8 4
10 9 5
11 10 Class
12 11 2
13 12 3
14 13 4
15 14 5
16 15 Class
17 16 2
18 17 3
19 18 4
20 19 5
> df1[df1=="Class"]<-NA
> df1
ID Group
1 <NA> <NA>
2 1 2
3 2 3
4 3 4
5 4 5
6 5 <NA>
7 6 2
8 7 3
9 8 4
10 9 5
11 10 <NA>
12 11 2
13 12 3
14 13 4
15 14 5
16 15 <NA>
17 16 2
18 17 3
19 18 4
20 19 5 | [
Merge two binary Max heaps | Practice | GeeksforGeeks | Given two binary max heaps as arrays, merge the given heaps to form a new max heap.
Example 1:
Input :
n = 4 m = 3
a[] = {10, 5, 6, 2},
b[] = {12, 7, 9}
Output :
{12, 10, 9, 2, 5, 7, 6}
Explanation : {12, 10, 9, 2, 5, 7, 6} is one valid max heap containing all the elements of both input heaps (multiple correct answers are possible).
Your Task:
You don't need to read input or print anything. Your task is to complete the function mergeHeaps() which takes the array a[], b[], its size n and m, as inputs and return the merged max heap. Since there can be multiple solutions, therefore, to check for the correctness of your solution, your answer will be checked by the driver code and will return 1 if it is correct, else it returns 0.
Expected Time Complexity: O(n.Logn)
Expected Auxiliary Space: O(n + m)
Constraints:
1 <= n, m <= 10^5
1 <= a[i], b[i] <= 2*10^5
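Before looking at the submitted solutions, here is a compact reference sketch in Python. heapq provides only a min-heap, so values are negated to simulate a max heap; popping everything in descending order yields an array that satisfies the max-heap property:

```python
import heapq

def merge_max_heaps(a, b):
    # Negate every element so heapq's min-heap behaves as a max heap.
    merged = [-x for x in a] + [-x for x in b]
    heapq.heapify(merged)
    # Pop in descending order; a descending array is a valid max heap.
    return [-heapq.heappop(merged) for _ in range(len(merged))]

print(merge_max_heaps([10, 5, 6, 2], [12, 7, 9]))  # [12, 10, 9, 7, 6, 5, 2]
```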
0
rayalravi20011 week ago
c++ solution
priority_queue<int> pq(a.begin(), a.end());
for (auto i : b) {
    pq.push(i);
}
vector<int> res;
while (!pq.empty()) {
    res.push_back(pq.top());
    pq.pop();
}
return res;
+1
superdude2 weeks ago
Well the arrays are somewhat sorted so we can simply merge them like merging two sorted arrays. This algorithm might not be exactly what one wants when they ask you to do this question but it is a solution.
+1
koulikmaity3 weeks ago
void heapify(vector<int> &arr, int n, int i) {
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;

    if (left < n && arr[left] > arr[largest])
        largest = left;

    if (right < n && arr[right] > arr[largest])
        largest = right;

    if (largest != i) {
        swap(arr[i], arr[largest]);
        heapify(arr, n, largest);
    }
}

vector<int> mergeHeaps(vector<int> &a, vector<int> &b, int n, int m) {
    // merge the two vectors into a single vector
    vector<int> ans;
    for (auto i : a) {
        ans.push_back(i);
    }
    for (auto i : b) {
        ans.push_back(i);
    }
    // then heapify from the last internal node down
    int size = ans.size();
    for (int i = size / 2 - 1; i >= 0; i--) {
        heapify(ans, size, i);
    }
    return ans;
}
+2
dugguanshuman1 month ago
Simple solution without using priority_queue STL:
void insert_heap(vector <int> &arr, int i){
if(i==1)
return;
if(arr[i/2-1]<arr[i-1]){
swap(arr[i-1], arr[i/2-1]);
insert_heap(arr, i/2);
}
}
vector<int> mergeHeaps(vector<int> &a, vector<int> &b, int n, int m) {
while(!b.empty()){
a.push_back(b.back());
b.pop_back();
insert_heap(a, a.size());
}
return a;
}
0
harshchittora20012 months ago
C++ SOLUTION USING PRIORITY_QUEUE
vector<int> mergeHeaps(vector<int> &a, vector<int> &b, int n, int m) {
    // your code here
    priority_queue<int> m1(a.begin(), a.end());
    priority_queue<int> m2(b.begin(), b.end());
    vector<int> v;
    while (!m2.empty()) {
        m1.push(m2.top());
        m2.pop();
    }
    while (!m1.empty()) {
        v.push_back(m1.top());
        m1.pop();
    }
    return v;
}
0
singh_manish2 months ago
class Solution {
public:
    void Heapify(int arr[], int i, int n) {
        int j = 2 * i + 1;
        while (j < n) {
            if (j + 1 < n && arr[j + 1] > arr[j])
                j++;
            if (arr[j] > arr[i]) {
                swap(arr[j], arr[i]);
                i = j;
                j = 2 * i + 1;
            } else
                break;
        }
    }

    vector<int> mergeHeaps(vector<int> &a, vector<int> &b, int n, int m) {
        // your code here
        vector<int> ans;
        int v[n + m];
        for (int i = 0; i < n; i++) v[i] = a[i];
        for (int i = 0; i < m; i++) v[i + n] = b[i];
        for (int i = n + m; i >= 0; i--) Heapify(v, i, n + m);
        for (int i = 0; i < n + m; i++) ans.push_back(v[i]);
        return ans;
    }
};
0
patildhiren442 months ago
Java - 1.3
public int[] mergeHeaps(int[] a, int[] b, int n, int m) {
// your code here
PriorityQueue<Integer> pq = new PriorityQueue<>(Collections.reverseOrder());
for (int i = 0; i < a.length; i++) {
pq.add(a[i]);
}
for (int i = 0; i < b.length; i++) {
pq.add(b[i]);
}
int[]res = new int[n+m];
int ind=0;
while(!pq.isEmpty()){
res[ind] = pq.remove();
ind++;
}
return res;
}
+1
yashchawla1162 months ago
Simple To Understand And Easy To Implement.
https://yashboss116.blogspot.com/2022/03/merge-two-binary-max-heaps-geeks-for.html
-1
ayushnautiyal11103 months ago
vector<int> mergeHeaps(vector<int> &a, vector<int> &b, int n, int m) {
    for (int i = 0; i < m; i++) {
        a.push_back(b[i]);
    }
    sort(a.begin(), a.end());
    reverse(a.begin(), a.end());
    return a;
}
+2
aloksinghbais023 months ago
C++ solution having time complexity as O(N*log(N)) and space complexity as O(N+M) is as follows :-
Execution Time :- 0.8 / 2.1 sec
void heapify(vector<int> &arr, int n, int i) {
    while (true) {
        int largest = i;
        int left = 2 * i + 1;
        int right = 2 * i + 2;
        if (left < n && arr[left] > arr[largest]) {
            largest = left;
        }
        if (right < n && arr[right] > arr[largest]) {
            largest = right;
        }
        if (largest != i) {
            swap(arr[i], arr[largest]);
            i = largest;
        } else {
            break;
        }
    }
}

vector<int> mergeHeaps(vector<int> &a, vector<int> &b, int n, int m) {
    vector<int> ans(a);
    for (auto x : b) ans.push_back(x);
    for (int i = ans.size() / 2 - 1; i >= 0; i--) {
        heapify(ans, ans.size(), i);
    }
    return ans;
}
Amazon Web Services - DynamoDB | Amazon DynamoDB is a fully managed NoSQL database service that allows to create database tables that can store and retrieve any amount of data. It automatically manages the data traffic of tables over multiple servers and maintains performance. It also relieves the customers from the burden of operating and scaling a distributed database. Hence, hardware provisioning, setup, configuration, replication, software patching, cluster scaling, etc. is managed by Amazon.
Following are the steps to set up DynamoDB.
Step 1 − Download and run DynamoDB locally.

Download DynamoDB (.jar file) using one of the following links. It supports multiple operating systems like Windows, Linux, Mac, etc.

.tar.gz format − http://dynamodb-local.s3-website-us-west2.amazonaws.com/dynamodb_local_latest.tar.gz

.zip format − http://dynamodb-local.s3-website-us-west2.amazonaws.com/dynamodb_local_latest.zip

Once the download is complete, extract the contents and copy the extracted directory to a location of your choice.

Open the command prompt, navigate to the directory where you extracted DynamoDBLocal.jar, and execute the following command −

java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

You now have access to the built-in JavaScript shell.
Step 2 − Create a table using the following steps.

Open the AWS Management Console and select DynamoDB.

Select the region where the table will be created and click the Create Table button.

The Create Table window opens. Fill in the details in their respective fields and click the Continue button.

Finally, a review page opens where we can verify the details. Click the Create button.

The table name is now visible in the list, and the DynamoDB table is ready to use.
Managed service − Amazon DynamoDB is a managed service. There is no need to hire experts to manage NoSQL installation. Developers need not worry about setting up, configuring a distributed database cluster, managing ongoing cluster operations, etc. It handles all the complexities of scaling, partitions and re-partitions data over more machine resources to meet I/O performance requirements.
Scalable − Amazon DynamoDB is designed to scale. There is no need to worry about predefined limits to the amount of data each table can store. Any amount of data can be stored and retrieved. DynamoDB will spread automatically with the amount of data stored as the table grows.
Fast − Amazon DynamoDB provides high throughput at very low latency. As datasets grow, latencies remain stable due to the distributed nature of DynamoDB's data placement and request routing algorithms.
Durable and highly available − Amazon DynamoDB replicates data over at least 3 different data centers’ results. The system operates and serves data even under various failure conditions.
Flexible − Amazon DynamoDB allows creation of dynamic tables, i.e. a table can have any number of attributes, including multi-valued attributes.

Cost-effective − Payment is for what we use, without any minimum charges. Its pricing structure is simple and easy to calculate.
C# - Array Class | The Array class is the base class for all the arrays in C#. It is defined in the System namespace. The Array class provides various properties and methods to work with arrays.
The following table describes some of the most commonly used properties of the Array class −
IsFixedSize
Gets a value indicating whether the Array has a fixed size.
IsReadOnly
Gets a value indicating whether the Array is read-only.
Length
Gets a 32-bit integer that represents the total number of elements in all the dimensions of the Array.
LongLength
Gets a 64-bit integer that represents the total number of elements in all the dimensions of the Array.
Rank
Gets the rank (number of dimensions) of the Array.
The following table describes some of the most commonly used methods of the Array class −
Clear
Sets a range of elements in the Array to zero, to false, or to null, depending on the element type.
Copy(Array, Array, Int32)
Copies a range of elements from an Array starting at the first element and pastes them into another Array starting at the first element. The length is specified as a 32-bit integer.
CopyTo(Array, Int32)
Copies all the elements of the current one-dimensional Array to the specified one-dimensional Array starting at the specified destination Array index. The index is specified as a 32-bit integer.
GetLength
Gets a 32-bit integer that represents the number of elements in the specified dimension of the Array.
GetLongLength
Gets a 64-bit integer that represents the number of elements in the specified dimension of the Array.
GetLowerBound
Gets the lower bound of the specified dimension in the Array.
GetType
Gets the Type of the current instance. (Inherited from Object.)
GetUpperBound
Gets the upper bound of the specified dimension in the Array.
GetValue(Int32)
Gets the value at the specified position in the one-dimensional Array. The index is specified as a 32-bit integer.
IndexOf(Array, Object)
Searches for the specified object and returns the index of the first occurrence within the entire one-dimensional Array.
Reverse(Array)
Reverses the sequence of the elements in the entire one-dimensional Array.
SetValue(Object, Int32)
Sets a value to the element at the specified position in the one-dimensional Array. The index is specified as a 32-bit integer.
Sort(Array)
Sorts the elements in an entire one-dimensional Array using the IComparable implementation of each element of the Array.
ToString
Returns a string that represents the current object. (Inherited from Object.)
For complete list of Array class properties and methods, please consult Microsoft documentation on C#.
The following program demonstrates use of some of the methods of the Array class −
using System;
namespace ArrayApplication {
class MyArray {
static void Main(string[] args) {
int[] list = { 34, 72, 13, 44, 25, 30, 10 };
int[] temp = list;
Console.Write("Original Array: ");
foreach (int i in list) {
Console.Write(i + " ");
}
Console.WriteLine();
// reverse the array
Array.Reverse(temp);
Console.Write("Reversed Array: ");
foreach (int i in temp) {
Console.Write(i + " ");
}
Console.WriteLine();
//sort the array
Array.Sort(list);
Console.Write("Sorted Array: ");
foreach (int i in list) {
Console.Write(i + " ");
}
Console.WriteLine();
Console.ReadKey();
}
}
}
When the above code is compiled and executed, it produces the following result −
Original Array: 34 72 13 44 25 30 10
Reversed Array: 10 30 25 44 13 72 34
Sorted Array: 10 13 25 30 34 44 72
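For comparison, the same reverse/sort demonstration in Python. Note one difference: list(reversed(...)) and sorted(...) return copies, whereas the C# sample's temp = list makes temp an alias of the same underlying array:

```python
# Mirror of the C# demo: reverse a copy of the list, then sort a copy.
original = [34, 72, 13, 44, 25, 30, 10]

reversed_list = list(reversed(original))  # like Array.Reverse, but on a copy
sorted_list = sorted(original)            # like Array.Sort, but on a copy

print("Original:", original)
print("Reversed:", reversed_list)  # [10, 30, 25, 44, 13, 72, 34]
print("Sorted:  ", sorted_list)    # [10, 13, 25, 30, 34, 44, 72]
```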
What are the differences between a pointer variable and a reference variable in C++? | When a variable is declared as a reference, it becomes an alternative name for an existing variable.
Type &newname = existingname;
Pointers are used to store the address of a variable.
Type *pointer;
pointer = &variablename;
The main differences between references and pointers are -
References are used to refer to an existing variable by another name, whereas pointers are used to store the address of a variable.
A reference cannot be assigned a null value, but a pointer can be null.
A reference cannot be reseated to refer to a different variable after it is initialized, whereas a pointer can be reassigned to point to another variable at any time.
A reference must be initialized on declaration while it is not necessary in case of a pointer.
A reference shares the same memory address with the original variable but also takes up some space on the stack whereas a pointer has its own memory address and size on the stack. | [
Program to find minimum number of busses are required to pass through all stops in Python | Suppose we have a list of numbers called nums and that is showing the bus stops on a line where nums[i] shows the time a bus must arrive at station i. Now that buses can only move forward, we have to find the minimum number of buses that are needed to pass through all the stops.
So, if the input is like nums = [1, 2, 7, 9, 3, 4], then the output will be 2, as one bus can take stops [1, 2, 3, 4] and another can do [7, 9].
To solve this, we will follow these steps−
ans := 0

seen := a list of the same length as nums, initially filled with false

for each index i and corresponding value n in nums, do
   if seen[i] is false, then
      seen[i] := True
      ans := ans + 1
      prev := n
      for j in range i+1 to size of nums, do
         if nums[j] > prev and seen[j] is false, then
            seen[j] := True
            prev := nums[j]

return ans
Let us see the following implementation to get better understanding −
class Solution:
    def solve(self, nums):
        ans = 0
        seen = [False] * len(nums)
        for i, n in enumerate(nums):
            if not seen[i]:
                seen[i] = True
                ans += 1
                prev = n
                for j in range(i + 1, len(nums)):
                    if nums[j] > prev and not seen[j]:
                        seen[j] = True
                        prev = nums[j]
        return ans

ob = Solution()
nums = [1, 2, 7, 9, 3, 4]
print(ob.solve(nums))
[1, 2, 7, 9, 3, 4]
2 | [
Printing Triangle Pattern in Java - GeeksforGeeks | 17 Aug, 2018
Given a number N, the task is to print the following pattern:-
Examples:
Input : 10
Output :
*
* *
* * *
* * * *
* * * * *
* * * * * *
* * * * * * *
* * * * * * * *
* * * * * * * * *
* * * * * * * * * *
Input :5
Output :
*
* *
* * *
* * * *
* * * * *
There is a nested loop required to print the above pattern. The outer loop is used to run for the number of rows given as input. The first loop within the outer loop is used to print the spaces before each star. As you can see the number of spaces decreases with each row while we move towards the base of the triangle, so this loop runs one time less with each iteration. The second loop within the outer loop is used to print the stars. As you can see the number of stars increases in each row as we move towards the base of the triangle, so this loop runs one time more with each iteration. Clarity can be achieved if this program is dry run.
// Java Program to print the given pattern
import java.util.*; // package to use Scanner class

class pattern {
    public static void main(String[] args)
    {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter the number of rows to be printed");
        int rows = sc.nextInt();

        // loop to iterate for the given number of rows
        for (int i = 1; i <= rows; i++) {

            // loop to print the number of spaces before the stars
            for (int j = rows; j >= i; j--) {
                System.out.print(" ");
            }

            // loop to print the number of stars in each row
            for (int j = 1; j <= i; j++) {
                System.out.print("* ");
            }

            // for a new line after printing each row
            System.out.println();
        }
    }
}
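The same three-loop structure translates directly to Python, which can help if you want to dry-run the logic outside Java (the function name is illustrative):

```python
# Row i gets (rows - i + 1) leading spaces followed by i copies of "* ",
# mirroring the two inner loops of the Java program.
def triangle(rows):
    lines = []
    for i in range(1, rows + 1):
        lines.append(" " * (rows - i + 1) + "* " * i)
    return "\n".join(line.rstrip() for line in lines)

print(triangle(5))
```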
Java Program to convert int array to IntStream | To convert int array to IntStream, let us first create an int array:
int[] arr = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
Now, create IntStream and convert the above array to IntStream:
IntStream stream = Arrays.stream(arr);
Now limit some elements and find the sum of those elements in the stream:
IntStream stream = Arrays.stream(arr);
stream = stream.limit(7);
System.out.println("Sum of first 7 elements = "+stream.sum());
The following is an example to convert int array to IntStream:
import java.util.Arrays;
import java.util.stream.IntStream;
public class Demo {
public static void main(String[] args) {
int[] arr = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
System.out.println("Array elements...");
for (int res : arr)
{
System.out.println(res);
}
IntStream stream = Arrays.stream(arr);
stream = stream.limit(7);
System.out.println("Sum of first 7 elements = "+stream.sum());
}
}
Array elements...
10
20
30
40
50
60
70
80
90
100
Sum of first 7 elements = 280 | [
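For comparison, the same limit-then-sum pipeline in Python, with itertools.islice playing the role of IntStream.limit:

```python
from itertools import islice

arr = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

# Lazily take only the first 7 elements, then reduce with sum().
total = sum(islice(iter(arr), 7))
print("Sum of first 7 elements =", total)  # 280
```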
How to simulate touch event with android at a given position? | This example demonstrates how do I simulate touch even with android at a given position.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="4dp"
tools:context=".MainActivity">
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.java
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.MotionEvent;
import android.widget.Toast;
public class MainActivity extends AppCompatActivity {
private boolean isTouch = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
}
@Override
public boolean onTouchEvent(MotionEvent event) {
int X = (int) event.getX();
int Y = (int) event.getY();
int eventAction = event.getAction();
switch (eventAction) {
case MotionEvent.ACTION_DOWN:
Toast.makeText(this, "ACTION_DOWN "+"X: "+X+" Y: "+Y, Toast.LENGTH_SHORT).show();
isTouch = true;
break;
case MotionEvent.ACTION_MOVE:
Toast.makeText(this, "MOVE "+"X: "+X+" Y: "+Y,
Toast.LENGTH_SHORT).show();
break;
case MotionEvent.ACTION_UP:
Toast.makeText(this, "ACTION_UP "+"X: "+X+" Y: "+Y, Toast.LENGTH_SHORT).show();
break;
}
return true;
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="app.com.sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action
android:name="android.intent.action.MAIN" />
<category
android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −
MATLAB - Magnitude of a Vector
Magnitude of a vector v with elements v1, v2, v3, ..., vn, is given by the equation −
|v| = √(v1² + v2² + v3² + ... + vn²)
You need to take the following steps to calculate the magnitude of a vector −
Take the product of the vector with itself, using array multiplication (.*). This produces a vector sv, whose elements are squares of the elements of vector v.
sv = v.*v;
Use the sum function to get the sum of squares of elements of vector v. This is also called the dot product of vector v.
dp = sum(sv);
Use the sqrt function to get the square root of the sum, which is also the magnitude of the vector v.
mag = sqrt(dp);
Create a script file with the following code −
v = [1: 2: 20];
sv = v.* v; %the vector with elements
% as square of v's elements
dp = sum(sv); % sum of squares -- the dot product
mag = sqrt(dp); % magnitude
disp('Magnitude:');
disp(mag);
When you run the file, it displays the following result −
Magnitude:
36.469
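For readers without MATLAB at hand, the same three steps can be sketched in plain Python (my own translation for illustration, not part of the original tutorial):

```python
import math

# v = 1:2:20 in MATLAB: the odd numbers 1, 3, ..., 19
v = list(range(1, 20, 2))

# Step 1: element-wise square (the MATLAB v.*v)
sv = [x * x for x in v]

# Step 2: sum of squares -- the dot product of v with itself
dp = sum(sv)

# Step 3: square root of the sum gives the magnitude
mag = math.sqrt(dp)

print(round(mag, 3))  # 36.469, matching the MATLAB output
```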
Comparing Strings with (possible) null values in java?
Strings in Java represent sequences of characters. They are represented by the String class.
The compareTo() method of the String class compares two Strings (char by char). Note that it does not accept an actual null reference — calling it on, or passing it, null throws a NullPointerException. (In the examples below, typing "null" at the prompt produces the literal four-character string "null", not a null reference.) This method returns an integer representing the result; if the value of the obtained integer is −
0: The two Strings are equal.
Less than 0: The current String precedes the argument.
Greater than 0: The current String succeeds the argument.
import java.util.Scanner;
public class CompringStrings {
public static void main(String args[]) {
Scanner sc = new Scanner(System.in);
System.out.println("Enter your first string value: ");
String str1 = sc.next();
System.out.println("Enter your second string value: ");
String str2 = sc.next();
//Comparing two Strings
int res = str1.compareTo(str2);
System.out.println(res);
if(res==0) {
System.out.println("Both Strings are null or equal");
      }else if(res<0){
         System.out.println(""+str1+" precedes "+str2+"");
      }else if(res>0){
         System.out.println(""+str2+" precedes "+str1+"");
      }
}
}
Enter your first string value:
null
Enter your second string value:
null
0
Both Strings are null or equal
Enter your first string value:
mango
Enter your second string value:
apple
12
apple precedes mango
In the same way, the equals() method (overridden by String from the Object class) compares two String values and returns a boolean value, which is true if both are equal and false if not.
import java.util.Scanner;
public class CompringStrings {
public static void main(String args[]) {
Scanner sc = new Scanner(System.in);
System.out.println("Enter your first string value: ");
String str1 = sc.next();
System.out.println("Enter your second string value: ");
String str2 = sc.next();
if(str1.equals(str2)) {
System.out.println("Both Strings are null or equal");
}else {
System.out.println("Both Strings are not equal");
}
}
}
Enter your first string value:
null
Enter your second string value:
null
Both Strings are null or equal
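Both programs above still throw a NullPointerException when invoked on a genuine null reference. A null-safe sketch using java.util.Objects (the class name NullSafeCompare and the choice of Comparator.nullsFirst are my own additions, not part of the original example):

```java
import java.util.Comparator;
import java.util.Objects;

public class NullSafeCompare {
    // Treats null as preceding any non-null String; two nulls compare equal
    private static final Comparator<String> CMP =
        Comparator.nullsFirst(Comparator.naturalOrder());

    public static void main(String[] args) {
        String a = null;
        String b = "mango";

        // Objects.equals is null-safe: true if both are null, false if only one is
        System.out.println(Objects.equals(a, b));       // false
        System.out.println(Objects.equals(null, null)); // true

        // Objects.compare with a nullsFirst comparator never throws on null
        System.out.println(Objects.compare(a, b, CMP) < 0); // true: null precedes "mango"
    }
}
```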
CSS Gradients - GeeksforGeeks
21 Oct, 2021
The Gradient in CSS is a special type of image that is made up of progressive & smooth transition between two or more colors. CSS is the way to add style to various web documents. By using the gradient in CSS, we can create variants styling of images which can help to make an attractive webpage.
The Gradients can be categorized into 2 types:
Linear Gradients: It includes smooth color transitions going up, down, left, right, or diagonally. At least two colors are required to create a linear gradient; more than two colors are also possible. A starting point and a direction are needed for the gradient effect.
Syntax:
background-image: linear-gradient(direction, color-stop1, color-stop2, ...);
The linear-gradient can be implemented in the following ways:
Top to Bottom: In this image, the transition started with white color and ended with green color. On exchanging the color sequence, the transition will start with green and will end with white.
Example: This example illustrates the linear-gradient that starts from the top & ends at the bottom, initiating from the white color, transitioning to the green color.
HTML
<!DOCTYPE html>
<html>
<head>
    <title>CSS Gradients</title>
    <style>
        #main {
            height: 200px;
            background-color: white;
            background-image: linear-gradient(white, green);
        }

        .gfg {
            text-align: center;
            font-size: 40px;
            font-weight: bold;
            padding-top: 80px;
        }

        .geeks {
            font-size: 17px;
            text-align: center;
        }
    </style>
</head>

<body>
    <div id="main">
        <div class="gfg">GeeksforGeeks</div>
        <div class="geeks">
            A computer science portal for geeks
        </div>
    </div>
</body>

</html>
Output:
Left to Right: In this image, the transition started from left to right. It starts from white transitioning to green.
Example: This example illustrates the linear-gradient that starts from the left & ends at the right.
HTML
<!DOCTYPE html>
<html>
<head>
    <title>CSS Gradients</title>
    <style>
        #main {
            height: 200px;
            background-color: white;
            background-image: linear-gradient(to right, white, green);
        }

        .gfg {
            text-align: center;
            font-size: 40px;
            font-weight: bold;
            padding-top: 80px;
        }

        .geeks {
            font-size: 17px;
            text-align: center;
        }
    </style>
</head>

<body>
    <div id="main">
        <div class="gfg">GeeksforGeeks</div>
        <div class="geeks">
            A computer science portal for geeks
        </div>
    </div>
</body>

</html>
Output:
Diagonal: This transition started from top-left to bottom-right. It starts with the green transition to white. For the diagonal gradient, need to specify both horizontal and vertical starting positions.
Example: This example illustrates the linear-gradient with the diagonal transition by specifying both the horizontal and vertical starting positions.
HTML
<!DOCTYPE html>
<html>
<head>
    <title>CSS Gradients</title>
    <style>
        #main {
            height: 200px;
            background-color: white;
            background-image: linear-gradient(to bottom right, green, rgba(183, 223, 182, 0.4));
        }

        .gfg {
            text-align: center;
            font-size: 40px;
            font-weight: bold;
            padding-top: 80px;
        }

        .geeks {
            font-size: 17px;
            text-align: center;
        }
    </style>
</head>

<body>
    <div id="main">
        <div class="gfg">GeeksforGeeks</div>
        <div class="geeks">
            A computer science portal for geeks
        </div>
    </div>
</body>

</html>
Output:
Repeating Linear Gradient: CSS allows the user to implement multiple linear gradients using a single function repeating-linear-gradient(). The image here contains 3 colors in each transition with some percentage value.
Example: This example illustrates the linear-gradient with repeating transition effects by implementing the multicolors.
HTML
<!DOCTYPE html>
<html>
<head>
    <title>CSS Gradients</title>
    <style>
        #main {
            height: 200px;
            background-color: white;
            background-image: repeating-linear-gradient(#090, #fff 10%, #2a4f32 20%);
        }

        .gfg {
            text-align: center;
            font-size: 40px;
            font-weight: bold;
            padding-top: 80px;
        }

        .geeks {
            font-size: 17px;
            text-align: center;
        }
    </style>
</head>

<body>
    <div id="main">
        <div class="gfg">GeeksforGeeks</div>
        <div class="geeks">
            A computer science portal for geeks
        </div>
    </div>
</body>

</html>
Output:
Angles on Linear Gradients: CSS allows the user to implement directions in Linear Gradients rather than restricting themselves to predefined directions.
Example: This example illustrates the linear-gradient by implementing the direction on linear gradients.
HTML
<!DOCTYPE html>
<html>
<head>
    <title>CSS Gradients</title>
    <style>
        #main {
            height: 200px;
            background-color: white;
            background-image: repeating-linear-gradient(-45deg, #090, #2a4f32 10%);
        }

        .gfg {
            text-align: center;
            font-size: 40px;
            font-weight: bold;
            padding-top: 80px;
        }

        .geeks {
            font-size: 17px;
            text-align: center;
        }
    </style>
</head>

<body>
    <div id="main">
        <div class="gfg">GeeksforGeeks</div>
        <div class="geeks">
            A computer science portal for geeks
        </div>
    </div>
</body>

</html>
Output:
CSS Radial Gradients: A radial gradient differs from a linear gradient: it starts at a single point and emanates outward. By default, the gradient is elliptical in shape, the size is farthest-corner, and the first color starts at the center position of the element and then fades to the end color towards the edge of the element. The fade happens at an equal rate unless otherwise specified.
Syntax:
background-image: radial-gradient(shape size at position, start-color, ..., last-color);
The radial-gradient can be implemented in the following ways:
Radial Gradient – evenly spaced color stops: In CSS, by default, the fade happens at an equal rate. The following figure shows the Radial Gradient with even color stops.
Color stops: Color stops inform the browser what color to use at each point of the gradient and where to stop. By default they are equally spaced, but we can override this by providing specific color stops.
Example: This example illustrates the radial-gradient having evenly spaced color stops.
HTML
<!DOCTYPE html>
<html>
<head>
    <title>CSS Gradients</title>
    <style>
        #main {
            height: 350px;
            width: 700px;
            background-color: white;
            background-image: radial-gradient(#090, #fff, #2a4f32);
        }

        .gfg {
            text-align: center;
            font-size: 40px;
            font-weight: bold;
            padding-top: 80px;
        }

        .geeks {
            font-size: 17px;
            text-align: center;
        }
    </style>
</head>

<body>
    <div id="main">
        <div class="gfg">GeeksforGeeks</div>
        <div class="geeks">
            computer science portal for geeks
        </div>
    </div>
</body>

</html>
Output:
Radial Gradient- unevenly spaced color stops: CSS allows the user to have variation in spacing of color stops while applying the radial-gradient feature.
Example: This example illustrates the radial-gradient having unevenly spaced color stops.
HTML
<!DOCTYPE html>
<html>
<head>
    <title>CSS Gradients</title>
    <style>
        #main {
            height: 350px;
            width: 100%;
            background-color: white;
            background-image: radial-gradient(#090 40%, #fff, #2a4f32);
        }

        .gfg {
            text-align: center;
            font-size: 40px;
            font-weight: bold;
            padding-top: 80px;
        }

        .geeks {
            font-size: 17px;
            text-align: center;
        }
    </style>
</head>

<body>
    <div id="main">
        <div class="gfg">GeeksforGeeks</div>
        <div class="geeks">
            A computer science portal for geeks
        </div>
    </div>
</body>

</html>
Output:
Supported Browsers:
Google Chrome 26.0
Microsoft Edge 12.0
Firefox 16.0
Opera 12.1
Internet Explorer 10.0
Safari 6.1
AngularJS | AJAX - $http - GeeksforGeeks
27 Jun, 2019
AngularJS provides a control service named AJAX – $http, which serves the task of reading data available on remote servers. The browser requests the desired records, and the server answers by making a database call. The data is mostly needed in JSON format, primarily because JSON is an efficient format for transporting data, and also because it is straightforward and effortless to use within AngularJS, JavaScript, etc.
Syntax:
function studentController($scope, $http) {
   var url = "data.txt";
   $http.get(url).then(function(response) {
      $scope.students = response.data;
   });
}
Methods: There are lots of methods that can be used to call the $http service; these are also shortcut methods for $http.
.post()
.get()
.head()
.jsonp()
.patch()
.delete()
.put()
Properties: With the help of these properties, the response from the server is an object.
.headers : To get the header information (A Function).
.statusText: To define the HTTP status (A String).
.status: To define the HTTP status (A Number).
.data: To carry the response from the server (A string/ An Object).
.config: To generate the request (An Object).
Example: First of all, we need a file containing our data. For this example we use the file data.txt. The $http service makes an AJAX call and sets the response on the students scope variable. After this, the table is drawn in the HTML based on the students model.
The data.txt file:
[
   {
      "Name" : "Ronaldo",
      "Goals" : 128,
      "Ratio" : "69%"
   },
   {
      "Name" : "James",
      "Goals" : 7,
      "Ratio" : "70%"
   },
   {
      "Name" : "Ali",
      "Goals" : 786,
      "Ratio" : "99%"
   },
   {
      "Name" : "Lionel ",
      "Goals" : 210,
      "Ratio" : "100%"
   }
]
Code:
<!DOCTYPE html>
<html>
<head>
    <title>AngularJS AJAX - $http</title>
    <style>
        table, th, td {
            border: 1px solid #2E0854;
            border-collapse: collapse;
            padding: 5px;
        }

        table tr:nth-child(odd) {
            background-color: #F6ADCD;
        }

        table tr:nth-child(even) {
            background-color: #42C0FB;
        }
    </style>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.15/angular.min.js">
    </script>
</head>

<body>
    <center>
        <h1 style="color:green">GeeksforGeeks</h1>
        <h3>AJAX - $http</h3>
        <div ng-app="" ng-controller="studentController">
            <table>
                <tr>
                    <th>Name</th>
                    <th>Goals</th>
                    <th>Ratio</th>
                </tr>
                <tr ng-repeat="student in students">
                    <td>{{ student.Name }}</td>
                    <td>{{ student.Goals }}</td>
                    <td>{{ student.Ratio }}</td>
                </tr>
            </table>
        </div>
        <script>
            function studentController($scope, $http) {
                var url = "/data.txt";
                $http.get(url).then(function(response) {
                    $scope.students = response.data;
                });
            }
        </script>
    </center>
</body>

</html>
Output:
A gentle introduction to OCR. How and why to apply deep learning to... | by Gidi Shperber | Towards Data Science

Want to learn more? visit www.Shibumi-ai.com
Read here the revisited version of this post
OCR, or optical character recognition, is one of the earliest addressed computer vision tasks, since in some aspects it does not require deep learning. Therefore there were different OCR implementations even before the deep learning boom in 2012, and some even dated back to 1914 (!).
This makes many people think the OCR challenge is “solved”, it is no longer challenging. Another belief which comes from similar sources is that OCR does not require deep learning, or in other words, using deep learning for OCR is an overkill.
Anyone who practices computer vision, or machine learning in general, knows that there is no such thing as a solved task, and this case is not different. On the contrary, OCR yields very-good results only on very specific use cases, but in general, it is still considered as challenging.
Additionally, it’s true there are good solutions for certain OCR tasks that do not require deep learning. However, to really step forward towards better, more general solutions, deep learning will be mandatory.
Like many of my works/write-ups, this too started off as project for client. I was requested to solve a specific OCR task.
During and after working on this task, I’ve reached some conclusions and insights which are worth sharing. Additionally, after intensively working on a task, it is hard to stop and throw it away, so I keep my research going, and hoping to achieve an even better and more generalized solution.
In this post I will explore some of the strategies, methods and logic used to address different OCR tasks, and will share some useful approaches. In the last part, we will tackle a real world problem with code. This should not be considered as an exhaustive review (unfortunately) since the depth, history and breadth of approaches are too wide for this kind of a blog-post.
However, as always, I will not spare you from references to articles, data sets, repositories and other relevant blog-posts.
As hinted before, there are more than one meaning for OCR. In its most general meaning, it refers to extracting text from every possible image, be it a standard printed page from a book, or a random image with graffiti in it (“in the wild”). In between, you may find many other tasks, such as reading license plates, no-robot captchas, street signs etc.
Although each of these options has its own difficulties, clearly “in the wild” task is the hardest.
From these examples we can draw out some attributes of OCR tasks:
Text density: on a printed/written page, text is dense. However, given an image of a street with a single street sign, text is sparse.
Structure of text: text on a page is structured, mostly in strict rows, while text in the wild may be sprinkled everywhere, in different rotations.
Fonts: printed fonts are easier, since they are more structured then the noisy hand-written characters.
Character type: text may come in different language which may be very different from each other. Additionally, structure of text may be different from numbers, such as house numbers etc.
Artifacts: clearly, outdoor pictures are much noisier than the comfortable scanner.
Location: some tasks include cropped/centred text, while in others, text may be located in random locations in the image.
A good place to start from is SVHN, Street View House Numbers data-set. As its name implies, this is a data-set of house numbers extracted from google street view. The task difficulty is intermediate. The digits come in various shapes and writing styles, however, each house number is located in the middle of the image, thus detection is not required. The images are not of a very high resolution, and their arrangement may be a bit peculiar.
Another common challenge, which is not very hard and useful in practice, is the license plate recognition. This task, as most OCR tasks, requires to detect the license plate, and then recognizing it’s characters. Since the plate’s shape is relatively constant, some approach use simple reshaping method before actually recognizing the digits. Here are some examples from the web:
OpenALPR is a very robust tool, with no deep learning involved, to recognize license plates from various countries
This repo provides an implementation of CRNN model (will be further discussed) to recognize Korean license plates.
Supervise.ly, a data utilities company, wrote about training a license plate recognizer using artificial data generated by their tool (artificial data will also be further discussed)
Since the internet is full of robots, a common practice to tell them apart from real humans, are vision tasks, specifically text reading, aka CAPTCHA. Many of these texts are random and distorted, which should make it harder for computer to read. I’m not sure whoever developed the CAPTCHA predicted the advances in computer vision, however most of today text CAPTCHAs are not very hard to solve, especially if we don’t try to solve all of them at once.
Adam Geitgey provides a nice tutorial to solving some CAPTCHAs with deep learning, which includes synthesizing artificial data once again.
The most common scenario for OCR is the printed/pdf OCR. The structured nature of printed documents make it much easier to parse them. Most OCR tools (e.g Tesseract) are mostly intended to address this task, and achieve good result. Therefore, I will not elaborate too much on this task in this post.
This is the most challenging OCR task, as it introduces all general computer vision challenges such as noise, lighting, and artifacts into OCR. Some relevant data-sets for this task is the coco-text, and the SVT data set which once again, uses street view images to extract text from.
SynthText is not a data-set, and perhaps not even a task, but a nice idea to improve training efficiency is artificial data generation. Throwing random characters or words on an image will seem much more natural than any other object, because of the flat nature of text.
We have seen earlier some data generation for easier tasks like CAPTCHA and license plate. Generating text in the wild is a little bit more complex. The task includes considering depth information of an image. Fortunately, SynthText is a nice work that takes in images with the aforementioned annotations, and intelligently sprinkles words (from newsgroup data-set).
To make the “sprinkled” text look realistic and useful, the SynthText library takes with every image two masks, one of depth and another of segmentation. If you like to use your own images, you should add this data as well
It is recommended to check the repo and generate some images on your own. You should pay attention that the repo uses some outdated version of opencv and maptlotlib, so some modifications may be necessary.
Although not really an OCR task, it is impossible to write about OCR and not include the Mnist example. The most well known computer vision challenge is not really considered an OCR task, since it contains only one character (digit) at a time, and only 10 digits. However, it may hint at why OCR is considered easy. Additionally, in some approaches every letter is detected separately, and then Mnist-like (classification) models become relevant.
As we’ve seen and implied, the text recognition is mostly a two-step task. First, you would like to detect the text(s) appearances in the image, may it be dense (as in printed document) or sparse (As text in the wild).
After detecting the line/word level we can choose once again from a large set of solutions, which generally come from three main approaches:
Classic computer vision techniques.
Specialized deep learning.
Standard deep learning approach (Detection).
Let’s examine each of them:
As said earlier, computer vision solves various text recognition problems for a long time. You can find many examples online:
The Great Adrian Rosebrook has a tremendous number of tutorials in his site, like this one, this one and more.
Stack overflow has also some gems like this one.
The classic-CV approach generally proceeds as follows:
Apply filters to make the characters stand out from the background.
Apply contour detection to recognize the characters one by one.
Apply image classification to identify the characters
Clearly, if part two is done well, part three is easy either with pattern matching or machine learning (e.g Mnist).
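As a toy illustration of steps one and two — thresholding, then isolating each character as a connected blob — here is a dependency-free sketch on a tiny hand-made "image". This is only a conceptual stand-in: a real pipeline would use OpenCV filters and contour detection instead.

```python
# A tiny 5x12 grayscale "image" containing two bright blobs (characters)
img = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0, 0, 8, 8, 8, 0, 0],
    [0, 9, 9, 0, 0, 0, 0, 0, 8, 0, 0, 0],
    [0, 9, 9, 0, 0, 0, 0, 8, 8, 8, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]

# Step 1: filter -- threshold so characters stand out from the background
binary = [[1 if px > 5 else 0 for px in row] for row in img]

# Step 2: "contour detection" -- label 4-connected components via flood fill
def connected_components(grid):
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not seen[y][x]:
                stack, blob = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

chars = connected_components(binary)
print(len(chars))  # 2 -- each blob would then go to a classifier (step 3)
```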
However, the contour detection is quite challenging to generalize. It requires a lot of manual fine-tuning, and therefore becomes infeasible for most problems. E.g., let's apply a simple computer vision script from here on some images from the SVHN data-set. At first attempt we may achieve very good results:
But when characters are closer to each other, things start to break:
I’ve found out the hard way that when you start messing around with the parameters, you may reduce such errors, but unfortunately cause others. In other words, if your task is not straightforward, these methods are not the way to go.
Most successful deep learning approaches excel in their generality. However, considering the attributes described above, Specialized networks can be very useful.
I’ll examine here a non-exhaustive sample of some prominent approaches, and do a very quick summary of the articles which present them. As always, every article opens with the words “task X (text recognition) has gained attention lately” and goes on to describe its method in detail. Reading the articles carefully reveals that these methods are assembled from pieces of previous deep-learning/text-recognition works.
Results are also depicted thoroughly; however, due to many differences in design (including minor differences in data sets), actual comparison is quite impossible. The only way to actually know the performance of these methods on your task is to get their code (best to worst: find an official repo, find an unofficial but highly rated repo, implement it yourself) and try it on your data.
Thus, we will always prefer articles with good accompanying repos, and even demos if possible.
EAST
EAST ( Efficient accurate scene text detector) is a simple yet powerful approach for text detection. Using a specialized network.
Unlike the other methods we’ll discuss, EAST is limited to text detection (not actual recognition); however, its robustness makes it worth mentioning. Another advantage is that it was also added to the open-CV library (from version 4), so you can easily use it (see tutorial here). The network is actually a version of the well known U-Net, which is good for detecting features that may vary in size. The underlying feed-forward “stem” (as coined in the article, see figure below) of this network may vary — PVANet is used in the paper, while the opencv implementation uses Resnet. Obviously, it can also be pre-trained (e.g. with imagenet). As in U-Net, features are extracted from different levels of the network.
Finally, the network allows two types of output for rotated bounding boxes: either a standard bounding box plus a rotation angle (2X2+1 parameters), or a “quadrangle”, which is merely a rotated bounding box given by the coordinates of all four vertices.
If real life results will be as in the above images, recognizing the texts will not take much of an effort. However, real life results are not perfect.
CRNN
Convolutional-recurrent neural network (CRNN) is an article from 2015 which suggests a hybrid (or tribrid?) end-to-end architecture, intended to capture words in a three-step approach.
The idea goes as follows: the first level is a standard fully convolutional network. The last layer of the net is defined as feature layer, and divided into “feature columns”. See in the image below how every such feature column is intended to represent a certain section in the text.
Afterwards, the feature columns are fed into a deep-bidirectional LSTM which outputs a sequence, and is intended for finding relations between the characters.
Finally, the third part is a transcription layer. Its goal is to take the messy character sequence, in which some characters are redundant and others are blank, and use probabilistic method to unify and make sense out of it.
This method is called CTC loss, and can be read about here. This layer can be used with/without predefined lexicon, which may facilitate predictions of words.
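The decoding side of CTC is easy to demonstrate: merge repeated labels, then drop the blanks. A minimal greedy-decoding sketch in Python (using '-' as the blank symbol; real implementations operate on per-frame probability distributions, not a ready-made label string):

```python
from itertools import groupby

BLANK = "-"

def ctc_greedy_decode(frames):
    """Collapse a per-frame label sequence: merge repeats, then remove blanks."""
    # Step 1: merge consecutive duplicate labels
    merged = [label for label, _ in groupby(frames)]
    # Step 2: drop the blank symbol, which separates genuinely repeated characters
    return "".join(label for label in merged if label != BLANK)

# Frames as a CRNN might emit them, one label per feature column
print(ctc_greedy_decode("hh-ee-ll-ll-oo"))  # hello
print(ctc_greedy_decode("--s-t--a--tt-e--"))  # state
```

Note how the blank lets CTC represent real double letters: without the "-" between the two "ll" groups above, they would collapse into a single "l".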
This paper reaches high (>95%) rates of accuracy with fixed text lexicon, and varying rates of success without it.
SEE — Semi-Supervised End-to-End Scene Text Recognition is a work by Christian Bartzi. He and his colleagues apply a truly end-to-end strategy to detect and recognize text. They use very weak supervision (which they refer to as semi-supervision, in a different meaning than usual), as they train the network with only text annotation (without bounding boxes). This allows them to use more data, but makes their training procedure quite challenging, and they discuss different tricks to make it work, e.g. not training on images with more than two lines of text (at least at the first stages of training).
The paper has an earlier version which is called STN OCR. In the final paper the researchers have refined their methods and presentation, and additionally they’ve put more emphasis on the generality of their approach on the account of high quality of the results.
The name STN-OCR hints on the strategy, of using spatial transformer (=STN, no relation to the recent google transformer).
They train two concatenated networks in which the first network, the transformer, learns a transformation on the image to output an easier sub-image to interpret.
Then, another feed-forward network with an LSTM on top (hmm... seems like we’ve seen it before) recognizes the text.
The researchers emphasize here the importance of using ResNet (they use it twice) since it provides “strong” propagation to the early layers; however, this practice is quite accepted nowadays.
Either way, this is an interesting approach to try.
As the header implies, after detecting the “words” we can apply standard deep learning detection approaches, such as SSD, YOLO and Mask RCNN. I’m not going to elaborate too much on these approaches since there is a plethora of info online.
I must say this is currently my favorite approach, since what I like in deep learning is the “end to end” philosophy, where you apply a strong model which with some tuning will solve almost every problem. In the next section of this post we will see how it actually works.
However, SSD and other detection models are challenged when it comes to dense, similar classes, as reviewed here. I find it a bit ironic since, in fact, deep learning models find it much more difficult to recognize digits and letters than to recognize much more challenging and elaborate objects such as dogs, cats or humans. They tend not to reach the desired accuracy, and therefore specialized approaches thrive.
So after all the talking, it’s time to get our hands dirty and try some modelling ourselves. We will try to tackle the SVHN task. The SVHN data contains three different data-sets: train, test and extra. The differences are not 100% clear; however, the extra data-set, which is the biggest (with ~500K samples), includes images that are somehow easier to recognize. So for the sake of this task we will use it.
To prepare for the task, do the following:
You’ll need a basic GPU machine with Tensorflow≥1.4 and Keras≥2
Clone the SSD_Keras project from here.
Download the pre-trained SSD300 model on coco data-set from here.
Clone this project's repo from here.
Download the extra.tar.gz file, which contains the extra images of SVHN data-set.
Update all relevant paths in json_config.json in this project repo.
To efficiently follow the process, you should read the below instruction along with running the ssd_OCR.ipynb notebook from the project’s repo.
And... You are ready to start!
Like it or not, there is no “golden” format for data representation in detection tasks. Some well-known formats are: coco, via, pascal, xml, and there are more. For instance, the SVHN data-set is annotated with the obscure .mat format. Fortunately for us, this gist provides a slick read_process_h5 script to convert the .mat file to standard json, and you should go one step ahead and convert it further to pascal format, like so:
def json_to_pascal(json, filename):  # filename is the .mat file
    # convert json to pascal and save as csv
    pascal_list = []
    for i in json:
        for j in range(len(i['labels'])):
            pascal_list.append({'fname': i['filename'],
                'xmin': int(i['left'][j]), 'xmax': int(i['left'][j] + i['width'][j]),
                'ymin': int(i['top'][j]), 'ymax': int(i['top'][j] + i['height'][j]),
                'class_id': int(i['labels'][j])})
    df_pascal = pd.DataFrame(pascal_list, dtype='str')
    df_pascal.to_csv(filename, index=False)

p = read_process_h5(file_path)
json_to_pascal(p, data_folder + 'pascal.csv')
Now we should have a pascal.csv file that is much more standard and will allow us to progress. If the conversion is too slow, you should take note that we don’t need all the data samples; ~10K will be enough.
Before starting the modeling process, you had better do some exploration of the data. I only provide a quick function for a sanity test, but I recommend you do some further analysis:
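One practical detail when subsampling a detection data-set: several annotation rows usually share one image file, so sampling should be done per image, not per row. The sketch below is my own illustration with made-up file names; the real code would operate on the pascal.csv rows.

```python
import random

# Toy stand-in for pascal.csv annotation rows: (fname, class_id) pairs;
# several rows can share the same image file
rows = [('a.png', 1), ('a.png', 2), ('b.png', 3), ('c.png', 4), ('c.png', 5)]

# Sample whole images (2 here; ~10000 for the real data-set), not raw rows,
# so a kept image never loses part of its bounding boxes
random.seed(0)
unique_fnames = sorted({fname for fname, _ in rows})
keep = set(random.sample(unique_fnames, 2))
subset = [r for r in rows if r[0] in keep]
print(sorted({fname for fname, _ in subset}))
```

The key point is that every annotation of a sampled image survives, which keeps the subset valid as detection training data.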
def viz_random_image(df):
    file = np.random.choice(df.fname)
    im = skimage.io.imread(data_folder + file)
    annots = df[df.fname == file].iterrows()
    plt.figure(figsize=(6, 6))
    plt.imshow(im)
    current_axis = plt.gca()
    for box in annots:
        label = box[1]['class_id']
        current_axis.add_patch(plt.Rectangle(
            (box[1]['xmin'], box[1]['ymin']),
            box[1]['xmax'] - box[1]['xmin'],
            box[1]['ymax'] - box[1]['ymin'],
            color='blue', fill=False, linewidth=2))
        current_axis.text(box[1]['xmin'], box[1]['ymin'], label,
            size='x-large', color='white',
            bbox={'facecolor': 'blue', 'alpha': 1.0})
    plt.show()

viz_random_image(df)
For the following steps, I provide a utils_ssd.py in the repo that facilitates the training, weight loading etc. Some of the code is taken from the SSD_Keras repo, which is also used extensively.
As previously discussed, we have many possible approaches for this problem. In this tutorial I’ll take a standard deep learning detection approach, and will use the SSD detection model. We will use the SSD Keras implementation from here. This is a nice implementation by PierreLuigi. Although it has fewer GitHub stars than the rykov8 implementation, it seems more updated and is easier to integrate. This is a very important thing to notice when you choose which project you are going to use. Other good choices would be the YOLO model and the Mask RCNN.
Some definitions
To use the repo, you’ll need to verify you have the SSD_keras repo, and fill in the paths in the json_config.json file, to allow the notebook to find the paths.
Start with importing:
import os
import sys
import skimage.io
import scipy
import json

with open('json_config.json') as f:
    json_conf = json.load(f)

ROOT_DIR = os.path.abspath(json_conf['ssd_folder'])  # add here mask RCNN path
sys.path.append(ROOT_DIR)

import cv2
from utils_ssd import *
import pandas as pd
from PIL import Image
from matplotlib import pyplot as plt

%matplotlib inline
%load_ext autoreload
%autoreload 2
and some more definitions:
task = 'svhn'
labels_path = f'{data_folder}pascal.csv'
input_format = ['class_id', 'image_name', 'xmax', 'xmin', 'ymax', 'ymin']

df = pd.read_csv(labels_path)
Model configurations:
class SVHN_Config(Config):
    batch_size = 8
    dataset_folder = data_folder
    task = task
    labels_path = labels_path
    input_format = input_format

conf = SVHN_Config()
resize = Resize(height=conf.img_height, width=conf.img_width)
trans = [resize]
Define the model, load weights
As in most deep learning cases, we won’t start training from scratch; instead we’ll load pre-trained weights. In this case, we’ll load the weights of an SSD model trained on the COCO data-set, which has 80 classes. Clearly our task has only 10 classes, therefore we will reconstruct the top layer to have the right number of outputs after loading the weights. We do it in the init_weights function. A side note: the right number of outputs in this case is 44: 4 for each class (bounding box coordinates) and another 4 for the background/none class.
learner = SSD_finetune(conf)
learner.get_data(create_subset=True)
weights_destination_path = learner.init_weights()
learner.get_model(mode='training', weights_path=weights_destination_path)
model = learner.model
learner.get_input_encoder()
ssd_input_encoder = learner.ssd_input_encoder

# Training schedule definitions
adam = Adam(lr=0.0002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, n_neg_min=0, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
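As a side check on the output-count side note above (my own arithmetic sketch, not code from the repo), the 44 outputs follow from 4 bounding-box parameters per class over the 10 digit classes plus the background class:

```python
# 10 digit classes, plus 1 background/none class, 4 box parameters each
n_classes = 10
n_outputs = 4 * (n_classes + 1)
print(n_outputs)  # 44
```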
Define data loaders
train_annotation_file = f'{conf.dataset_folder}train_pascal.csv'
val_annotation_file = f'{conf.dataset_folder}val_pascal.csv'
subset_annotation_file = f'{conf.dataset_folder}small_pascal.csv'
batch_size = 4
ret_5_elements = {'original_images', 'processed_images', 'processed_labels', 'filenames', 'inverse_transform'}

train_generator = learner.get_generator(batch_size, trans=trans,
    anot_file=train_annotation_file, encoder=ssd_input_encoder)
val_generator = learner.get_generator(batch_size, trans=trans,
    anot_file=val_annotation_file,
    returns={'processed_images', 'encoded_labels'},
    encoder=ssd_input_encoder, val=True)
Now that the model is ready, we’ll set some last training-related definitions and start training:
learner.init_training()
history = learner.train(train_generator, val_generator, steps=100, epochs=80)
As a bonus, I’ve included the training_plot callback in the training script to visualize a random image after every epoch. For example, here is a snapshot of predictions after the sixth epoch:
The SSD_Keras repo handles saving the model after almost every epoch, so you can later load the models simply by changing the weights_destination_path line to equal the path:
weights_destination_path = <path>
If you followed my instructions, you should be able to train the model. The ssd_keras project provides some more features, e.g. data augmentation, different loaders, and an evaluator. I’ve reached >80 mAP after a short training.
How high did you achieve?
In this post, we discussed the different challenges and approaches in the OCR field. As with many problems in deep learning/computer vision, there is much more to it than it seems at first. We have seen many of its sub-tasks, and some different approaches to solving it, none of which currently serves as a silver bullet. On the other hand, we’ve seen that it is not very hard to reach preliminary results without too much hassle.
Hope you’ve enjoyed!
How to overplot a line on a scatter plot in Python? | First, we can create a scatter for different data points using the scatter method, and then, we can plot the lines using the plot method.
Create a new figure, or activate an existing figure, with figure size (4, 3), using the figure() method.
Add an axes to the current figure and make it the current axes, using plt.axes().
Draw scatter points using the scatter() method.
Draw a line using the ax.plot() method.
Set the X-axis label using the ax.set_xlabel() method.
Set the Y-axis label using the ax.set_ylabel() method.
To show the plot, use the plt.show() method.
import random
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 3))
ax = plt.axes()
ax.scatter([random.randint(1, 1000) % 50 for i in range(100)],
[random.randint(1, 1000) % 50 for i in range(100)])
ax.plot([1, 2, 4, 50], [1, 2, 4, 50])
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
Getting Started With Apache Spark, Python and PySpark | by Hadi Fadlallah | Towards Data Science | This article is a quick guide to Apache Spark single node installation, and how to use Spark python library PySpark.
Hadoop Version: 3.1.0
Apache Kafka Version: 1.1.1
Operating System: Ubuntu 16.04
Java Version: Java 8
Apache Spark requires Java. To ensure that Java is installed, first update the Operating System then try to install it:
sudo apt-get update
sudo apt-get -y upgrade
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get install oracle-java8-installer
First, we need to create a directory for apache Spark.
sudo mkdir /opt/spark
Then, we need to download apache spark binaries package.
wget "http://www-eu.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz"
Next, we need to extract apache spark files into /opt/spark directory
sudo tar -xzvf spark-2.3.1-bin-hadoop2.7.tgz --directory=/opt/spark --strip-components=1
When Spark launches jobs it transfers its jar files to HDFS so they’re available to any machines working. These files are a large overhead on smaller jobs so I’ve packaged them up, copied them to HDFS and told Spark it doesn’t need to copy them over any more.
jar cv0f ~/spark-libs.jar -C /opt/spark/jars/ .
hdfs dfs -mkdir /spark-libs
hdfs dfs -put ~/spark-libs.jar /spark-libs/
After copying the files we must tell Spark to ignore copying jar files from the spark defaults configuration file:
sudo gedit /opt/spark/conf/spark-defaults.conf
Add the following lines:
spark.master spark://localhost:7077
spark.yarn.preserve.staging.files true
spark.yarn.archive hdfs:///spark-libs/spark-libs.jar
In this article we will configure Apache Spark to run on a single node, so it will be only localhost:
sudo gedit /opt/spark/conf/slaves
Make sure that it contains only the value localhost
Before running the service we must open .bashrc file using gedit
sudo gedit ~/.bashrc
And add the following lines
export SPARK_HOME=/opt/spark
export SPARK_CONF_DIR=/opt/spark/conf
export SPARK_MASTER_HOST=localhost
Now, we have to run Apache Spark services:
sudo /opt/spark/sbin/start-master.sh
sudo /opt/spark/sbin/start-slaves.sh
Ubuntu 16.04 ships with both Python 3 and Python 2 pre-installed. To make sure that our versions are up-to-date, we must update and upgrade the system with apt-get (mentioned in the prerequisites section):
sudo apt-get update
sudo apt-get -y upgrade
We can check the version of Python 3 that is installed in the system by typing:
python3 -V
It must return the python release (example: Python 3.5.2)
To manage software packages for Python, we must install pip utility:
sudo apt-get install -y python3-pip
There are a few more packages and development tools to install to ensure that we have a robust set-up for our programming environment.
sudo apt-get install build-essential libssl-dev libffi-dev python-dev
We need to first install the venv module, which allows us to create virtual environments:
sudo apt-get install -y python3-venv
Next, we have to create a directory for our environment
mkdir testenv
Now we have to go to this directory and create the environment (all environment files will be created inside a directory that we called my_env):
cd testenv
python3 -m venv my_env
Once finished, we can check the environment files created using ls my_env.
To use this environment, you need to activate it:
source my_env/bin/activate
First we need to open the .bashrc file
sudo gedit ~/.bashrc
And add the following lines:
export PYTHONPATH=/usr/lib/python3.5
export PYSPARK_SUBMIT_ARGS="--master local[*] pyspark-shell"
export PYSPARK_PYTHON=/usr/bin/python3.5
If we have Apache Spark installed on the machine, we don’t need to install the pyspark library into our development environment. We need to install the findspark library, which is responsible for locating the pyspark library installed with Apache Spark.
pip3 install findspark
In each python script file we must add the following lines:
import findspark
findspark.init()
The following script is to read from a file stored in hdfs
import findspark
findspark.init()

from pyspark.sql import SparkSession

sparkSession = SparkSession.builder.appName("example-pyspark-hdfs").getOrCreate()
df_load = sparkSession.read.csv('hdfs://localhost:9000/myfiles/myfilename')
df_load.show()
We first must add the spark-streaming-kafka-0-8-assembly_2.11-2.3.1.jar library to our Apache Spark jars directory /opt/spark/jars. We can download it from the mvn repository:
- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-8-assembly_2.11/2.3.1
The following codes read messages from a Kafka topic consumer and print them line by line:
import findspark
findspark.init()

from kafka import KafkaConsumer
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

KAFKA_TOPIC = 'KafkaTopicName'
KAFKA_BROKERS = 'localhost:9092'
ZOOKEEPER = 'localhost:2181'

sc = SparkContext('local[*]', 'test')
ssc = StreamingContext(sc, 60)
kafkaStream = KafkaUtils.createStream(ssc, ZOOKEEPER, 'spark-streaming', {KAFKA_TOPIC: 1})
lines = kafkaStream.map(lambda x: x[1])
lines.pprint()
ssc.start()
ssc.awaitTermination()
[1] M. Litwintschik, “Hadoop 3 Single-Node Install Guide,” 19 March 2018. [Online]. Available: http://tech.marksblogg.com/hadoop-3-single-node-install-guide.html. [Accessed 01 June 2018].
[2] L. Tagiaferri, “How To Install Python 3 and Set Up a Local Programming Environment on Ubuntu 16.04,” 20 December 2017. [Online]. Available: https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-ubuntu-16-04. [Accessed 01 August 2018].
[3] “Apache Spark Official Documentation,” [Online]. Available: https://spark.apache.org/docs/latest/. [Accessed 05 August 2018].
[4] “Stack Overflow Q&A” [Online]. Available: https://stackoverflow.com/. [Accessed 01 June 2018].
[5] A. GUPTA, “Complete Guide on DataFrame Operations in PySpark,” 23 October 2016. [Online]. Available: https://www.analyticsvidhya.com/blog/2016/10/spark-dataframe-and-operations/. [Accessed 14 August 2018].
How to use mat-icon in angular? | 09 Jun, 2020
To include icons in your webpage, you can use mat-icon directive. Previously it was known as md-icon. It is better to use mat-icon as they serve SVG icons i.e. vector-based icons which are adaptable to any resolution and dimension, on the other hand, raster-based icons have a fixed pattern of dots with specified values and if resized, the resolution changes.
Approach:
First of all we have to load the font library in your HTML file using the following syntax:
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
Now import MatIconModule in the ngmodule.ts file by using this command:
import {MatIconModule} from '@angular/material/icon';
Use the following command to display an icon:
<mat-icon>icon-name</mat-icon>
You can change the color of the icons as per the requirement:
Primary.
Accent.
Warn.
These icons may be used as buttons or may convey some message, such as the type of a form field, status, etc.
Example:
Using mat-icon let’s create three different buttons.
In your index.html file, load the font library.
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Tutorial</title>
    <!-- font library is loaded prior to using mat-icons -->
    <link href="https://fonts.googleapis.com/icon?family=Material+Icons&display=block" rel="stylesheet">
</head>
<body>
    <app-child></app-child>
</body>
</html>
Now use mat-icon as buttons.
import { Component } from '@angular/core';

@Component({
  selector: 'app-child',
  template: `
    <button><mat-icon color="primary">check</mat-icon></button>
    <button><mat-icon color="accent">check</mat-icon></button>
    <button><mat-icon color="warn">check</mat-icon></button>
  `,
  styleUrls: []
})
export class childComponent { }
Output:
How to Install PyGTK in Python on Windows? | 29 Oct, 2021
PyGTK is a Python package or module that enables developers to work with GTK+ GUI Toolkit. This is how WikiBooks describes GTK+:
“GTK+ is a highly usable, feature rich toolkit for creating graphical user interfaces which boasts cross platform compatibility and an easy to use API.”
And this is how gtk.org the official website of GTK markets it:
Offering a complete set of UI elements, GTK is suitable for projects ranging from small one-off tools to complete application suites.
Although Python comes with a built-in module called Tkinter which is used to make simple GUI applications. But many developers are unsatisfied by the appearance of the application made by it and it is not even feature-rich so as to help in making elaborate software. Therefore GTK+ in association with Python is considered to be a great alternative to Tkinter. Although GTK library supports many programming languages such as C, JavaScript, Perl, Rust and Vala. But it is most widely used in association with Python which employs a language wrapper to fully utilize the capabilities of the official GNOME binding to produce the most stable applications.
GTK library comes jam-packed with features that enable developers to do whatever they want. Here are some of the main selling points of GTK.
Portability
Stability
Language Binging
Interfaces
OpenSource
API
Accommodation
Foundations
Listing some of the famous applications which are made by this GTK are as follows:
GIMP
Transmission
Evolution
Polari
Games of all sorts
Etc.
Below we will go through all the steps to install PyGTK in the windows operating system in detail.
Prerequisite: Install Python interpreter.
If you don’t already have Python installed on your Windows system then follow this tutorial.
Step 1: Install MSYS2
The first step in order to install PyGTK in our system is to install MSYS2. MSYS2 is a set of libraries that makes it easy to install and work with native programs in Windows. To put it simply, it is a command-line tool similar to what we get in Linux-based operating systems. This command-line tool is called mintty. It uses bash and is somewhat similar to the Git version control system. Developers who have used Linux-based operating systems for development understand very well the powers that this feature-packed tool brings to developers. It comes with a package manager of its own called Pacman. Some of its simple use cases are complete upgrades of systems and packages.
To install this application we first need to visit the msys2.org . There on the home page itself, there is an installation guide and in the first step, one can find the button to download the 64-bit executable file of MSYS2.
After the download is complete the installer will be launched on double-click. There is nothing much to the installation process, only the destination folder is to be selected. But it is recommended you leave it as it is.
And when the installation process is finished remember to check the run option at the last window and click finish button.
Now we should have a terminal window open after clicking on the finish button. This means the installation process was completed successfully.
You can also add the installation location of the MSYS2 in the Environment Variables so as to access it from anywhere in your system.
Step 2: Update System
Now what needs to be done is to issue two commands which will upgrade our system and all the existing packages to prevent a breakdown in the future. The first thing to do is to visit the installation location of MSYS2 in the C drive and open the msys2 application.
So the first command to be issued in the command line tool is as follows:
pacman -Syy
The above command will not take more than a minute to complete on a decent internet speed. And the second command is as follows:
pacman -Syuu
The above command will ask for confirmation, which can be given by pressing the y key.
Step 3: install GTK3
Now that the system is up to date we can proceed with the install of the GTK3 library. And for that, we need to run a command in the same terminal window as in the previous step.
pacman -S mingw-w64-x86_64-gtk3
Once this command is completely executed it will signify that all the main stuff for our endeavour is ready. Now only a few tweaks are needed which will be done in the coming steps.
Step 4: Install Glade
Glade is a tool whose main function is to act as the GUI designer for GTK3. To install this tool we need to run the below command:
pacman -S mingw-w64-x86_64-glade
Step 5: Install Python Binding
Since our main motive is to work with GTK3 along with Python we also need to download one last tool or library that will bridge the gap between the Python script and GTK3 commands. The tool that is needed here is Python binding. Depending on whether you have python 2 or python 3 installed on your system you can run the suitable command.
Python 3:
pacman -S mingw-w64-x86_64-python3-gobject
Python 2:
pacman -S mingw-w64-x86_64-python2-gobject
So this completes all the steps for the installation of PyGTK on a Windows system.
How to Zip two lists of lists in Python? | 29 Jun, 2022
The normal zip function allows us the functionality to aggregate the values in a container. But sometimes, we have a requirement in which we require to have multiple lists and containing lists as index elements and we need to merge/zip them together. This is quite uncommon problem, but solution to it can still be handy. Let’s discuss certain ways in which solution can be devised.
Method #1 : Using map() + __add__ This problem can be solved using the map function with the addition operation. The map function performs a similar kind of function as the zip function, and in this case can help reach a solution.
Python3
# Python3 code to demonstrate
# zipping lists of lists
# using map() + __add__

# initializing lists
test_list1 = [[1, 3], [4, 5], [5, 6]]
test_list2 = [[7, 9], [3, 2], [3, 10]]

# printing original lists
print("The original list 1 is : " + str(test_list1))
print("The original list 2 is : " + str(test_list2))

# using map() + __add__
# zipping lists of lists
res = list(map(list.__add__, test_list1, test_list2))

# printing result
print("The modified zipped list is : " + str(res))
Output :
The original list 1 is : [[1, 3], [4, 5], [5, 6]]
The original list 2 is : [[7, 9], [3, 2], [3, 10]]
The modified zipped list is : [[1, 3, 7, 9], [4, 5, 3, 2], [5, 6, 3, 10]]
Method #2 : Using itertools.chain() + zip() The combination of these two functions can be used to perform this particular task. The chain function performs the inter-list aggregation, and the intra-list aggregation is done by the zip function.
Python3
# Python3 code to demonstrate
# zipping lists of lists
# using itertools.chain() + zip()
import itertools

# initializing lists
test_list1 = [[1, 3], [4, 5], [5, 6]]
test_list2 = [[7, 9], [3, 2], [3, 10]]

# printing original lists
print("The original list 1 is : " + str(test_list1))
print("The original list 2 is : " + str(test_list2))

# using itertools.chain() + zip()
# zipping lists of lists
res = [list(itertools.chain(*i)) for i in zip(test_list1, test_list2)]

# printing result
print("The modified zipped list is : " + str(res))
Output :
The original list 1 is : [[1, 3], [4, 5], [5, 6]]
The original list 2 is : [[7, 9], [3, 2], [3, 10]]
The modified zipped list is : [[1, 3, 7, 9], [4, 5, 3, 2], [5, 6, 3, 10]]
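As an aside (my own addition, not one of the article's methods), the same result can also be obtained with a plain list comprehension and the + operator, which many readers may find more direct than map or itertools:

```python
# Alternative sketch: concatenate paired sublists with + inside a comprehension
test_list1 = [[1, 3], [4, 5], [5, 6]]
test_list2 = [[7, 9], [3, 2], [3, 10]]

res = [a + b for a, b in zip(test_list1, test_list2)]
print(res)  # [[1, 3, 7, 9], [4, 5, 3, 2], [5, 6, 3, 10]]
```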
Python Program to check whether all elements in a string list are numeric | 24 Jan, 2021
Given a list that contains only string elements the task here is to write a Python program to check if all of them are numeric or not. If all are numeric return True otherwise, return False.
Input : test_list = [“434”, “823”, “98”, “74”]
Output : True
Explanation : All Strings are digits.
Input : test_list = [“434”, “82e”, “98”, “74”]
Output : False
Explanation : e is not digit, hence verdict is False.
Method 1 : Using all(), isdigit() and generator expression
In this, we check whether each string is a number using isdigit(). all() is used to check that all strings are numbers; iteration over each string is done using a generator expression.
Example:
Python3
# initializing list
test_list = ["434", "823", "98", "74"]

# printing original list
print("The original list is : " + str(test_list))

# checking all elements to be numeric using isdigit()
res = all(ele.isdigit() for ele in test_list)

# printing result
print("Are all strings digits ? : " + str(res))
Output:
The original list is : ['434', '823', '98', '74']
Are all strings digits ? : True
Method 2 : Using all(), isdigit() and map()
In this, we extend the test logic to each string using map() rather than a generator expression. The rest of the functionality is the same as in the above method.
Example:
Python3
# initializing list
test_list = ["434", "823", "98", "74"]

# printing original list
print("The original list is : " + str(test_list))

# checking all elements to be numeric using isdigit()
# map() to extend to each element
res = all(map(str.isdigit, test_list))

# printing result
print("Are all strings digits ? : " + str(res))
Output:
The original list is : ['434', '823', '98', '74']
Are all strings digits ? : True
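One caveat worth knowing (my addition, not part of the original write-up): str.isdigit() only accepts unsigned integer digits, so strings with a sign, a decimal point, or no characters at all are reported as non-numeric. A minimal sketch:

```python
# isdigit() is strict: signs, decimal points, and empty strings return False
samples = ["42", "-4", "3.14", ""]
flags = [s.isdigit() for s in samples]
print(flags)  # [True, False, False, False]
```

If signed or floating-point strings must count as numeric, a try/float conversion check is a common alternative.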
XML | Tags | 12 Apr, 2021
XML tags are an important feature of an XML document. XML is similar to HTML, but it is more flexible than HTML because it allows you to create new (user-defined) tags. The first element of an XML document is called the root element. A simple XML document contains an opening tag and a closing tag. XML tags are case sensitive, i.e. <root> and <Root> are different tags. XML tags are used to define the scope of elements in an XML document. Property of XML Tags: There are many properties of XML tags, which are discussed below:
Every XML document must have a root tag which encloses the whole document. The root tag does not have to be named root; any valid tag name can be used. Example:
html
<root>
    <name>GeeksforGeeks</name>
    <address>
        <sector>142</sector>
        <location>Noida</location>
    </address>
</root>
Every XML element has a start tag; the first start tag in the document is the root tag. An opening tag begins with the < bracket, followed by the tag (element) name, and closes with the > bracket. Example:
html
<Name>GeeksforGeeks
<address>Noida
Every start tag must be matched by an end tag with the same name; in other words, every element in an XML document must be closed. The end tag starts with < followed by /, then the tag name, and ends with >. Example:
html
<Name>GeeksforGeeks</Name>
<address>Noida</address>
In XML, tags are case sensitive: <Root> and <root> are different tags. Example:
html
<Name>Case sensitive</Name>
<name>name and Name are different tags</name>
A tag that contains no content is known as an empty tag. Example:
html
<name> </name>
<address/>
XML tags must be closed in the proper order: a tag opened inside another element must be closed before the outer element is closed. Example:
html
<root>
    <name>GeeksforGeeks</name>
    <address>
        <add>Sector 142 Noida</add>
        <pin>201302</pin>
        <country>India</country>
    </address>
</root>
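The well-formedness rules above (matching end tags, case sensitivity) are exactly what an XML parser enforces. As a quick illustration (my addition, not from the article), Python's standard-library parser rejects a document whose tag case does not match:

```python
import xml.etree.ElementTree as ET

well_formed = "<root><name>GeeksforGeeks</name></root>"
mismatched = "<Root><name>GeeksforGeeks</name></root>"  # <Root> != <root>

root = ET.fromstring(well_formed)  # parses fine
print(root.find("name").text)      # GeeksforGeeks

try:
    ET.fromstring(mismatched)
    ok = True
except ET.ParseError:
    ok = False  # case mismatch -> not well-formed
print(ok)      # False
```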
How to set the Auto Size Mode of FlowLayoutPanel in C#? | 02 Aug, 2019
In Windows Forms, the FlowLayoutPanel control is used to arrange its child controls in a horizontal or vertical flow direction. In other words, a FlowLayoutPanel is a container used to organize controls of the same or different types, either horizontally or vertically. Using the AutoSizeMode property, you can set a value that indicates how the FlowLayoutPanel behaves when its AutoSize property is set to true. This property has two values, defined under the AutoSizeMode enum:
GrowOnly: This value indicates that the FlowLayoutPanel grows according to its content, but does not shrink if the content is less.
GrowAndShrink: This value indicates that the FlowLayoutPanel grows and shrinks according to the content present in it.
The default value of this property is GrowOnly. You can set this property in two different ways:
1. Design-Time: It is the easiest way to set the AutoSizeMode property of the FlowLayoutPanel as shown in the following steps:
Step 1: Create a Windows form as shown in the below image:
Visual Studio -> File -> New -> Project -> WindowsFormApp
Step 2: Next, drag and drop the FlowLayoutPanel control from the toolbox to the form as shown in the below image:
Step 3: After drag and drop you will go to the properties of the FlowLayoutPanel and set the AutoSizeMode property of the FlowLayoutPanel as shown in the below image:
Output:
2. Run-Time: It is a little bit trickier than the above method. In this method, you can set the AutoSizeMode property of the FlowLayoutPanel control programmatically with the help of given syntax:
public virtual System.Windows.Forms.AutoSizeMode AutoSizeMode { get; set; }
Here, AutoSizeMode represents the size modes of the FlowLayoutPanel control. It will throw an InvalidEnumArgumentException if the value of this property does not belong to AutoSizeMode enum values. The following steps show how to set the AutoSizeMode property of the FlowLayoutPanel dynamically:
Step 1: Create a FlowLayoutPanel using the FlowLayoutPanel() constructor provided by the FlowLayoutPanel class.

// Creating a FlowLayoutPanel
FlowLayoutPanel f = new FlowLayoutPanel();
Step 2: After creating the FlowLayoutPanel, set its AutoSizeMode property, provided by the FlowLayoutPanel class.

// Setting the AutoSizeMode property
f.AutoSizeMode = AutoSizeMode.GrowAndShrink;
Step 3: Finally, add the FlowLayoutPanel control to the form, and add child controls to the FlowLayoutPanel, using the following statements:

// Adding a FlowLayoutPanel
// control to the form
this.Controls.Add(f);

and

// Adding child controls to
// the FlowLayoutPanel
f.Controls.Add(r1);
Example:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace WindowsFormsApp50 {

public partial class Form1 : Form {

    public Form1()
    {
        InitializeComponent();
    }

    private void Form1_Load(object sender, EventArgs e)
    {
        // Creating and setting the
        // properties of FlowLayoutPanel
        FlowLayoutPanel f = new FlowLayoutPanel();
        f.Location = new Point(380, 124);
        f.AutoSize = true;
        f.AutoSizeMode = AutoSizeMode.GrowAndShrink;
        f.Name = "Mycontainer";
        f.Font = new Font("Calibri", 12);
        f.FlowDirection = FlowDirection.RightToLeft;
        f.BorderStyle = BorderStyle.Fixed3D;
        f.ForeColor = Color.BlueViolet;
        f.BackColor = Color.LightPink;
        f.Visible = true;

        // Adding this control to the form
        this.Controls.Add(f);

        // Creating and setting the
        // properties of radio buttons
        RadioButton r1 = new RadioButton();
        r1.Location = new Point(3, 3);
        r1.Size = new Size(95, 20);
        r1.Text = "R1";

        // Adding this control
        // to the FlowLayoutPanel
        f.Controls.Add(r1);

        RadioButton r2 = new RadioButton();
        r2.Location = new Point(94, 3);
        r2.Size = new Size(95, 20);
        r2.Text = "R2";

        // Adding this control
        // to the FlowLayoutPanel
        f.Controls.Add(r2);

        RadioButton r3 = new RadioButton();
        r3.Location = new Point(3, 26);
        r3.Size = new Size(95, 20);
        r3.Text = "R3";

        // Adding this control
        // to the FlowLayoutPanel
        f.Controls.Add(r3);
    }
}
}
Output:
p5.js | quad() Function | 12 Apr, 2019
The quad() function is an inbuilt function in p5.js which is used to draw a quadrilateral, a four-sided polygon. This function creates a shape similar to a rectangle, but the angles between edges are not constrained to 90 degrees.
Syntax:
quad(x1, y1, x2, y2, x3, y3, x4, y4)
or
quad(x1, y1, z1, x2, y2, z2, x3, y3, z3, x4, y4, z4)
Parameters: This function accepts eight parameters (or twelve, when z-coordinates are supplied) as mentioned above and described below:
x1: This parameter takes the x-coordinate of the first point.
y1: This parameter takes the y-coordinate of the first point.
z1: This parameter takes the z-coordinate of the first point.
x2: This parameter takes the x-coordinate of the second point.
y2: This parameter takes the y-coordinate of the second point.
z2: This parameter takes the z-coordinate of the second point.
x3: This parameter takes the x-coordinate of the third point.
y3: This parameter takes the y-coordinate of the third point.
z3: This parameter takes the z-coordinate of the third point.
x4: This parameter takes the x-coordinate of the fourth point.
y4: This parameter takes the y-coordinate of the fourth point.
z4: This parameter takes the z-coordinate of the fourth point.
The programs below illustrate the quad() function in p5.js:
Example 1: This example uses quad() function to create a polygon without using z-coordinate.
function setup() {
    // Create canvas of given size
    createCanvas(400, 400);
}

function draw() {
    // Set the background color
    background(220);
    noStroke();

    // Set the fill color
    fill('green');

    // x1, y1 = 38, 31; x2, y2 = 300, 20;
    // x3, y3 = 100, 63; x4, y4 = 130, 250
    quad(38, 31, 300, 20, 100, 63, 130, 250);
}
Output:
Example 2: This example uses quad() function to create polygon with z-coordinate.
function setup() {
    // Create canvas of given size
    createCanvas(400, 400);
}

function draw() {
    // Set the background color
    background(99);
    noStroke();

    // Set the filled color
    fill('pink');

    // x1, y1, z1 = 38, 131, 100;
    // x2, y2, z2 = 320, 100, 63;
    // x3, y3, z3 = 130, 150, 134;
    // x4, y4, z4 = 155, 66, 88;
    quad(38, 131, 100, 320, 100, 63,
         130, 150, 134, 155, 66, 88);
}
Output:
Reference: https://p5js.org/reference/#/p5/quad
Lowest Common Ancestor in a Binary Tree | Set 2 (Using Parent Pointer) | 29 Jun, 2022
Given values of two nodes in a Binary Tree, find the Lowest Common Ancestor (LCA). It may be assumed that both nodes exist in the tree.
For example, consider the Binary Tree in diagram, LCA of 10 and 14 is 12 and LCA of 8 and 14 is 8.
Let T be a rooted tree. The lowest common ancestor between two nodes n1 and n2 is defined as the lowest node in T that has both n1 and n2 as descendants (where we allow a node to be a descendant of itself). Source : Wikipedia.
We have discussed different approaches to find the LCA in Set 1. Finding the LCA becomes easy when a parent pointer is given, as we can easily find all ancestors of a node by following parent pointers.
Below are steps to find LCA.
Create an empty hash table.
Insert n1 and all of its ancestors in hash table.
Check if n2 or any of its ancestors exist in hash table, if yes return the first existing ancestor.
Below is the implementation of above steps.
C++
Java
C#
// C++ program to find lowest common ancestor using parent pointer
#include <bits/stdc++.h>
using namespace std;

// A Tree Node
struct Node
{
    Node *left, *right, *parent;
    int key;
};

// A utility function to create a new BST node
Node *newNode(int item)
{
    Node *temp = new Node;
    temp->key = item;
    temp->parent = temp->left = temp->right = NULL;
    return temp;
}

/* A utility function to insert a new node with
   given key in Binary Search Tree */
Node *insert(Node *node, int key)
{
    /* If the tree is empty, return a new node */
    if (node == NULL) return newNode(key);

    /* Otherwise, recur down the tree */
    if (key < node->key)
    {
        node->left = insert(node->left, key);
        node->left->parent = node;
    }
    else if (key > node->key)
    {
        node->right = insert(node->right, key);
        node->right->parent = node;
    }

    /* return the (unchanged) node pointer */
    return node;
}

// To find LCA of nodes n1 and n2 in Binary Tree
Node *LCA(Node *n1, Node *n2)
{
    // Create a map to store ancestors of n1
    map <Node *, bool> ancestors;

    // Insert n1 and all its ancestors in map
    while (n1 != NULL)
    {
        ancestors[n1] = true;
        n1 = n1->parent;
    }

    // Check if n2 or any of its ancestors is in map
    while (n2 != NULL)
    {
        if (ancestors.find(n2) != ancestors.end())
            return n2;
        n2 = n2->parent;
    }

    return NULL;
}

// Driver method to test above functions
int main(void)
{
    Node * root = NULL;

    root = insert(root, 20);
    root = insert(root, 8);
    root = insert(root, 22);
    root = insert(root, 4);
    root = insert(root, 12);
    root = insert(root, 10);
    root = insert(root, 14);

    Node *n1 = root->left->right->left;
    Node *n2 = root->left;

    Node *lca = LCA(n1, n2);
    printf("LCA of %d and %d is %d \n", n1->key, n2->key, lca->key);

    return 0;
}
import java.util.HashMap;
import java.util.Map;

// Java program to find lowest common ancestor using parent pointer

// A tree node
class Node {
    int key;
    Node left, right, parent;

    Node(int key)
    {
        this.key = key;
        left = right = parent = null;
    }
}

class BinaryTree {
    Node root, n1, n2, lca;

    /* A utility function to insert a new node with
       given key in Binary Search Tree */
    Node insert(Node node, int key)
    {
        /* If the tree is empty, return a new node */
        if (node == null)
            return new Node(key);

        /* Otherwise, recur down the tree */
        if (key < node.key) {
            node.left = insert(node.left, key);
            node.left.parent = node;
        }
        else if (key > node.key) {
            node.right = insert(node.right, key);
            node.right.parent = node;
        }

        /* return the (unchanged) node pointer */
        return node;
    }

    // To find LCA of nodes n1 and n2 in Binary Tree
    Node LCA(Node n1, Node n2)
    {
        // Create a map to store ancestors of n1
        Map<Node, Boolean> ancestors = new HashMap<Node, Boolean>();

        // Insert n1 and all its ancestors in map
        while (n1 != null) {
            ancestors.put(n1, Boolean.TRUE);
            n1 = n1.parent;
        }

        // Check if n2 or any of its ancestors is in map
        while (n2 != null) {
            if (ancestors.containsKey(n2))
                return n2;
            n2 = n2.parent;
        }
        return null;
    }

    // Driver method to test above functions
    public static void main(String[] args)
    {
        BinaryTree tree = new BinaryTree();
        tree.root = tree.insert(tree.root, 20);
        tree.root = tree.insert(tree.root, 8);
        tree.root = tree.insert(tree.root, 22);
        tree.root = tree.insert(tree.root, 4);
        tree.root = tree.insert(tree.root, 12);
        tree.root = tree.insert(tree.root, 10);
        tree.root = tree.insert(tree.root, 14);

        tree.n1 = tree.root.left.right.left;
        tree.n2 = tree.root.left;
        tree.lca = tree.LCA(tree.n1, tree.n2);

        System.out.println("LCA of " + tree.n1.key + " and "
                           + tree.n2.key + " is " + tree.lca.key);
    }
}

// This code has been contributed by Mayank Jaiswal(mayank_24)
// C# program to find lowest common ancestor using parent pointer
using System;
using System.Collections;
using System.Collections.Generic;

// A tree node
public class Node {
    public int key;
    public Node left, right, parent;

    public Node(int key)
    {
        this.key = key;
        left = right = parent = null;
    }
}

class BinaryTree {
    Node root, n1, n2, lca;

    /* A utility function to insert a new node with
       given key in Binary Search Tree */
    Node insert(Node node, int key)
    {
        /* If the tree is empty, return a new node */
        if (node == null)
            return new Node(key);

        /* Otherwise, recur down the tree */
        if (key < node.key) {
            node.left = insert(node.left, key);
            node.left.parent = node;
        }
        else if (key > node.key) {
            node.right = insert(node.right, key);
            node.right.parent = node;
        }

        /* return the (unchanged) node pointer */
        return node;
    }

    // To find LCA of nodes n1 and n2 in Binary Tree
    Node LCA(Node n1, Node n2)
    {
        // Create a map to store ancestors of n1
        Dictionary<Node, Boolean> ancestors = new Dictionary<Node, Boolean>();

        // Insert n1 and all its ancestors in map
        while (n1 != null) {
            ancestors.Add(n1, true);
            n1 = n1.parent;
        }

        // Check if n2 or any of its ancestors is in map
        while (n2 != null) {
            if (ancestors.ContainsKey(n2))
                return n2;
            n2 = n2.parent;
        }
        return null;
    }

    // Driver code
    public static void Main(String []args)
    {
        BinaryTree tree = new BinaryTree();
        tree.root = tree.insert(tree.root, 20);
        tree.root = tree.insert(tree.root, 8);
        tree.root = tree.insert(tree.root, 22);
        tree.root = tree.insert(tree.root, 4);
        tree.root = tree.insert(tree.root, 12);
        tree.root = tree.insert(tree.root, 10);
        tree.root = tree.insert(tree.root, 14);

        tree.n1 = tree.root.left.right.left;
        tree.n2 = tree.root.left;
        tree.lca = tree.LCA(tree.n1, tree.n2);

        Console.WriteLine("LCA of " + tree.n1.key + " and "
                          + tree.n2.key + " is " + tree.lca.key);
    }
}

// This code is contributed by Arnab Kundu
LCA of 10 and 8 is 8
Note : The above implementation uses insert of Binary Search Tree to create a Binary Tree, but the function LCA is for any Binary Tree (not necessarily a Binary Search Tree).
Time Complexity: O(h), where h is the height of the Binary Tree, if we use a hash table to implement the solution. (Note that the above implementation uses map, which takes O(log h) time to insert and find, so its time complexity is O(h log h).)
Auxiliary Space : O(h)
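As a sketch of the ancestor-set steps above (my addition, not part of the original article), the same idea in Python, assuming each node stores a parent pointer:

```python
class Node:
    def __init__(self, key, parent=None):
        self.key = key
        self.parent = parent

def lca(n1, n2):
    # Steps 1-2: collect n1 and all of its ancestors in a set.
    ancestors = set()
    while n1 is not None:
        ancestors.add(n1)
        n1 = n1.parent
    # Step 3: walk up from n2; the first node already seen is the LCA.
    while n2 is not None:
        if n2 in ancestors:
            return n2
        n2 = n2.parent
    return None
```

A Python set gives expected O(1) membership tests, so this sketch runs in expected O(h) time.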
An O(h) time and O(1) extra space solution: The above solution requires extra space because it stores visited ancestors in a hash table. We can solve the problem in O(1) extra space using the following fact: if both nodes are at the same level and we traverse up using the parent pointers of both nodes, the first common node on the path to the root is the LCA.

The idea is to find the depths of the given nodes and move the deeper node's pointer up by the difference between the depths. Once both nodes are at the same level, traverse them up together and return the first common node.
Thanks to Mysterious Mind for suggesting this approach.
C++
Java
C#
// C++ program to find lowest common ancestor using parent pointer
#include <bits/stdc++.h>
using namespace std;

// A Tree Node
struct Node
{
    Node *left, *right, *parent;
    int key;
};

// A utility function to create a new BST node
Node *newNode(int item)
{
    Node *temp = new Node;
    temp->key = item;
    temp->parent = temp->left = temp->right = NULL;
    return temp;
}

/* A utility function to insert a new node with
   given key in Binary Search Tree */
Node *insert(Node *node, int key)
{
    /* If the tree is empty, return a new node */
    if (node == NULL) return newNode(key);

    /* Otherwise, recur down the tree */
    if (key < node->key)
    {
        node->left = insert(node->left, key);
        node->left->parent = node;
    }
    else if (key > node->key)
    {
        node->right = insert(node->right, key);
        node->right->parent = node;
    }

    /* return the (unchanged) node pointer */
    return node;
}

// A utility function to find depth of a node
// (distance of it from root)
int depth(Node *node)
{
    int d = -1;
    while (node)
    {
        ++d;
        node = node->parent;
    }
    return d;
}

// To find LCA of nodes n1 and n2 in Binary Tree
Node *LCA(Node *n1, Node *n2)
{
    // Find depths of two nodes and differences
    int d1 = depth(n1), d2 = depth(n2);
    int diff = d1 - d2;

    // If n2 is deeper, swap n1 and n2
    if (diff < 0)
    {
        Node * temp = n1;
        n1 = n2;
        n2 = temp;
        diff = -diff;
    }

    // Move n1 up until it reaches the same level as n2
    while (diff--)
        n1 = n1->parent;

    // Now n1 and n2 are at same levels
    while (n1 && n2)
    {
        if (n1 == n2)
            return n1;
        n1 = n1->parent;
        n2 = n2->parent;
    }

    return NULL;
}

// Driver method to test above functions
int main(void)
{
    Node * root = NULL;

    root = insert(root, 20);
    root = insert(root, 8);
    root = insert(root, 22);
    root = insert(root, 4);
    root = insert(root, 12);
    root = insert(root, 10);
    root = insert(root, 14);

    Node *n1 = root->left->right->left;
    Node *n2 = root->right;

    Node *lca = LCA(n1, n2);
    printf("LCA of %d and %d is %d \n", n1->key, n2->key, lca->key);

    return 0;
}
import java.util.HashMap;
import java.util.Map;

// Java program to find lowest common ancestor using parent pointer

// A tree node
class Node {
    int key;
    Node left, right, parent;

    Node(int key)
    {
        this.key = key;
        left = right = parent = null;
    }
}

class BinaryTree {
    Node root, n1, n2, lca;

    /* A utility function to insert a new node with
       given key in Binary Search Tree */
    Node insert(Node node, int key)
    {
        /* If the tree is empty, return a new node */
        if (node == null)
            return new Node(key);

        /* Otherwise, recur down the tree */
        if (key < node.key) {
            node.left = insert(node.left, key);
            node.left.parent = node;
        }
        else if (key > node.key) {
            node.right = insert(node.right, key);
            node.right.parent = node;
        }

        /* return the (unchanged) node pointer */
        return node;
    }

    // A utility function to find depth of a node
    // (distance of it from root)
    int depth(Node node)
    {
        int d = -1;
        while (node != null) {
            ++d;
            node = node.parent;
        }
        return d;
    }

    // To find LCA of nodes n1 and n2 in Binary Tree
    Node LCA(Node n1, Node n2)
    {
        // Find depths of two nodes and differences
        int d1 = depth(n1), d2 = depth(n2);
        int diff = d1 - d2;

        // If n2 is deeper, swap n1 and n2
        if (diff < 0) {
            Node temp = n1;
            n1 = n2;
            n2 = temp;
            diff = -diff;
        }

        // Move n1 up until it reaches the same level as n2
        while (diff-- != 0)
            n1 = n1.parent;

        // Now n1 and n2 are at same levels
        while (n1 != null && n2 != null) {
            if (n1 == n2)
                return n1;
            n1 = n1.parent;
            n2 = n2.parent;
        }

        return null;
    }

    // Driver method to test above functions
    public static void main(String[] args)
    {
        BinaryTree tree = new BinaryTree();
        tree.root = tree.insert(tree.root, 20);
        tree.root = tree.insert(tree.root, 8);
        tree.root = tree.insert(tree.root, 22);
        tree.root = tree.insert(tree.root, 4);
        tree.root = tree.insert(tree.root, 12);
        tree.root = tree.insert(tree.root, 10);
        tree.root = tree.insert(tree.root, 14);

        tree.n1 = tree.root.left.right.left;
        tree.n2 = tree.root.right;
        tree.lca = tree.LCA(tree.n1, tree.n2);

        System.out.println("LCA of " + tree.n1.key + " and "
                           + tree.n2.key + " is " + tree.lca.key);
    }
}

// This code has been contributed by Mayank Jaiswal(mayank_24)
// C# program to find lowest common
// ancestor using parent pointer
using System;

// A tree node
public class Node
{
    public int key;
    public Node left, right, parent;

    public Node(int key)
    {
        this.key = key;
        left = right = parent = null;
    }
}

class GFG
{
    public Node root, n1, n2, lca;

    /* A utility function to insert a new node with
       given key in Binary Search Tree */
    public virtual Node insert(Node node, int key)
    {
        /* If the tree is empty, return a new node */
        if (node == null)
        {
            return new Node(key);
        }

        /* Otherwise, recur down the tree */
        if (key < node.key)
        {
            node.left = insert(node.left, key);
            node.left.parent = node;
        }
        else if (key > node.key)
        {
            node.right = insert(node.right, key);
            node.right.parent = node;
        }

        /* return the (unchanged) node pointer */
        return node;
    }

    // A utility function to find depth of a
    // node (distance of it from root)
    public virtual int depth(Node node)
    {
        int d = -1;
        while (node != null)
        {
            ++d;
            node = node.parent;
        }
        return d;
    }

    // To find LCA of nodes n1 and n2
    // in Binary Tree
    public virtual Node LCA(Node n1, Node n2)
    {
        // Find depths of two nodes
        // and differences
        int d1 = depth(n1), d2 = depth(n2);
        int diff = d1 - d2;

        // If n2 is deeper, swap n1 and n2
        if (diff < 0)
        {
            Node temp = n1;
            n1 = n2;
            n2 = temp;
            diff = -diff;
        }

        // Move n1 up until it reaches
        // the same level as n2
        while (diff-- != 0)
        {
            n1 = n1.parent;
        }

        // Now n1 and n2 are at same levels
        while (n1 != null && n2 != null)
        {
            if (n1 == n2)
            {
                return n1;
            }
            n1 = n1.parent;
            n2 = n2.parent;
        }
        return null;
    }

    // Driver Code
    public static void Main(string[] args)
    {
        GFG tree = new GFG();
        tree.root = tree.insert(tree.root, 20);
        tree.root = tree.insert(tree.root, 8);
        tree.root = tree.insert(tree.root, 22);
        tree.root = tree.insert(tree.root, 4);
        tree.root = tree.insert(tree.root, 12);
        tree.root = tree.insert(tree.root, 10);
        tree.root = tree.insert(tree.root, 14);

        tree.n1 = tree.root.left.right.left;
        tree.n2 = tree.root.right;
        tree.lca = tree.LCA(tree.n1, tree.n2);

        Console.WriteLine("LCA of " + tree.n1.key
                          + " and " + tree.n2.key + " is " + tree.lca.key);
    }
}

// This code is contributed by Shrikant13
LCA of 10 and 22 is 20
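The same depth-equalizing, two-pointer idea can also be sketched self-contained in Python (my addition, again assuming each node carries a parent pointer):

```python
class Node:
    def __init__(self, key, parent=None):
        self.key = key
        self.parent = parent

def depth(node):
    # Distance of a node from the root (the root has depth 0).
    d = -1
    while node is not None:
        d += 1
        node = node.parent
    return d

def lca(n1, n2):
    d1, d2 = depth(n1), depth(n2)
    if d1 < d2:                       # make n1 the deeper node
        n1, n2, d1, d2 = n2, n1, d2, d1
    for _ in range(d1 - d2):          # lift n1 to n2's level
        n1 = n1.parent
    while n1 is not n2:               # walk up in lockstep
        n1, n2 = n1.parent, n2.parent
    return n1
```

This uses O(1) extra space and O(h) time, matching the analysis above.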
You may like to see below articles as well :
Lowest Common Ancestor in a Binary Tree | Set 1
Lowest Common Ancestor in a Binary Search Tree.
Find LCA in Binary Tree using RMQ
This article is contributed by Dheeraj Gupta. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
How to show/hide data when the particular condition is true in AngularJS ? | 18 Nov, 2020
In AngularJS, in order to hide or show data or content, we can use the *ngIf structural directive. It evaluates a condition and displays content based on it: if the condition is true the content is displayed, and if the condition is false the content is not rendered.
Approach:
In order to give a lucid and elaborate view, I’ll explain the concept using an example.
Consider we have an array of objects in .ts file and the objects in the array contain a list of technical and non-technical companies.
The objective of this experiment is to display the data for all the technical companies and to hide all the non-technical companies.
In order to traverse through the array, we will use the *ngFor directive in .html file.
Once you are done with the implementation start the server.
Implementation:
app.component.ts:
Javascript
import { Component } from '@angular/core';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent {
  companies = [
    {
      name: "Microsoft",
      isTechnical: true,
    },
    {
      name: "GeeksForGeeks",
      isTechnical: true
    },
    {
      name: "Netflix",
      isTechnical: false
    },
    {
      name: "TCS",
      isTechnical: true
    }
  ]
}
app.module.ts:
Javascript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';

import { AppComponent } from './app.component';

@NgModule({
  imports: [ BrowserModule, FormsModule ],
  declarations: [ AppComponent ],
  bootstrap: [ AppComponent ]
})
export class AppModule { }
app.component.html:
HTML
<ol>
  <li *ngFor="let company of companies">
    <i *ngIf="company.isTechnical">
      {{company.name}}
    </i>
  </li>
</ol>
Output: You can clearly see the 3rd item is hidden because its condition is false. In order to give a more-visible example, I left it blank.
C# | CompareOrdinal() Method | 31 May, 2022
In C#, CompareOrdinal() is a string method. This method is used to compare the two specified string objects or substrings using the numerical values of the corresponding Char objects in each string or substring. This method can be overloaded by passing different parameters to it.
CompareOrdinal(String, String)
CompareOrdinal(String, Int32, String, Int32, Int32)
CompareOrdinal(String, String)
This method is used to compare two particular String objects by evaluating the numeric values of the corresponding Char objects in each string.
Syntax:
public static int CompareOrdinal(
string strA, string strB)
Parameters: This method accepts two parameters, strA and strB. strA is the first string to compare and strB is the second string to compare. Both parameters are of type System.String.
Return Value: This method returns an integer value of type System.Int32. If both strings are equal, it returns 0. It returns a positive number if the first string is greater than the second, and a negative number otherwise.
Example:
Input:
string s1 = "GFG";
string s2 = "GFG";
string.CompareOrdinal(s1, s2)
Output: 0
Input:
string s1 = "hello";
string s2 = "csharp";
string.CompareOrdinal(s1, s2)
Output: 5
Input:
string s1 = "csharp";
string s2 = "mello";
string.CompareOrdinal(s1, s2)
Output: -10
Program: To illustrate the CompareOrdinal(string strA, string strB) method.
CSharp
// C# program to demonstrate the
// CompareOrdinal(string strA, string strB)
using System;

class Geeks {

    // Main Method
    public static void Main(string[] args)
    {
        // strings to be compared
        string s1 = "GFG";
        string s2 = "GFG";
        string s3 = "hello";
        string s4 = "csharp";

        // using CompareOrdinal(string strA, string strB)
        // method to compare and displaying resultant value
        Console.WriteLine(string.CompareOrdinal(s1, s2));
        Console.WriteLine(string.CompareOrdinal(s1, s3));
        Console.WriteLine(string.CompareOrdinal(s3, s4));
    }
}
0
-33
5
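To make the ordinal comparison concrete, here is a rough Python analogue (my addition, not from the article) that returns the difference of the code points at the first mismatching position, which matches the outputs shown above:

```python
def compare_ordinal(a, b):
    # Walk both strings together; at the first differing character
    # return the difference of the Unicode code points.
    for ca, cb in zip(a, b):
        if ca != cb:
            return ord(ca) - ord(cb)
    # One string is a prefix of the other (or they are equal):
    # the longer string compares greater.
    return len(a) - len(b)

print(compare_ordinal("GFG", "GFG"))       # 0
print(compare_ordinal("GFG", "hello"))     # -33  ('G' = 71, 'h' = 104)
print(compare_ordinal("hello", "csharp"))  # 5    ('h' = 104, 'c' = 99)
```

The exact magnitude .NET returns is an implementation detail; only the sign of the result is guaranteed by the documentation.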
CompareOrdinal(String, Int32, String, Int32, Int32)
This method is used to compare substrings of two particular string objects by evaluating the numeric values of the corresponding Char objects in each substring.
Syntax:
public static int CompareOrdinal(
string strA,
int indexA,
string strB,
int indexB,
int length
)
Parameters: This method takes five parameters: strA is the first string object, strB is the second string object, indexA is the starting index of the substring in strA, indexB is the starting index of the substring in strB, and length is the maximum number of characters in the substrings to compare. strA and strB are of type System.String; indexA, indexB and length are of type System.Int32.

Return Value: This method returns an integer value of type System.Int32. It returns 0 if both substrings are equal (or if length is zero), a positive value if the substring in strA is greater than the substring in strB, and a negative value otherwise.

Exception: This method throws an ArgumentOutOfRangeException in three cases:
if strA is not null and indexA is greater than the Length of strA.
if indexA, indexB, or length is negative.
if strB is not null and the indexB is greater than the Length of strB.
Program: To illustrate the CompareOrdinal(string strA, int indexA, string strB, int indexB, int length):
CSharp
// C# program to illustrate the
// CompareOrdinal(String, Int32,
// String, Int32, Int32) method
using System;

class GFG {

    // Main Method
    static public void Main()
    {
        // strings to be compared
        string s1 = "GeeksforGeeks";
        string s2 = "GforG";

        // starting index of substrings
        int sub1 = 5;
        int sub2 = 1;

        // length (5th parameter)
        int l1 = 3;

        // using CompareOrdinal(String, Int32,
        // String, Int32, Int32) method and
        // storing the result in variable res
        int res = string.CompareOrdinal(s1, sub1, s2, sub2, l1);

        // Displaying the result
        Console.WriteLine("The Result is: " + res);
    }
}
The Result is: 0
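To make the ordinal semantics concrete, here is a rough pure-Python sketch of the comparison described above. This is illustrative only: the real C# method guarantees only the sign of the result, and the exception behavior is omitted here.

```python
def compare_ordinal(str_a, index_a, str_b, index_b, length):
    """Sketch of C#'s String.CompareOrdinal(strA, indexA, strB, indexB, length):
    compare up to `length` characters of two substrings by code point."""
    sub_a = str_a[index_a:index_a + length]
    sub_b = str_b[index_b:index_b + length]
    for ca, cb in zip(sub_a, sub_b):
        if ca != cb:
            # first differing character decides the sign
            return ord(ca) - ord(cb)
    # all compared characters equal: the shorter substring sorts first
    return len(sub_a) - len(sub_b)

# Mirrors the C# program: "for" in "GeeksforGeeks" vs "for" in "GforG"
print(compare_ordinal("GeeksforGeeks", 5, "GforG", 1, 3))  # 0
```

Both substrings here are "for", so the result is 0, matching the C# output above.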
How to use goto in JavaScript?
11 Oct, 2019
There is no goto keyword in JavaScript. The reason is that goto lets you branch in an arbitrary, unstructured manner, which can make code hard to understand and maintain. But there are still ways to get the desired result.

The way to achieve the effect of goto in JavaScript is to use break and continue. In addition to its use in switch statements, the break statement can provide a "civilized" form of goto: with labelled break you can break out of one or more blocks of code. These blocks need not be part of a loop or switch; they can be any block of code. You can also specify precisely where execution will resume, because this form of break works with a label. In short, break and continue give you the benefits of goto without its drawbacks.
The general syntax of labelled break is:
break label;
The same syntax works for continue. Here, label names a block of code; it can be any identifier, but not a JavaScript keyword.
Example of code using goto:

var number = 0;
start_position:
document.write("Anything you want to print");
number++;
if (number < 100) goto start_position;

Note: This is not valid JavaScript, just an example of where you might want a goto statement.
Now this will be achieved in JavaScript as follows:
var number = 0;
start_position: while (true) {
document.write("Anything you want to print");
number++;
    if (number < 100) continue start_position;
break;
}
Here, continue and break are both used with a label to shift control to different parts of the program. They can be used in loops to pass control elsewhere, work well after checking certain conditions, and can be applied to many more logic statements. Now, if we want to get out of the loop on a certain condition, we can use the break keyword.
Take the above example and add break into it for a certain condition.
var i;
for (i = 1; i <= 10; i++) {
document.write(i);
if (i === 9) {
break;
}
}
document.write("<br>Learnt something new");
Output:
123456789
Here we just used the break keyword to get out of the loop.
Now again take the above example and add continue statement.
var i;
for(i=1;i<=10;i++){
if (i===4 || i===2) {
continue;
}
document.write(i);
if(i===6){
break;
}
}
document.write("Learnt something new");
Output:
1356
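For comparison, the same control flow can be expressed in Python, which also lacks a goto statement. This is a small illustrative sketch, not part of the original JavaScript examples.

```python
# Mirror of the JavaScript loop above: continue skips 2 and 4,
# break exits the loop once i reaches 6.
out = ""
for i in range(1, 11):
    if i in (2, 4):
        continue
    out += str(i)
    if i == 6:
        break
print(out)  # 1356
```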
Thus, the same results can be achieved with break or continue, and both can be used in place of goto in JavaScript.
Example:
<html>
<body>
    <h2>JavaScript break</h2>
    <p id="demo"></p>
    <script>
        var cars = ["BMW", "Volvo", "Maruti", "Honda"];
        var text = "";
        list: {
            text += cars[0] + "<br>";
            text += cars[1] + "<br>";
            break list;
            text += cars[2] + "<br>";
            text += cars[3] + "<br>";
        }
        document.getElementById("demo").innerHTML = text;
    </script>
</body>
</html>
Output:
BMW
Volvo
Another program for Continue is given below.
<html>
<body>
    <h2>JavaScript Loops</h2>
    <p>A loop with a <b>continue</b> statement.</p>
    <p>A loop which will skip the step where i = 3.</p>
    <p id="demo"></p>
    <script>
        var text = "";
        var i;
        for (i = 0; i < 10; i++) {
            if (i === 3) {
                continue;
            }
            text += "The number is " + i + "<br>";
        }
        document.getElementById("demo").innerHTML = text;
    </script>
</body>
</html>
Output:
The number is 0
The number is 1
The number is 2
The number is 4
The number is 5
The number is 6
The number is 7
The number is 8
The number is 9
get_window_size driver method – Selenium Python
03 Dec, 2020
Selenium’s Python Module is built to perform automated testing with Python. Selenium Python bindings provides a simple API to write functional/acceptance tests using Selenium WebDriver. To open a webpage using Selenium Python, checkout – Navigating links using get method – Selenium Python. Just being able to go to places isn’t terribly useful. What we’d really like to do is to interact with the pages, or, more specifically, the HTML elements within a page. There are multiple strategies to find an element using Selenium, checkout – Locating Strategies. Selenium WebDriver offers various useful methods to control the session, or in other words, browser. For example, adding a cookie, pressing back button, navigating among tabs, etc.
This article revolves around get_window_size driver method in Selenium. get_window_size method gets the width and height of the current window.
Syntax –
driver.get_window_size()
Example – Now one can use get_window_size method as a driver method as below –
driver.get("https://www.geeksforgeeks.org/")
driver.get_window_size()
To demonstrate the get_window_size method of WebDriver in Selenium Python, let’s visit https://www.geeksforgeeks.org/ and operate on the driver object to get the window size.
Program –
Python3
# import webdriver
from selenium import webdriver

# create webdriver object
driver = webdriver.Firefox()

# get geeksforgeeks.org
driver.get("https://www.geeksforgeeks.org/")

# get window size
print(driver.get_window_size())
Output (printed to the terminal) – a dictionary containing the window dimensions, for example: {'width': 1366, 'height': 768}
SASS | Interpolation
11 Oct, 2019
Interpolation is basically insertion: it allows us to embed SASS expressions into plain SASS or CSS code. This means you can define (some part or the whole of) a selector name, property name, CSS at-rule, quoted or unquoted string, etc. as a variable. Interpolation is widely used in SASS.
To interpolate an expression we need to wrap the expression using #{ }.
Syntax:
......#{$variable_name}........
where ..... represents some text.
See the example below for a better understanding.
SASS file:
@mixin interpolation($changeable, $val, $val2, $prop1, $prop2)
{
background-#{$changeable}: $val;
position: $val2;
#{$prop1}: 0px;
#{$prop2}: 0px;
}
.blockarea{
@include interpolation("image", url("img.png"), absolute, top, right);
}
.anotherbloakarea{
@include interpolation("color", lightgray, absolute, top, left);
}
Compiled CSS file:
.blockarea {
background-image: url("img.png");
position: absolute;
top: 0px;
right: 0px;
}
.anotherbloakarea {
background-color: lightgray;
position: absolute;
top: 0px;
left: 0px;
}
Interpolation in SASS expressions always returns an unquoted string, no matter whether the string is quoted or not.
Uses of Interpolation:
To use dynamically created names as a property name, a variable name or for any other same purposes.
To create a very reusable code; where you can define a property name, strings, selector names etc, as a variable.
Flutter – Adding 3D Objects
17 Feb, 2021
3D objects have three dimensions: length, width, and depth. They provide a great user experience when used for various purposes, and adding this kind of visualization can make your app more helpful and attractive to users.
So today we will be building a simple flutter based app to demonstrate how you can add 3D objects to your app project.
Step 1: Creating a new flutter application project and adding necessary dependencies
Open VS Code, press “Ctrl+Shift+P” and select “Flutter: New Application Project”
Select the folder where you want to add this flutter project to or create a new one
Then after selecting the folder, give a name to your project and hit “Enter”
Flutter will create a new project for you, then on the left bottom side click on the “pubspec.yaml” file
Add the following dependencies, which includes the flutter cube package for adding the 3D objects to your project
dependencies:
flutter:
sdk: flutter
flutter_cube: ^0.0.6
Step 2: Creating the assets folder and adding the required assets.
On the left side look for the “New Folder” option, add a new folder and name it to “assets“
Right-click on the folder and click on “Reveal in File Explorer”.
Go to this link, download the folders, or you can choose your favorite 3D objects from here or from any other website which provides 3D models.
Copy these folders to the assets folder, open the “pubspec.yaml” file again, and add the following to the “flutter” section
flutter:
uses-material-design: true
assets:
- assets/Astronaut/
- assets/material/
- assets/earth/
Step 3: Dart code for adding the 3D objects.
This is the code for the “main.dart” file in the “lib” folder
Dart
import 'package:flutter/material.dart';

import 'home_page.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter 3D',
      debugShowCheckedModeBanner: false,
      theme: ThemeData(
        primarySwatch: Colors.blue,
        visualDensity: VisualDensity.adaptivePlatformDensity,
      ),
      home: HomePage(),
    );
  }
}
Step 4: Adding the Home page code to our project
Right-click on the “lib” folder, add the new file and name it “home_page.dart“
The following is the code for the “home_page.dart” file
Dart
// Dart Program to add 3D objects to your project

// importing material.dart
import 'package:flutter/material.dart';

// importing flutter cube package
import 'package:flutter_cube/flutter_cube.dart';

// creating class of stateful widget
class HomePage extends StatefulWidget {
  @override
  _HomePageState createState() => _HomePageState();
}

class _HomePageState extends State<HomePage> {
  // adding necessary objects
  Object earth;
  Object astro;
  Object material;

  @override
  void initState() {
    // assigning name to the objects and providing the
    // object's file path (obj file)
    earth = Object(fileName: "assets/earth/earth_ball.obj");
    astro = Object(fileName: "assets/Astronaut/Astronaut.obj");
    material = Object(fileName: "assets/material/model.obj");
    super.initState();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      extendBodyBehindAppBar: true,
      // creating appbar
      appBar: AppBar(
        centerTitle: true,
        title: Text(
          "3D Objects in Flutter",
          style: TextStyle(
              color: Colors.greenAccent,
              fontWeight: FontWeight.bold,
              fontSize: 25),
        ),
        backgroundColor: Colors.transparent,
        elevation: 0.0,
      ),
      body: Container(
        // providing linear gradient to the
        // background with two colours
        decoration: BoxDecoration(
            gradient: LinearGradient(
                colors: [Colors.blueAccent, Colors.greenAccent],
                begin: Alignment.topLeft,
                end: Alignment.bottomRight)),
        child: Column(
          children: [
            Expanded(
              // adding the cube function to
              // create the scene of our object
              child: Cube(
                onSceneCreated: (Scene scene) {
                  scene.world.add(material);
                  scene.camera.zoom = 10;
                },
              ),
            ),
            // adding the earth object
            Expanded(
              child: Cube(
                onSceneCreated: (Scene scene) {
                  scene.world.add(earth);
                  scene.camera.zoom = 10;
                },
              ),
            ),
            // adding the astro object
            Expanded(
              child: Cube(
                onSceneCreated: (Scene scene) {
                  scene.world.add(astro);
                  scene.camera.zoom = 10;
                },
              ),
            ),
          ],
        ),
      ),
    );
  }
}
Step 5: Adding a new device and running the project
Add a new device to your project like any android mobile emulator, real device, or chrome(web)
After that press “Ctrl + F5” or go to “Run”>”Run Without Debugging” and see the output on your connected device
Output: the running app renders the three 3D models (the downloaded model, the earth, and the astronaut), each in its own section of the screen.
Convert a Set of String to a comma separated String in Java - GeeksforGeeks
11 Dec, 2018
Given a Set of String, the task is to convert the Set to a comma separated String in Java.
Examples:
Input: Set<String> = ["Geeks", "ForGeeks", "GeeksForGeeks"]
Output: "Geeks, ForGeeks, GeeksForGeeks" (the order may vary, since a Set is unordered)
Input: Set<String> = ["G", "e", "e", "k", "s"]
Output: "G, e, e, k, s"
Approach: This can be achieved with the help of join() method of String as follows.
Get the Set of String.
Form a comma separated String from the Set of String using join() method by passing comma ‘, ‘ and the set as parameters.
Print the String.
Below is the implementation of the above approach:
// Java program to convert Set of String
// to comma separated String

import java.util.*;

public class GFG {
    public static void main(String args[])
    {
        // Get the Set of String
        Set<String> set = new HashSet<>(
            Arrays.asList("Geeks", "ForGeeks", "GeeksForGeeks"));

        // Print the Set of String
        System.out.println("Set of String: " + set);

        // Convert the Set of String to String
        String string = String.join(", ", set);

        // Print the comma separated String
        System.out.println("Comma separated String: " + string);
    }
}
Set of String: [ForGeeks, Geeks, GeeksForGeeks]
Comma separated String: ForGeeks, Geeks, GeeksForGeeks
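For comparison, the same operation in Python uses str.join. This is an illustrative aside, not part of the Java article; since a set is unordered, the sketch sorts first so the result is deterministic.

```python
# Join a set of strings into a comma separated string.
s = {"Geeks", "ForGeeks", "GeeksForGeeks"}
joined = ", ".join(sorted(s))
print(joined)  # ForGeeks, Geeks, GeeksForGeeks
```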
VB.Net - Windows File System

VB.Net allows you to work with directories and files using various directory- and file-related classes, such as the DirectoryInfo class and the FileInfo class.
The DirectoryInfo class is derived from the FileSystemInfo class. It has various methods for creating, moving, and browsing through directories and subdirectories. This class cannot be inherited.
Following are some commonly used properties of the DirectoryInfo class −
Attributes
Gets the attributes for the current file or directory.
CreationTime
Gets the creation time of the current file or directory.
Exists
Gets a Boolean value indicating whether the directory exists.
Extension
Gets the string representing the file extension.
FullName
Gets the full path of the directory or file.
LastAccessTime
Gets the time the current file or directory was last accessed.
Name
Gets the name of this DirectoryInfo instance.
Following are some commonly used methods of the DirectoryInfo class −
Public Sub Create
Creates a directory.
Public Function CreateSubdirectory (path As String ) As DirectoryInfo
Creates a subdirectory or subdirectories on the specified path. The specified path can be relative to this instance of the DirectoryInfo class.
Public Overrides Sub Delete
Deletes this DirectoryInfo if it is empty.
Public Function GetDirectories As DirectoryInfo()
Returns the subdirectories of the current directory.
Public Function GetFiles As FileInfo()
Returns a file list from the current directory.
For a complete list of properties and methods, please visit Microsoft's documentation.
The FileInfo class is derived from the FileSystemInfo class. It has properties and instance methods for creating, copying, deleting, moving, and opening of files, and helps in the creation of FileStream objects. This class cannot be inherited.
Following are some commonly used properties of the FileInfo class −
Attributes
Gets the attributes for the current file.
CreationTime
Gets the creation time of the current file.
Directory
Gets an instance of the directory, which the file belongs to.
Exists
Gets a Boolean value indicating whether the file exists.
Extension
Gets the string representing the file extension.
FullName
Gets the full path of the file.
LastAccessTime
Gets the time the current file was last accessed.
LastWriteTime
Gets the time of the last written activity of the file.
Length
Gets the size, in bytes, of the current file.
Name
Gets the name of the file.
Following are some commonly used methods of the FileInfo class −
Public Function AppendText As StreamWriter
Creates a StreamWriter that appends text to the file represented by this instance of the FileInfo.
Public Function Create As FileStream
Creates a file.
Public Overrides Sub Delete
Deletes a file permanently.
Public Sub MoveTo (destFileName As String )
Moves a specified file to a new location, providing the option to specify a new file name.
Public Function Open (mode As FileMode) As FileStream
Opens a file in the specified mode.
Public Function Open (mode As FileMode, access As FileAccess ) As FileStream
Opens a file in the specified mode with read, write, or read/write access.
Public Function Open (mode As FileMode, access As FileAccess, share As FileShare ) As FileStream
Opens a file in the specified mode with read, write, or read/write access and the specified sharing option.
Public Function OpenRead As FileStream
Creates a read-only FileStream
Public Function OpenWrite As FileStream
Creates a write-only FileStream.
For a complete list of properties and methods, please visit Microsoft's documentation.
The following example demonstrates the use of the above-mentioned classes −
Imports System.IO
Module fileProg
Sub Main()
'creating a DirectoryInfo object
Dim mydir As DirectoryInfo = New DirectoryInfo("c:\Windows")
' getting the files in the directory, their names and size
Dim f As FileInfo() = mydir.GetFiles()
Dim file As FileInfo
For Each file In f
Console.WriteLine("File Name: {0} Size: {1} ", file.Name, file.Length)
Next file
Console.ReadKey()
End Sub
End Module
When you compile and run the program, it displays the names of files and their size in the Windows directory.
How to compare two tensors in PyTorch?

To compare two tensors element-wise in PyTorch, we use the torch.eq() method. It compares the corresponding elements and returns "True" if the two elements are the same, else it returns "False". We can compare two tensors with the same or different dimensions, but the size of both tensors must match at each non-singleton dimension.
Import the required library. In all the following Python examples, the required Python library is torch. Make sure you have already installed it.
Create a PyTorch tensor and print it.
Compute torch.eq(input1, input2). It returns a tensor of "True" and/or "False" values: the tensors are compared element-wise, and True is returned where the corresponding elements are equal, else False.
Print the returned tensor.
The following Python program shows how to compare two 1-D tensors
element-wise.
# import necessary library
import torch
# Create two tensors
T1 = torch.Tensor([2.4,5.4,-3.44,-5.43,43.5])
T2 = torch.Tensor([2.4,5.5,-3.44,-5.43, 43])
# print above created tensors
print("T1:", T1)
print("T2:", T2)
# Compare tensors T1 and T2 element-wise
print(torch.eq(T1, T2))
T1: tensor([ 2.4000, 5.4000, -3.4400, -5.4300, 43.5000])
T2: tensor([ 2.4000, 5.5000, -3.4400, -5.4300, 43.0000])
tensor([ True, False, True, True, False])
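The element-wise rule the example follows can be sketched in plain Python, without PyTorch installed. This is illustrative only: torch.eq itself returns a tensor and also supports broadcasting, which this sketch omits.

```python
def elementwise_eq(a, b):
    """Sketch of torch.eq for two equal-length 1-D sequences:
    compare corresponding elements and return a list of booleans."""
    if len(a) != len(b):
        raise ValueError("sizes must match at non-singleton dimension")
    return [x == y for x, y in zip(a, b)]

# Same values as the torch example above
t1 = [2.4, 5.4, -3.44, -5.43, 43.5]
t2 = [2.4, 5.5, -3.44, -5.43, 43.0]
result = elementwise_eq(t1, t2)
print(result)  # [True, False, True, True, False]
```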
The following Python program shows how to compare two 2-D tensors
element-wise.
# import necessary library
import torch
# create two 4x3 2D tensors
T1 = torch.Tensor([[2,3,-32],
[43,4,-53],
[4,37,-4],
[3,75,34]])
T2 = torch.Tensor([[2,3,-32],
[4,4,-53],
[4,37,4],
[3,-75,34]])
# print above created tensors
print("T1:", T1)
print("T2:", T2)
# Conpare tensors T1 and T2 element-wise
print(torch.eq(T1, T2))
T1: tensor([[ 2., 3., -32.],
[ 43., 4., -53.],
[ 4., 37., -4.],
[ 3., 75., 34.]])
T2: tensor([[ 2., 3., -32.],
[ 4., 4., -53.],
[ 4., 37., 4.],
[ 3., -75., 34.]])
tensor([[ True, True, True],
[False, True, True],
[ True, True, False],
[ True, False, True]])
The following Python program shows how to compare a 1-D tensor with a 2-D
tensor element-wise.
# import necessary library
import torch
# Create two tensors
T1 = torch.Tensor([2.4,5.4,-3.44,-5.43,43.5])
T2 = torch.Tensor([[2.4,5.5,-3.44,-5.43, 7],
[1.0,5.4,3.88,4.0,5.78]])
# Print above created tensors
print("T1:", T1)
print("T2:", T2)
# Compare the tensors T1 and T2 element-wise
print(torch.eq(T1, T2))
T1: tensor([ 2.4000, 5.4000, -3.4400, -5.4300, 43.5000])
T2: tensor([[ 2.4000, 5.5000, -3.4400, -5.4300, 7.0000],
[ 1.0000, 5.4000, 3.8800, 4.0000, 5.7800]])
tensor([[ True, False, True, True, False],
[False, True, False, False, False]])
Get the fully-qualified name of a class in Java

A fully-qualified class name in Java contains the package that the class originated from. An example of this is java.util.ArrayList. The fully-qualified class name can be obtained using the getName() method.
A program that demonstrates this is given as follows −
public class Demo {
public static void main(String[] argv) throws Exception {
Class c = java.util.ArrayList.class;
String className = c.getName();
System.out.println("The fully-qualified name of the class is: " + className);
}
}
The fully-qualified name of the class is: java.util.ArrayList
Now let us understand the above program.
The getName() method is used to get the fully-qualified name of the class c which is stored in className. Then this is displayed. A code snippet which demonstrates this is as follows −
Class c = java.util.ArrayList.class;
String className = c.getName();
System.out.println("The fully-qualified name of the class is: " + className);
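As an aside, Python exposes the same idea through a class's __module__ and __qualname__ attributes. The helper below is an illustrative sketch, not part of the Java article.

```python
from collections import OrderedDict

def fully_qualified_name(cls):
    """Python analogue of Java's Class.getName(): the defining module
    plus the class's qualified name."""
    return cls.__module__ + "." + cls.__qualname__

name = fully_qualified_name(OrderedDict)
print(name)  # collections.OrderedDict
```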
Difference between Schema and Database in MySQL?

In MySQL, schema is synonymous with database. Just as a query can be written to create a database, a query can be written to create a schema.
A schema uses a logical structure to store data, while a database uses memory components to store data. Also, a schema is a collection of tables, while a database is a collection of schemas.
To clarify this concept, a database and a schema are created. The steps for this are as follows −
First, a database is created with the following syntax −
create database yourDatabaseName;
The above syntax is used in a query as follows −
mysql> create database DatabaseSample;
Query OK, 1 row affected (0.14 sec)
The syntax to create a schema is as follows −
create schema yourSchemaName;
The above syntax is used in a query as follows −
mysql> create schema SchemaSample;
Query OK, 1 row affected (0.19 sec)
Now both the database and the schema have been created. To display the database and the schema as well, the show command is used. The query for that is as follows −
mysql> show databases;
The following is the output of the above query
+--------------------+
| Database |
+--------------------+
| business |
| databasesample |
| hello |
| information_schema |
| mybusiness |
| mysql |
| performance_schema |
| sample |
| schemasample |
| sys |
| test |
+--------------------+
11 rows in set (0.07 sec)
In the Oracle database, the schema can be used to represent a part of the database.
Evolution of a salesman: A complete genetic algorithm tutorial for Python
by Eric Stoltz | Towards Data Science

Drawing inspiration from natural selection, genetic algorithms (GA) are a fascinating approach to solving search and optimization problems. While much has been written about GA (see: here and here), little has been done to show a step-by-step implementation of a GA in Python for more sophisticated problems. That’s where this tutorial comes in! Follow along and, by the end, you’ll have a complete understanding of how to deploy a GA from scratch.
In this tutorial, we’ll be using a GA to find a solution to the traveling salesman problem (TSP). The TSP is described as follows:
“Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?”
Given this, there are two important rules to keep in mind:
1. Each city needs to be visited exactly one time
2. We must return to the starting city, so our total distance needs to be calculated accordingly
Let’s start with a few definitions, rephrased in the context of the TSP:
Gene: a city (represented as (x, y) coordinates)
Individual (aka “chromosome”): a single route satisfying the conditions above
Population: a collection of possible routes (i.e., collection of individuals)
Parents: two routes that are combined to create a new route
Mating pool: a collection of parents that are used to create our next population (thus creating the next generation of routes)
Fitness: a function that tells us how good each route is (in our case, how short the distance is)
Mutation: a way to introduce variation in our population by randomly swapping two cities in a route
Elitism: a way to carry the best individuals into the next generation
Our GA will proceed in the following steps:
1. Create the population
2. Determine fitness
3. Select the mating pool
4. Breed
5. Mutate
6. Repeat
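The six steps above can be sketched end-to-end on a toy problem (maximizing the number of 1-bits in a list) before building the TSP version. All names here are illustrative stand-ins, not the tutorial's final functions.

```python
import random

def fitness(ind):
    # toy fitness: count of 1-bits
    return sum(ind)

def create_population(size, n_bits):
    return [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(size)]

def select_mating_pool(population, elite_size):
    # rank by fitness, carry the elite, then draw the rest
    # with fitness-weighted probability (+1 so zero fitness can still be drawn)
    ranked = sorted(population, key=fitness, reverse=True)
    weights = [fitness(ind) + 1 for ind in ranked]
    rest = random.choices(ranked, weights=weights,
                          k=len(population) - elite_size)
    return ranked[:elite_size] + rest

def breed(p1, p2):
    # single-point crossover (fine here; the TSP needs a special operator)
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.02):
    return [1 - g if random.random() < rate else g for g in ind]

random.seed(0)
population = create_population(20, 10)            # 1. create the population
for _ in range(30):                               # 6. repeat
    pool = select_mating_pool(population, 2)      # 2.-3. fitness and mating pool
    children = [mutate(breed(*random.sample(pool, 2)))  # 4.-5. breed and mutate
                for _ in range(len(population) - 2)]
    population = pool[:2] + children              # elitism: keep the top 2
best = max(population, key=fitness)
print(fitness(best))
```

Note that plain single-point crossover would break Rule #1 for the TSP (cities could repeat), which is why the tutorial later uses a specialized crossover.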
Now, let’s see this in action.
While each part of our GA is built from scratch, we’ll use a few standard packages to make things easier:
import numpy as np, random, operator, pandas as pd, matplotlib.pyplot as plt
We first create a City class that will allow us to create and handle our cities. These are simply our (x, y) coordinates. Within the City class, we add a distance calculation (making use of the Pythagorean theorem) in line 6 and a cleaner way to output the cities as coordinates with __repr__ in line 12.
We’ll also create a Fitness class. In our case, we’ll treat the fitness as the inverse of the route distance. We want to minimize route distance, so a larger fitness score is better. Based on Rule #2, we need to start and end at the same place, so this extra calculation is accounted for in line 13 of the distance calculation.
We now can make our initial population (aka first generation). To do so, we need a way to create a function that produces routes that satisfy our conditions (Note: we’ll create our list of cities when we actually run the GA at the end of the tutorial). To create an individual, we randomly select the order in which we visit each city:
This produces one individual, but we want a full population, so let’s do that in our next function. This is as simple as looping through the createRoute function until we have as many routes as we want for our population.
Note: we only have to use these functions to create the initial population. Subsequent generations will be produced through breeding and mutation.
Next, the evolutionary fun begins. To simulate our “survival of the fittest”, we can make use of Fitness to rank each individual in the population. Our output will be an ordered list with the route IDs and each associated fitness score.
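A possible sketch of the ranking step follows. The original calls something like `Fitness(route).routeFitness()`; here a `fitnessFn` parameter stands in for that call, which is an assumption made to keep the sketch self-contained:

```python
import operator

# Rank route IDs by fitness, best first; returns a list of (ID, fitness) pairs
def rankRoutes(population, fitnessFn):
    fitnessResults = {i: fitnessFn(population[i]) for i in range(len(population))}
    return sorted(fitnessResults.items(), key=operator.itemgetter(1), reverse=True)
```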
There are a few options for how to select the parents that will be used to create the next generation. The most common approaches are either fitness proportionate selection (aka “roulette wheel selection”) or tournament selection:
Fitness proportionate selection (the version implemented below): The fitness of each individual relative to the population is used to assign a probability of selection. Think of this as the fitness-weighted probability of being selected.
Tournament selection: A set number of individuals are randomly selected from the population and the one with the highest fitness in the group is chosen as the first parent. This is repeated to choose the second parent.
Another design feature to consider is the use of elitism. With elitism, the best performing individuals from the population will automatically carry over to the next generation, ensuring that the most successful individuals persist.
For the purpose of clarity, we’ll create the mating pool in two steps. First, we’ll use the output from rankRoutes to determine which routes to select in our selection function. In lines 3–5, we set up the roulette wheel by calculating a relative fitness weight for each individual. In line 9, we compare a randomly drawn number to these weights to select our mating pool. We’ll also want to hold on to our best routes, so we introduce elitism in line 7. Ultimately, the selection function returns a list of route IDs, which we can use to create the mating pool in the matingPool function.
Now that we have the IDs of the routes that will make up our mating pool from the selection function, we can create the mating pool. We’re simply extracting the selected individuals from our population.
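The selection code itself is not embedded in this extract. Below is a plain-Python sketch of fitness-proportionate selection with elitism (the original reportedly builds the roulette wheel from cumulative relative fitness weights; this simplified version is an assumption). `popRanked` is the output of the ranking step: a list of (route ID, fitness) pairs, best first.

```python
import random

def selection(popRanked, eliteSize):
    # elitism: the best eliteSize route IDs carry over automatically
    selectionResults = [popRanked[i][0] for i in range(eliteSize)]
    fitnessSum = sum(fitness for _, fitness in popRanked)
    for _ in range(len(popRanked) - eliteSize):
        pick = random.uniform(0, fitnessSum)  # spin the roulette wheel
        running = 0
        for routeId, fitness in popRanked:
            running += fitness
            if running >= pick:
                selectionResults.append(routeId)
                break
    return selectionResults

def matingPool(population, selectionResults):
    # extract the selected individuals from the population
    return [population[i] for i in selectionResults]
```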
With our mating pool created, we can create the next generation in a process called crossover (aka “breeding”). If our individuals were strings of 0s and 1s and our two rules didn’t apply (e.g., imagine we were deciding whether or not to include a stock in a portfolio), we could simply pick a crossover point and splice the two strings together to produce an offspring.
However, the TSP is unique in that we need to include all locations exactly one time. To abide by this rule, we can use a special breeding function called ordered crossover. In ordered crossover, we randomly select a subset of the first parent string (see line 12 in breed function below) and then fill the remainder of the route with the genes from the second parent in the order in which they appear, without duplicating any genes in the selected subset from the first parent (see line 15 in breed function below).
Next, we’ll generalize this to create our offspring population. In line 5, we use elitism to retain the best routes from the current population. Then, in line 8, we use the breed function to fill out the rest of the next generation.
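Neither breeding function appears in this extract; the following is a hedged sketch of ordered crossover and its population-level wrapper (the parent-pairing scheme and other details are assumptions):

```python
import random

def breed(parent1, parent2):
    # pick a random slice of parent1 (the selected subset)
    geneA = random.randint(0, len(parent1) - 1)
    geneB = random.randint(0, len(parent1) - 1)
    start, end = min(geneA, geneB), max(geneA, geneB)

    childP1 = parent1[start:end]
    # fill the remainder with parent2's genes in order, skipping duplicates
    childP2 = [gene for gene in parent2 if gene not in childP1]
    return childP2[:start] + childP1 + childP2[start:]

def breedPopulation(matingpool, eliteSize):
    # elitism: retain the best routes unchanged
    children = matingpool[:eliteSize]
    pool = random.sample(matingpool, len(matingpool))
    # fill out the rest of the next generation by breeding
    for i in range(len(matingpool) - eliteSize):
        children.append(breed(pool[i], pool[len(matingpool) - i - 1]))
    return children
```

Note that every child is still a permutation of the cities, so both TSP rules remain satisfied.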
Mutation serves an important function in GA, as it helps to avoid local convergence by introducing novel routes that will allow us to explore other parts of the solution space. Similar to crossover, the TSP has a special consideration when it comes to mutation. Again, if we had a chromosome of 0s and 1s, mutation would simply mean assigning a low probability of a gene changing from 0 to 1, or vice versa (to continue the example from before, a stock that was included in the offspring portfolio is now excluded).
However, since we need to abide by our rules, we can’t drop cities. Instead, we’ll use swap mutation. This means that, with specified low probability, two cities will swap places in our route. We’ll do this for one individual in our mutate function:
Next, we can extend the mutate function to run through the new population.
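The mutation gist is likewise not embedded; here is a sketch of swap mutation and its population wrapper, assuming `mutationRate` is a per-city swap probability:

```python
import random

def mutate(individual, mutationRate):
    # with low probability, swap each city with another random position
    for swapped in range(len(individual)):
        if random.random() < mutationRate:
            swapWith = random.randint(0, len(individual) - 1)
            individual[swapped], individual[swapWith] = (
                individual[swapWith], individual[swapped])
    return individual

def mutatePopulation(population, mutationRate):
    # run swap mutation through the whole new population
    return [mutate(individual, mutationRate) for individual in population]
```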
We’re almost there. Let’s pull these pieces together to create a function that produces a new generation. First, we rank the routes in the current generation using rankRoutes. We then determine our potential parents by running the selection function, which allows us to create the mating pool using the matingPool function. Finally, we then create our new generation using the breedPopulation function and then applying mutation using the mutatePopulation function.
We finally have all the pieces in place to create our GA! All we need to do is create the initial population, and then we can loop through as many generations as we desire. Of course we also want to see the best route and how much we’ve improved, so we capture the initial distance in line 3 (remember, distance is the inverse of the fitness), the final distance in line 8, and the best route in line 9.
With everything in place, solving the TSP is as easy as two steps:
First, we need a list of cities to travel between. For this demonstration, we’ll create a list of 25 random cities (a seemingly small number of cities, but brute force would have to test over 300 sextillion routes!):
Then, running the genetic algorithm is one simple line of code. This is where art meets science; you should see which assumptions work best for you. In this example, we have 100 individuals in each generation, keep 20 elite individuals, use a 1% mutation rate for a given gene, and run through 500 generations:
It’s great to know our starting and ending distance and the proposed route, but we would be remiss not to see how our distance improved over time. With a simple tweak to our geneticAlgorithm function, we can store the shortest distance from each generation in a progress list and then plot the results.
Run the GA in the same way as before, but now using the newly created geneticAlgorithmPlot function:
I hope this was a fun, hands-on way to learn how to build your own GA. Try it for yourself and see how short of a route you can get. Or go further and try to implement a GA on another problem set; see how you would change the breed and mutate functions to handle other types of chromosomes. We’re just scratching the surface here!
You can find a consolidated notebook here.
How to submit a form using jQuery click event? | To submit a form using jQuery click event, you need to detect the submit event. Let’s submit a form using click event and without using click event.
You can try to run the following code to learn how to submit a form using jQuery click event −
Live Demo
<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
$(function() {
$('#submit1').click(function(e) {
e.preventDefault();
$("#Demo").submit();
});
$('#submit2').click(function(e) {
e.preventDefault();
$("#Demo").submit();
});
});
});
</script>
</head>
<body>
<form action="javascript:alert('submitted');" method="post" id="Demo">
<label>Team</label>
<input type="text" name="name" value="" />
<input type="submit" name="submitme" value="Submit" id="submit1" />
<p><a href="#" id="submit2">Submit</a></p>
</form>
</body>
</html>
Overuse of lambda expressions in Python - GeeksforGeeks | 04 Mar, 2022
What are lambda expressions? A lambda expression is a special syntax to create functions without names. These functions are called lambda functions. These lambda functions can have any number of arguments but only one expression, along with an implicit return statement. Lambda expressions return function objects. The general syntax is:
lambda (arguments) : (expression)
For example, the expression lambda x, y: x + y defines an unnamed function which accepts two arguments and returns their sum. But how do we call an unnamed function? The above lambda function can be called as:
(lambda x, y: x + y)(1, 2)
Code 1:
Python3
# Python program showing use of lambda functions

# performing an addition of three numbers
x1 = (lambda x, y, z: (x + y) * z)(1, 2, 3)
print(x1)

# lambda using a conditional expression
x2 = (lambda x, y, z: (x + y) if (z == 0) else (x * y))(1, 2, 3)
print(x2)
Output:
9
2
Though it is not encouraged, the function object returned by a lambda expression can be assigned to a variable. See the example below in which a variable sum is assigned a function object returned by a lambda expression.
Python3
# Python program showing a variable
# storing a lambda expression

# assigned to a variable
sum = lambda x, y: x + y
print(type(sum))

x1 = sum(4, 7)
print(x1)
Output:
<class 'function'>
11
Common uses of lambda expressions :
Since lambda functions are anonymous and do not require a name to be assigned, they are usually used to call functions(or classes) which require a function object as an argument. Defining separate functions for such function arguments is of no use because, the function definition is usually short and they are required only once or twice in the code. For example, the key argument of the inbuilt function, sorted().
Python3
# Python program showing use of a normal function
def Key(x):
    return x % 2

nums = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
sort = sorted(nums, key = Key)
print(sort)
Output:
[0, 2, 4, 6, 8, 1, 3, 5, 7, 9]
Python3
# Python program showing use of a lambda function

nums = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
sort_lambda = sorted(nums, key = lambda x: x % 2)
print(sort_lambda)
Output:
[0, 2, 4, 6, 8, 1, 3, 5, 7, 9]
Lambda functions are inline functions and hence they are used whenever there is a need of repetitive function calls to reduce execution time. Some of the examples of such scenarios are the functions: map(), filter() and sorted(). For example,
Python3
# Python program showing a use of lambda functions

nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]

# using map() function
squares = map(lambda x: x * x, nums)
print(list(squares))

# using filter() function
evens = filter(lambda x: True if (x % 2 == 0) else False, nums)
print(list(evens))
Pros and Cons of lambda functions:
Pros of lambda functions:
Being anonymous, lambda functions can be easily passed without being assigned to a variable.
Lambda functions are inline functions and thus execute comparatively faster.
Many times lambda functions make code much more readable by avoiding the logical jumps caused by function calls. For example, read the following blocks of code.
Python3
# Python program performing an operation using def()
def fun(x, y, z):
    return x*y + z

a = 1
b = 2
c = 3

# logical jump
d = fun(a, b, c)
print(d)
Output:
5
Python3
# Python program performing the same operation using lambda

d = (lambda x, y, z: x*y + z)(1, 2, 3)
print(d)
Output:
5
Cons of lambda functions:
Lambda functions can have only one expression.
Lambda functions cannot have a docstring.
Many times lambda functions make code difficult to read. For example, see the blocks of code given below.
Python3
def func(x):
    if x == 1:
        return "one"
    elif x == 2:
        return "two"
    elif x == 3:
        return "three"
    else:
        return "ten"

num = func(3)
print(num)
Output:
three
Python3
# Python program showing use of lambda function
num = (lambda x: "one" if x == 1
       else ("two" if x == 2
             else ("three" if x == 3 else "ten")))(3)
print(num)
Output:
three
Misuse of Lambda expressions :
Assignment of lambda expressions : The official python style guide PEP8, strongly discourages the assignment of lambda expressions as shown in the example below.
func = lambda x, y, z: x*y + z
Instead, it is recommended to write a one-liner function as,
def func(x, y, z): return x*y + z
Assigning a lambda to a name defeats the main point of lambda functions, their anonymity. More practically, using def gives the function a real name, which makes tracebacks far more useful during debugging. Run the code below to see the difference.
Python3
func = lambda x, y, z: x * y + z
print(func)

def func(x, y, z):
    return x * y + z

print(func)
Wrapping lambda expressions around functions : Many times, lambda expressions are needlessly wrapped around functions as shown below.
nums = [-2, -1, 0, 1, 2]
sort = sorted(nums, key=lambda x: abs(x))
While the above syntax is absolutely correct, programmers must understand that all functions in python can be passed as function objects. Hence the same code can(and should) be written as,
sort = sorted(nums, key=abs)
Passing functions unnecessarily : Many times, programmers pass functions which perform only a single operation. See the following code.
from functools import reduce

nums = [1, 2, 3, 4, 5]
summation = reduce(lambda x, y: x + y, nums)
The lambda function passed above performs only a single operation, adding the two arguments. The same result can be obtained by the using the built-in function sum, as shown below.
nums = [1, 2, 3, 4, 5]
summation = sum(nums)
Programmers should avoid using lambda expressions for common operations, because it is highly probable to have a built-in function providing the same results.
Overuse of lambda expressions :
Using lambda for non-trivial functions: Sometimes, simple functions can be non-trivial. See the code below.
Python3
details = [{'p':100, 'r':0.01, 'n':2, 't':4},
           {'p':150, 'r':0.04, 'n':1, 't':5},
           {'p':120, 'r':0.05, 'n':5, 't':2}]

sorted_details = sorted(details, key=lambda x: x['p']*((1 + x['r']/x['n'])**(x['n']*x['t'])))
print(sorted_details)
Output:
[{‘n’: 2, ‘r’: 0.01, ‘t’: 4, ‘p’: 100}, {‘n’: 5, ‘r’: 0.05, ‘t’: 2, ‘p’: 120}, {‘n’: 1, ‘r’: 0.04, ‘t’: 5, ‘p’: 150}]
Here, we are sorting the dictionaries on the basis of the compound interest. Now, see the code written below, which uses def.
Python3
details = [{'p':100, 'r':0.01, 'n':2, 't':4},
           {'p':150, 'r':0.04, 'n':1, 't':5},
           {'p':120, 'r':0.05, 'n':5, 't':2}]

def CI(det):
    '''sort key: compound interest, P(1 + r/n)^(nt)'''
    return det['p']*((1 + det['r']/det['n'])**(det['n']*det['t']))

sorted_details = sorted(details, key=CI)
print(sorted_details)
Output:
[{‘n’: 2, ‘r’: 0.01, ‘t’: 4, ‘p’: 100}, {‘n’: 5, ‘r’: 0.05, ‘t’: 2, ‘p’: 120}, {‘n’: 1, ‘r’: 0.04, ‘t’: 5, ‘p’: 150}]
Though both codes do the same thing, the second one which uses def is much more readable. The expression written here under the lambda might be simple, but it has a meaning(formula for compound interest). Hence, the expression is non-trivial and deserves a name. Using lambda expressions for non-trivial functions reduces the readability of the code.
Using lambdas when multiple lines would help: If using a multiple-line function makes the code more readable, using lambda expressions to reduce some lines of code is not worth it. For example, see the code below.
people = [('sam', 'M', 18), ('susan', 'F', 22), ('joy', 'M', 21), ('lucy', 'F', 12)]

sorted_people = sorted(people, key=lambda x: x[1])
Also see the following code which uses def.
def Key(person):
    name, sex, age = person
    return sex

sorted_people = sorted(people, key=Key)
See how tuple unpacking in the second block of code makes it much more readable and logical. Readability of the code should be the utmost priority of a programmer working in a collaborative environment.
Using lambda expressions for map and filter : Lambdas are very commonly used with map() and filter() as shown.
Python3
nums = [0, 1, 2, 3, 4, 5]
mapped = map(lambda x: x * x, nums)
filtered = filter(lambda x: x % 2, nums)
print(list(mapped))
print(list(filtered))
Following is another block of code which uses generator expressions to achieve similar results.
Python3
nums = [0, 1, 2, 3, 4, 5]
mapped = (x * x for x in nums)
filtered = (x for x in nums if x % 2 == 1)
print(list(mapped))
print(list(filtered))
Unlike map() and filter(), generator expressions are general purpose features of python language. Thus generators enhance the readability of the code. While, map() and filter() require prior knowledge of these functions.
Use of higher order functions: The functions which accept other function objects as arguments are called higher order functions (e.g. map() and filter()), and they are common in functional programming. As stated above, lambda expressions are commonly used as the function arguments of higher order functions. Compare the two code blocks shown below.
Using the higher order function reduce():
from functools import reduce

nums = [1, 2, 3, 4, 5]
product = reduce(lambda x, y: x*y, nums, 1)
Without using high order function
nums = [1, 2, 3, 4, 5]
def multiply(nums):
    prod = 1
    for number in nums:
        prod *= number
    return prod

product = multiply(nums)
While the first block uses fewer lines of code and is not that difficult to understand, programmers with no background of functional programming will find the second block of code much readable. Unless practicing proper functional programming paradigm, passing one function to another function is not appreciated, as it can reduce readability.
How to install Jupyter Notebook on Windows? - GeeksforGeeks | 05 Oct, 2021
Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.
Jupyter has support for over 40 different programming languages and Python is one of them. Python is a requirement (Python 3.3 or greater, or Python 2.7) for installing the Jupyter Notebook itself.
(Embedded video: “Install & Setup Anaconda Python, Jupyter Notebook and Spyder on Windows 10 | GeeksforGeeks” — https://www.youtube.com/watch?v=FTkUcSicRIA)
Jupyter Notebook can be installed by using either of the two ways described below:
Using Anaconda: Install Python and Jupyter using the Anaconda Distribution, which includes Python, the Jupyter Notebook, and other commonly used packages for scientific computing and data science. To install Anaconda, go through How to install Anaconda on Windows? and follow the instructions provided.
Using PIP: Install Jupyter using the PIP package manager used to install and manage software packages/libraries written in Python. To install pip, go through How to install PIP on Windows? and follow the instructions provided.
Anaconda is an open-source software that contains Jupyter, spyder, etc that are used for large data processing, data analytics, heavy scientific computing. Anaconda works for R and python programming language. Spyder(sub-application of Anaconda) is used for python. Opencv for python will work in spyder. Package versions are managed by the package management system called conda.
To install Jupyter using Anaconda, just go through the following instructions:
Launch Anaconda Navigator:
Click on the Install Jupyter Notebook Button:
Beginning the Installation:
Loading Packages:
Finished Installation:
Launching Jupyter:
PIP is a package management system used to install and manage software packages/libraries written in Python. These files are stored in a large “on-line repository” termed as Python Package Index (PyPI).pip uses PyPI as the default source for packages and their dependencies.
To install Jupyter using pip, we need to first check if pip is updated in our system. Use the following command to update pip:
python -m pip install --upgrade pip
After updating the pip version, follow the instructions provided below to install Jupyter:
Command to install Jupyter:
python -m pip install jupyter
Beginning Installation:
Downloading Files and Data:
Installing Packages:
Finished Installation:
Launching Jupyter:
Use the following command to launch Jupyter using the command-line:
jupyter notebook
JavaScript Number toExponential() Method - GeeksforGeeks | 22 Dec, 2021
Below is the example of the Number toExponential() Method.
Example:
<script type="text/javascript">
   var num = 212.13456;
   document.write(num.toExponential(4));
</script>
Output:
2.1213e+2
The toExponential() method in JavaScript is used to convert a number to its exponential form. It returns a string representing the Number object in exponential notation.
Syntax:
number.toExponential(value)
The toExponential() method is used with a number as shown in the above syntax using the ‘.’ operator. This method will convert a number to its exponential form.
Parameters: This method accepts a single parameter value . It is an optional parameter and it represents the value specifying the number of digits after the decimal point.
Return Value: The toExponential() method in JavaScript returns a string representing the given number in exponential notation with one digit before the decimal point.
The below examples illustrates the working of the toExponential() method in JavaScript:
Passing a number as an argument in the toExponential() method. If a number is passed as an argument to the toExponential() method then it represents the number of digits after the decimal point.
Code #1:
<script type="text/javascript">
   var num = 2.13456;
   document.write(num.toExponential(2));
</script>
Output:
2.13e+0
Passing no parameter in the toExponential() method. Below program illustrates this:
Code #2:
<script type="text/javascript">
   var num = 2.13456;
   document.write(num.toExponential());
</script>
Output:
2.13456e+0
Passing a value which has more than 1 digit before the decimal point in the toExponential() method. Below program illustrates this:
Code #3:
<script type="text/javascript">
   var num = 212.13456;
   document.write(num.toExponential());
</script>
Output:
2.1213456e+2
Passing zero as a parameter in the toExponential() method. Below program illustrates this:
Code #4:
<script type="text/javascript">
   var num = 212.13456;
   document.write(num.toExponential(0));
</script>
Output:
2e+2
Exceptions:
Range Error: This exception is thrown when the value parameter passed is too small or too large. Values between 0 and 20, inclusive, will not cause a RangeError. If you want to pass larger or smaller values than specified by this range then you have to accordingly implement the toExponential() method.
Type Error: This exception is thrown when the toFixed() method is invoked on an object that is not of type number.
Supported Browsers:
Google Chrome 1 and above
Internet Explorer 5.5 and above
Firefox 1 and above
Apple Safari 2 and above
Opera 7 and above
How to enable the “disabled” attribute using jQuery on button click? | Set the disabled property to true in jQuery −
<input type="text" disabled=true id="showTxtBoxHide" value="My Name is David..">
Following is the code −
Live Demo
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
<script src="https://code.jquery.com/jquery-1.12.4.js"></script>
<script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
<body>
<input type="text" disabled=true id="showTxtBoxHide" value="My Name is David..">
<br>
<br>
<input type="button" id="change" value="ClickTheButtonToEnableTextBox">
</body>
<script>
$("#change").click(function (buttonEvent) {
buttonEvent.preventDefault();
$('#showTxtBoxHide').prop("disabled", false);
});
</script>
</html>
To run the above program, save the file with a name such as anyName.html (or index.html). Right-click on the file and select the option “Open with Live Server” in the VS Code editor −
The output is as follows −
The text box property is disabled. To enable, you need to click the button −
Output of Java Program | Set 20 (Inheritance) - GeeksforGeeks | 14 Aug, 2019
Prerequisite – Inheritance in Java
Predict the output of the following Java programs.
Program 1:
class A {
    public A(String s) {
        System.out.print("A");
    }
}

public class B extends A {
    public B(String s) {
        System.out.print("B");
    }

    public static void main(String[] args) {
        new B("C");
        System.out.println(" ");
    }
}
Output: Compilation fails
prog.java:12: error: constructor A in class A cannot be applied to given types;
{
^
required: String
found: no arguments
reason: actual and formal argument lists differ in length
1 error
Explanation: The implied super() call in B’s constructor cannot be satisfied because there isn’t a no-arg constructor in A. A default, no-arg constructor is generated by the compiler only if the class has no constructor defined explicitly. For detail, see Constructors in Java.
Program 2 :
class Clidder {
    private final void flipper() {
        System.out.println("Clidder");
    }
}

public class Clidlet extends Clidder {
    public final void flipper() {
        System.out.println("Clidlet");
    }

    public static void main(String[] args) {
        new Clidlet().flipper();
    }
}
Output:
Clidlet
Explanation: Although a final method cannot be overridden, in this case, the method is private, and therefore hidden. The effect is that a new, accessible, method flipper is created. Therefore, no polymorphism occurs in this example, the method invoked is simply that of the child class, and no error occurs.
Program 3 :
class Alpha {
    static String s = " ";
    protected Alpha() {
        s += "alpha ";
    }
}

class SubAlpha extends Alpha {
    private SubAlpha() {
        s += "sub ";
    }
}

public class SubSubAlpha extends Alpha {
    private SubSubAlpha() {
        s += "subsub ";
    }

    public static void main(String[] args) {
        new SubSubAlpha();
        System.out.println(s);
    }
}
Output:
alpha subsub
Explanation: SubSubAlpha extends Alpha! Since the code doesnt attempt to make a SubAlpha, the private constructor in SubAlpha is okay.
Program 4 :
public class Juggler extends Thread {
    public static void main(String[] args) {
        try {
            Thread t = new Thread(new Juggler());
            Thread t2 = new Thread(new Juggler());
        } catch (Exception e) {
            System.out.print("e ");
        }
    }

    public void run() {
        for (int i = 0; i < 2; i++) {
            try {
                Thread.sleep(500);
            } catch (Exception e) {
                System.out.print("e2 ");
            }
            System.out.print(Thread.currentThread().getName() + " ");
        }
    }
}
Output: No output
Explanation: In main(), the start() method was never called on t or t2, so run() never ran. For details, see Multithreading in Java.
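The difference a start() call makes can be sketched as follows (`StartDemo`, `demo`, and the `log` field are my own names; `StringBuffer` is used because it is synchronized):

```java
class StartDemo {
    static final StringBuffer log = new StringBuffer();

    static String demo() {
        Thread t = new Thread(() -> log.append("ran"));
        // Constructing a Thread schedules nothing; only start() creates the
        // new thread of execution and causes run() to be invoked on it.
        t.start();
        try {
            t.join(); // wait for the thread to finish before reading the log
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "ran"
    }
}
```

Had Program 4 called t.start() and t2.start(), each thread's run() would have printed its thread name twice.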
Program 5 :
class Grandparent {
    public void Print() {
        System.out.println("Grandparent's Print()");
    }
}

class Parent extends Grandparent {
    public void Print() {
        System.out.println("Parent's Print()");
    }
}

class Child extends Parent {
    public void Print() {
        super.super.Print(); // compiler error
        System.out.println("Child's Print()");
    }
}

public class Main {
    public static void main(String[] args) {
        Child c = new Child();
        c.Print();
    }
}
Output: Compiler error at super.super.Print()
Explanation: Java does not allow super.super. A class can only access Grandparent's members through Parent. See Inheritance in Java.
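There is no direct super.super syntax, but Parent can deliberately expose Grandparent's version through a helper method. This sketch uses my own names (`name`, `grandName`, and B-suffixed classes to avoid clashing with Program 5):

```java
class GrandparentB {
    public String name() { return "Grandparent"; }
}

class ParentB extends GrandparentB {
    public String name() { return "Parent"; }

    // Helper: only ParentB can reach GrandparentB's version via super
    protected String grandName() { return super.name(); }
}

class ChildB extends ParentB {
    // Effectively "super.super.name()", routed through the helper
    public String name() { return grandName(); }
}
```

`new ChildB().name()` returns "Grandparent", while `new ParentB().name()` still returns "Parent". The design choice is intentional: the grandparent class's behavior is reachable only where Parent explicitly opts in.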
Related Article: Quiz on Inheritance in Java
This article is contributed by Pavan Gopal Rayapati.