url
stringlengths
14
2.42k
text
stringlengths
100
1.02M
date
stringlengths
19
19
metadata
stringlengths
1.06k
1.1k
http://mathoverflow.net/questions/102488/is-there-a-security-analysis-of-the-gq-digital-signature-scheme
# Is there a security analysis of the GQ digital signature scheme? I'm doing summer cryptography research and I am have been looking for a security analysis of the Guillou-Quisquater (GQ) digital signature scheme, but I have been unable to find one. Since this is not a very common digital signature scheme I will mention the protocol. GQ: Public: $n,e,I$ has function $H$, where $I \equiv S^{e} \mod n$ Private: $s$ Signature: $(x,y)$ where $x \equiv r^{e} \mod n, c=h(m,x)$, and $y \equiv rS^{c} \mod n$ To verify: Check that $y^{e} \equiv x I^{h(m,x)} \mod n$ (this works because $y^{e} \equiv (rS^{c})^{e} \equiv r^{e}S^{ce} \equiv xI^{c}$) Any references to papers in which this could be found would be very helpful. Thank you! - I think your notation is inconsistent. Also, see the following: scholar.google.com/… – Steve Huntsman Jul 17 '12 at 21:16 @Steve Huntsman: Care to extrapolate upon your comment about the inconsistency of my notation? – Samuel Reid Jul 18 '12 at 7:30
2016-05-01 21:54:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7078774571418762, "perplexity": 1069.9328003472724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116929.30/warc/CC-MAIN-20160428161516-00145-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.ipsecu.com/articles/resolution-in-security/
# Resolution in security This chapter suggests reading in conjunction with Related concepts of video coding. ### Pixel Pixel, the basic unit of image display, is translated from the English “pixel”. pix is the common abbreviation of the English word picture. Add the English word “element” to get pixel, so “pixel” means “image element”. Sometimes called pel (picture element). Each pixel contains a series of numbers that describe its color or intensity. The precision with which a pixel can specify a color is called its bit or color depth. Each pixel can have its own color value and can be displayed in three primary colors, so it is divided into three sub-pixels (RGB color gamut) of red, green and blue, or cyan, magenta, yellow and black (CMYK color gamut, printing industry and printer Common). A photo is a collection of sampling points. Under the premise that the image is not compressed incorrectly/lossy or the camera lens is suitable, the more pixels in a unit area, the higher the resolution, and the displayed image will be close to Real objects. ### Resolution unit Commonly used units to describe resolution are: dpi (dots per inch), lpi (line per inch), and ppi (pixels per inch). But only lpi is the scale that describes the optical resolution. Although dpi and ppi are also units in the resolution category, their meaning is different from lpi. Moreover, lpi and dpi cannot be converted, and can only be estimated based on experience. Since the pixel is only a single unit describing the image information, its size needs to be specified in order to describe the actual display effect of the image. Therefore, the term “Pixels per inch (PPI, Pixels per inch)” was introduced to correlate this theoretical pixel unit with the actual visual resolution. It should be noted that PPI and image resolution are not positively correlated. PPI reflects the pixel density, while the horizontal resolution × vertical resolution reflects the total number of pixels. “Pixels per inch (PPI)” describes how many pixels the image contains per inch of distance (horizontal or vertical). The image resolution is expressed as the number of pixels in each direction. Generally, the image resolution is described as the number of pixels in the horizontal direction of the image × the number of pixels in the vertical direction of the image, such as 1920×1080, which means that there are 1920 pixels in the horizontal direction of the image. Pixels, there are 1080 pixels in the vertical direction, and the total number of pixels is 1920×1080=2073600, rounded up, which is 2 million pixels. $PPI = \frac{\sqrt Horizontal pixels^2+Vertical pixels^2}{screen size}$ ### Image aspect ratio Aspect ratio, which is the ratio of the width of an image divided by the height, is usually expressed as “x:y” or “x×y”, where the colon and multiplication sign means “ratio” in Chinese. Common aspect ratios are: • 1.19:1 • 1.25:1, such as 1280×1024 • 1.33:1, which is 4:3 • 1.37:1 • 1.43:1 • 1.5:1, that is 3:2, for example, the resolution is 1440×960 • 1.56:1, which is 14:9 • 1.6:1, or 16:10 (8:5), is a common ratio of computer wide screens, used for WSXGAPlus, WUXGA • 1.66:1, 5:3, sometimes the exact mark is 1.67 • 1.75:1 • 1.77:1, the so-called 16:9, or 42:32 • 1.85:1 • 2:1, which is 18:9 • 2.2:1 • 2.35:1 • 2.370:1: 21:9, the actual value is 64:27 (43:33) • 2.39:1 • 2.4:1, which is 12:5 • 2.55:1 • 2.59:1 • 2.76:1 • 4:1 The above is the original aspect ratio of each image resolution. Sometimes the image display device, such as the resolution aspect ratio supported by the monitor, is inconsistent with the original aspect ratio of the image and video. In order to adapt to the monitor, the original video is generally stretched and cropped. , Add black borders and other methods to change the original aspect ratio. This will cause the video image we see to be distorted, and there are black borders on the left and right or around the image. #### Corridor mode In the security field, for some special scenes, such as long and narrow passages, corridors, halls, etc., the area that needs to be monitored is more vertical than horizontal in shape. In order to better display the monitoring scene, the 16:9 image will be inverted into 9 :16, this can show more images on the long and narrow width, generally we call this 9:16 aspect ratio mode as the corridor mode. Tags: 0 replies
2022-10-06 06:28:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4830630421638489, "perplexity": 2102.0480920328846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00576.warc.gz"}
https://www.endtoend.ai/fastpapers2/observational-overfitting-in-reinforcement-learning/
# Observational Overfitting in Reinforcement Learning Song et al., 2019 | https://arxiv.org/abs/1912.02975 • Agents can overfit to parts of observation irrelevant to MDP dynamics such as the scoreboard or the background, as they are correlated with progress. • Observational overfitting hurts agent's generalization. • Overparametrization can mitigate observational overfitting and improve generalization.
2020-11-29 23:00:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9272008538246155, "perplexity": 10690.297512892888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141203418.47/warc/CC-MAIN-20201129214615-20201130004615-00186.warc.gz"}
https://goodboychan.github.io/python/machine_learning/natural_language_processing/vision/2020/10/26/01-K-Means-Clustering-for-Imagery-Analysis.html
## Required Packages import sys import sklearn import numpy as np import matplotlib.pyplot as plt %matplotlib inline ## Version Check print('Python: {}'.format(sys.version)) print('Scikit-learn: {}'.format(sklearn.__version__)) print('NumPy: {}'.format(np.__version__)) Python: 3.7.6 (default, Jan 8 2020, 20:23:39) [MSC v.1916 64 bit (AMD64)] Scikit-learn: 0.22.1 NumPy: 1.18.1 For the convenience, we will load the MNIST dataset from tensorflow Keras Library. Or you can download it directly from here. import tensorflow as tf from tensorflow.keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() # Print shape of dataset print("Training: {}".format(X_train.shape)) print("Test: {}".format(X_test.shape)) Training: (60000, 28, 28) Test: (10000, 28, 28) As you can see, the original dataset contains 28x28x1 pixel image. Let's print it out, and what it looks like. fig, axs = plt.subplots(3, 3, figsize = (12, 12)) plt.gray() # loop through subplots and add mnist images for i, ax in enumerate(axs.flat): ax.imshow(X_train[i]) ax.axis('off') ax.set_title('Number {}'.format(y_train[i])) # display the figure plt.show() ## Preprocessing ### Reshape Images stored as NumPy arrays are 2-dimensional arrays. However, the K-means clustering algorithm provided by scikit-learn ingests 1-dimensional arrays; as a result, we will need to reshape each image. (in other words, we need to flatten the data) Clustering algorithms almost always use 1-dimensional data. For example, if you were clustering a set of X, Y coordinates, each point would be passed to the clustering algorithm as a 1-dimensional array with a length of two (example: [2,4] or [-1, 4]). If you were using 3-dimensional data, the array would have a length of 3 (example: [2, 4, 1] or [-1, 4, 5]). MNIST contains images that are 28 by 28 pixels; as a result, they will have a length of 784 once we reshape them into a 1-dimensional array. X_train = X_train.reshape(len(X_train), -1) print(X_train.shape) (60000, 784) ### Normalization Also, One approach to help training is normalization. In order to do this, we need to convert each pixel value into 0 to 1 range. The maximum value of pixel in grayscale is 255, so it can normalize it by dividing 255. Of course, its overall shape is same as before. X_train = X_train.astype(np.float32) / 255. ## Applying K-means Clustering Since the size of the MNIST dataset is quite large, we will use the mini-batch implementation of k-means clustering (MiniBatchKMeans) provided by scikit-learn. This will dramatically reduce the amount of time it takes to fit the algorithm to the data. Here, we just choose the n_clusters argument to the n_digits(the size of unique labels, in our case, 10), and set the default parameters in MiniBatchKMeans. And as you know that, K-means clustering is one of the unsupervised learning. That means it doesn't require any label to train. from sklearn.cluster import MiniBatchKMeans n_digits = len(np.unique(y_train)) print(n_digits) 10 kmeans = MiniBatchKMeans(n_clusters=n_digits) kmeans.fit(X_train) MiniBatchKMeans(batch_size=100, compute_labels=True, init='k-means++', init_size=None, max_iter=100, max_no_improvement=10, n_clusters=10, n_init=3, random_state=None, reassignment_ratio=0.01, tol=0.0, verbose=0) We can find the labels of each input that is generated from K means model. kmeans.labels_ array([7, 8, 3, ..., 7, 9, 7]) But these are not real label of each image, since the output of the kmeans.labels_ is just group id for clustering. For example, 6 in kmeans.labels_ has similar features with another 6 in kmeans.labels_. There is no more meaning from the label. To match it with real label, we can tackle the follow things: • Combine each images in the same group • Check Frequency distribution of actual labels (using np.bincount) • Find the Maximum frequent label (through np.argmax), and set the label. def infer_cluster_labels(kmeans, actual_labels): """ Associates most probable label with each cluster in KMeans model returns: dictionary of clusters assigned to each label """ inferred_labels = {} # Loop through the clusters for i in range(kmeans.n_clusters): # find index of points in cluster labels = [] index = np.where(kmeans.labels_ == i) # append actual labels for each point in cluster labels.append(actual_labels[index]) # determine most common label if len(labels[0]) == 1: counts = np.bincount(labels[0]) else: counts = np.bincount(np.squeeze(labels)) # assign the cluster to a value in the inferred_labels dictionary if np.argmax(counts) in inferred_labels: # append the new number to the existing array at this slot inferred_labels[np.argmax(counts)].append(i) else: # create a new array in this slot inferred_labels[np.argmax(counts)] = [i] return inferred_labels def infer_data_labels(X_labels, cluster_labels): """ Determines label for each array, depending on the cluster it has been assigned to. returns: predicted labels for each array """ # empty array of len(X) predicted_labels = np.zeros(len(X_labels)).astype(np.uint8) for i, cluster in enumerate(X_labels): for key, value in cluster_labels.items(): if cluster in value: predicted_labels[i] = key return predicted_labels cluster_labels = infer_cluster_labels(kmeans, y_train) X_clusters = kmeans.predict(X_train) predicted_labels = infer_data_labels(X_clusters, cluster_labels) print(predicted_labels[:20]) print(y_train[:20]) [8 0 4 1 7 2 1 8 1 7 3 1 3 6 1 7 2 8 6 7] [5 0 4 1 9 2 1 3 1 4 3 5 3 6 1 7 2 8 6 9] As a result, some predicted label is mismatched, but most of case, the k-means model can correctly cluster of each group. ## Evaluating Clustering Algorithm With the functions defined above, we can now determine the accuracy of our algorithms. Since we are using this clustering algorithm for classification, accuracy is ultimately the most important metric; however, there are other metrics out there that can be applied directly to the clusters themselves, regardless of the associated labels. Two of these metrics that we will use are inertia and homogeneity. (See the detailed description of homogeneity_score) Furthermore, earlier we made the assumption that K = 10 was the appropriate number of clusters; however, this might not be the case. Let's fit the K-means clustering algorithm with several different values of K, than evaluate the performance using our metrics. from sklearn.metrics import homogeneity_score def calc_metrics(estimator, data, labels): print('Number of Clusters: {}'.format(estimator.n_clusters)) # Inertia inertia = estimator.inertia_ print("Inertia: {}".format(inertia)) # Homogeneity Score homogeneity = homogeneity_score(labels, estimator.labels_) print("Homogeneity score: {}".format(homogeneity)) return inertia, homogeneity from sklearn.metrics import accuracy_score clusters = [10, 16, 36, 64, 144, 256] iner_list = [] homo_list = [] acc_list = [] for n_clusters in clusters: estimator = MiniBatchKMeans(n_clusters=n_clusters) estimator.fit(X_train) inertia, homo = calc_metrics(estimator, X_train, y_train) iner_list.append(inertia) homo_list.append(homo) # Determine predicted labels cluster_labels = infer_cluster_labels(estimator, y_train) prediction = infer_data_labels(estimator.labels_, cluster_labels) acc = accuracy_score(y_train, prediction) acc_list.append(acc) print('Accuracy: {}\n'.format(acc)) Number of Clusters: 10 Inertia: 2383375.0 Homogeneity score: 0.46576292303121536 Accuracy: 0.56465 Number of Clusters: 16 Inertia: 2208197.5 Homogeneity score: 0.5531322770474518 Accuracy: 0.65095 Number of Clusters: 36 Inertia: 1961340.875 Homogeneity score: 0.6783212163972349 Accuracy: 0.767 Number of Clusters: 64 Inertia: 1822361.625 Homogeneity score: 0.727585914263205 Accuracy: 0.7895166666666666 Number of Clusters: 144 Inertia: 1635514.25 Homogeneity score: 0.8048996371912126 Accuracy: 0.8673833333333333 Number of Clusters: 256 Inertia: 1519708.25 Homogeneity score: 0.8428113183818001 Accuracy: 0.9000333333333334 fig, ax = plt.subplots(1, 2, figsize=(16, 10)) ax[0].plot(clusters, iner_list, label='inertia', marker='o') ax[1].plot(clusters, homo_list, label='homogeneity', marker='o') ax[1].plot(clusters, acc_list, label='accuracy', marker='^') ax[0].legend(loc='best') ax[1].legend(loc='best') ax[0].grid('on') ax[1].grid('on') ax[0].set_title('Inertia of each clusters') ax[1].set_title('Homogeneity and Accuracy of each clusters') plt.show() As a result, we found out that when the K value is increased, the accuracy and homogeneity is also increased. We can also check the performance on test dataset. X_test = X_test.reshape(len(X_test), -1) X_test = X_test.astype(np.float32) / 255. kmeans = MiniBatchKMeans(n_clusters=256) kmeans.fit(X_test) cluster_labels = infer_cluster_labels(kmeans, y_test) test_clusters = kmeans.predict(X_test) prediction = infer_data_labels(kmeans.predict(X_test), cluster_labels) print('Accuracy: {}'.format(accuracy_score(y_test, prediction))) Accuracy: 0.8877 There we have MiniBatchKmeans Clustering model with almost 90% accuracy. One definite way to check the model performance is to visualize the real image. For the convenience, we decrease the n_clusters to 36. kmeans = MiniBatchKMeans(n_clusters = 36) kmeans.fit(X_test) # record centroid values centroids = kmeans.cluster_centers_ # reshape centroids into images images = centroids.reshape(36, 28, 28) images *= 255 images = images.astype(np.uint8) # determine cluster labels cluster_labels = infer_cluster_labels(kmeans, y_test) prediction = infer_data_labels(kmeans.predict(X_test), cluster_labels) # create figure with subplots using matplotlib.pyplot fig, axs = plt.subplots(6, 6, figsize = (20, 20)) plt.gray() # loop through subplots and add centroid images for i, ax in enumerate(axs.flat): # determine inferred label using cluster_labels dictionary for key, value in cluster_labels.items(): if i in value: ax.set_title('Inferred Label: {}'.format(key), color='blue')
2022-06-29 03:04:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39139536023139954, "perplexity": 5724.748362006289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103620968.33/warc/CC-MAIN-20220629024217-20220629054217-00227.warc.gz"}
https://www.vacorse.be/Feb-08/relation-between-thermal-conductivity-to-in-france-20530.html
## Products Home Productsrelation between thermal conductivity to in france # relation between thermal conductivity to in france ### The Difference Between Thermal Conductivity and Thermal Oct 16, 2018· One important aspect when selecting a TIM for your appliion is knowing the material’s ability to transfer heat, which is often given by way of thermal conductivity and/or thermal impedance. Across the industry, manufacturers often publish thermal conductivity in units of Watts / meter-Kelvin as well as thermal impedance in units of °C ### Relationship between thermal conductivity and framework Nov 01, 2014· MOF-5 has poor thermal conductivity (0.31 W/m K , similar to concrete) and this impedes the removal of latent heat of adsorption , . Currently, to meet DoE’s performance targets for H 2 storage, at least a fivefold increase in thermal conductivity is required , . Heat transport in MOF-5 … ### Introduction to thermal and electrical conductivity - DoITPoMS The Wiedemann-Franz law states that the ratio of thermal conductivity to the electrical conductivity of a metal is proportional to its temperature. $LT = \frac{\kappa }{\sigma }$ Where L the proportionality constant (also known as the Lorenz nuer), is: ### Relationship between thermal conductivity and water There is no simple and general relationship between the thermal conductivity of a soil, λ , and its volumetric water content, θ , because the porosity, n , and the thermal conductivity of the solid fraction, λ s, play a major part.Experimental data including measurements of all the variables are scarce. ### Thermal Resistivity and Conductivity - Engineering ToolBox Related Topics . Thermodynamics - Effects of work, heat and energy on systems; Related Documents . Butane - Thermal Conductivity - Online calculators, figures and tables showing thermal conductivity of liquid and gaseous butane, C 4 H 10, at varying temperarure and pressure, SI and Imperial units; Calcium Silie Insulation - Thermal conductivity of calcium silie insulation - temperature ### A Simple Relation Between Thermal Conductivity, Specific A relation of the form k aC = K 1 T + K 2, between thermal conductivity k, atomic heat (aC), and absolute temperature T, is shown to hold for zinc, sodium, lithium, copper, lead, aluminum and mercury.The possibility is indied of an equation of this sort based on the assumption of a double mechanism of heat conduction—an atomic lattice along which energy is transmitted as elastic waves ### Thermal Conductivity - Definition and Detailed Explanation The Wiedemann-Franz law that provides a relation between electrical conductivity and thermal conductivity is only applicable to metals. The heat conductivity of non-metals is relatively unaffected by their electrical conductivities. Influence of magnetic fields ### Thermal Conductivity Formula: Definition, Equations and Thermal conductivity is the ability of a given material to conduct or transfer heat. It is generally denoted by the syol ‘k’ or sometimes lamda. The reciprocal of this physical quantity is referred to as thermal resistivity. Learn thermal conductivity formula here. ### Thermal Conductivity of Ionic Liquids | IntechOpen Sep 05, 2018· Figure 5 shows the relation between alkyl chain length and thermal conductivity at 293 K. The thermal conductivity of n-alkanes was calculated by REFPROP 9.0 . The results indied that the alkyl chain length does not significantly affect the thermal conductivity. ### Relationship between electrical and thermal conductivity Sep 01, 2013· While the thermal properties of ITO at room temperature are determined by the electronic band structure (and, therefore, related to the electrical conductivity via the Wiedemann–Franz law) the thermal properties of graphene-based materials are dominated by lattice vibrations , , which makes the relationship between the thermal and electrical ### Thermal Resistance & Thermal Conductance – C-Therm Oct 31, 2020· The reciprocal of the thermal transmittance (of conduction) is called thermal resistance (of conduction) R: R: = 1 Λ R = Δx λ thermal resistance of conduction. As a component-dependent quantity, the thermal resistance of conduction describes the insulating effect of a component with regard to thermal conduction! ### (PDF) Relationship between porosity, thermal conductivity Optical scanning provides us with a good knowledge of local increase of thermal conductivity due to sealed fracture or quartz-cemented matrix The relationship between porosity and thermal ### Thermal conductance and its relation to thermal time Jul 01, 1981· Thermal conductivity of interior points increased by a factor of three from the low est temperature to the highest temperature. so would be the thermal time constant for a conductor with constant ther mal conductivity at its lowest value and with insulation capable of supporting 1/3 of the surface to core tempera ture difference. tially the ### How Thermal Conductivity Relates to Electrical May 01, 2000· where k is the thermal conductivity in W/mK, T is the absolute temperature in K, is the electrical conductivity in -1 m-1, and L is the Lorenz nuer, equal to 2.45 10-8 W /K 2. Clearly there is a world of difference between the measurement of electrical conductivity and that of thermal conductivity. ### Relationship between pressure and thermal conductivity Feb 22, 2016· $\begingroup$ I appreciate your response, but I am still not seeing a derivation that allows for a direct relationship between thermal conductivity and pressure. You talk about electrical conductivity. Remeer, I would like to use the kinetic theory of gases to derive this relationship. ### Thermal Conductivity and the Wiedemann-Franz Law Thermal Conductivity Heat transfer by conduction involves transfer of energy within a material without any motion of the material as a whole. The rate of heat transfer depends upon the temperature gradient and the thermal conductivity of the material. Thermal conductivity … ### Temperature dependence of the relationship of thermal A general relationship between k(T) and k(0) First, a relationship for the temperature dependence of thermal conductivity is derived. The detailed procedure is described in Vosteen and Schellschmidt (2003). The resulting equations allow the determination of thermal conductivity at temperatures T if only its value at aient conditions (&25 C ### Thermal Conductivity & Coefficient of Expansion - RF Cafe Thermal conductivity is analogous to electrical conductivity. Similarly, thermal resistance is the inverse of thermal conductivity as electrical resistance is the inverse of electrical conductivity. Coefficient of expansion is the rate at which a material will grow in length with an increase in temperature. ### What is Thermal Resistance - Thermal Resistivity - Definition May 22, 2019· Analogy to Electric Resistance. The equation above for heat flow is analogous to the relation for electric current flow I, expressed as:. where R e = L/σ e A is the electric resistance and V 1 – V 2 is the voltage difference across the resistance (σ e is the electrical conductivity). The analogy between both equations is obvious. The rate of heat transfer through a layer corresponds to the ### Relationship between Thermal Conductivity and Diffusivity The relationship between the thermal conductivity and thermal diffusivity of a sandy loam soil with moisture content is presented in Fig 6.1. Thermal diffusivity of soil increased exponentially with the increasing bulk density; heat capacity and the degree of saturation with moisture. At a constant heat flow, the abundance of temperature ### Relationship between thermal conductivity and structure of Relationship between thermal conductivity and structure of nacre from Haliotis fulgens - Volume 26 Issue 10 ### Relation between thermal conductivity and molecular Nov 01, 2005· The relation between the thermal conductivity and the aligned molecular direction of the films was investigated. The homogeneous film showed the largest magnitude of the thermal conductivity at the direction along the molecular long axis (0.69 W/m K). ### Relationship between electrical and thermal conductivity Sep 01, 2013· While the thermal properties of ITO at room temperature are determined by the electronic band structure (and, therefore, related to the electrical conductivity via the Wiedemann–Franz law) the thermal properties of graphene-based materials are dominated by lattice vibrations , , which makes the relationship between the thermal and electrical ### Relationship between the thermal conductivity and The relationship between the thermal conductivity and some mechanical properties of Uludağ fir and black poplar specimens were determined based on related standards. It was hypothesized that thermal conductivity can be used as a predictor for wood properties. The hot plate test method was used as a thermal conductivity testing method. ### Transmittance, resistance and thermal conductivity - Vimark The Thermal resistance or R- value of a wall consisting of multiple layers is the sum of the thermal resistances of each component of the layer. The conductivity or thermal conductivity λ or K is the amount of heat transferred in a direction perpendicular to a surface of unit area, due to a temperature gradient, per unit of time and steady state. ### Thermal Conduction - Heat Conduction | Definition Thermal conduction, also called heat conduction, occurs within a body or between two bodies in contact without the involvement of mass flow and mixing. It is the direct microscopic exchange of kinetic energy of particles through the boundary between two systems. Heat transfer by conduction is dependent upon the driving “force” of temperature difference and the thermal conductivity (or the ### Thermal Conductivity Formula: Definition, Equations and Thermal conductivity is the ability of a given material to conduct or transfer heat. It is generally denoted by the syol ‘k’ or sometimes lamda. The reciprocal of this physical quantity is referred to as thermal resistivity. Learn thermal conductivity formula here. ### Wiedemann–Franz law - Wikipedia In physics, the Wiedemann–Franz law states that the ratio of the electronic contribution of the thermal conductivity ( κ) to the electrical conductivity ( σ) of a metal is proportional to the temperature ( T ). κ σ = L T {\displaystyle {\frac {\kappa } {\sigma }}=LT} Theoretically, the proportionality constant L, known as the Lorenz nuer, is equal to. ### Thermal Conductivity - Definition and Detailed Explanation The Wiedemann-Franz law that provides a relation between electrical conductivity and thermal conductivity is only applicable to metals. The heat conductivity of non-metals is relatively unaffected by their electrical conductivities. Influence of magnetic fields ### Effect of particle size on the thermal conductivity of Mar 22, 2011· Equations 3 and 5 relate the thermal conductivity of metallic nanoparticles to their characteristic size, and is illustrated in Figure 1 for copper nanoparticles. The solid line in Figure 1 was obtained using Equation 3 to calculate the thermal conductivity when Kn > 5, and Eq. 5 when Kn < 1. In the intermediate region (1 < Kn < 5), the thermal conductivity was obtained by interpolation.
2021-06-19 18:15:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9212499260902405, "perplexity": 1208.786688862996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649688.44/warc/CC-MAIN-20210619172612-20210619202612-00036.warc.gz"}
https://scipost.org/submissions/1904.03233v5/
# Number-resolved imaging of $^{88}$Sr atoms in a long working distance optical tweezer ### Submission summary As Contributors: Ryan Hanley · Matthew Hill · Niamh Jackson · Matthew Jones Arxiv Link: https://arxiv.org/abs/1904.03233v5 (pdf) Date accepted: 2020-02-07 Date submitted: 2020-02-03 Submitted by: Hill, Matthew Submitted to: SciPost Physics Discipline: Physics Subject area: Atomic, Molecular and Optical Physics - Experiment Approach: Experimental ### Abstract We demonstrate number-resolved detection of individual strontium atoms in a long working distance low numerical aperture (NA = 0.26) tweezer. Using a camera based on single-photon counting technology, we determine the presence of an atom in the tweezer with a fidelity of 0.989(6) (and loss of 0.13(5)) within a 200 $\mu$s imaging time. Adding continuous narrow-line Sisyphus cooling yields similar fidelity, at the expense of much longer imaging times (30 ms). Under these conditions we determine whether the tweezer contains zero, one or two atoms, with a fidelity $>$0.8 in all cases with the high readout speed of the camera enabling real-time monitoring of the number of trapped atoms. Lastly we show that the fidelity can be further improved by using a pulsed cooling/imaging scheme that reduces the effect of camera dark noise. Published as SciPost Phys. 8, 038 (2020) ### List of changes Value of loss added to abstract ### Submission & Refereeing History Resubmission 1904.03233v5 on 3 February 2020 Resubmission 1904.03233v4 on 14 January 2020 Resubmission 1904.03233v3 on 2 October 2019 Submission 1904.03233v2 on 24 April 2019
2020-07-12 18:42:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3656090199947357, "perplexity": 10136.407333464389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657139167.74/warc/CC-MAIN-20200712175843-20200712205843-00027.warc.gz"}
https://www.gamedev.net/forums/topic/117061-id3dxsprite-problem/
#### Archived This topic is now archived and is closed to further replies. # ID3DXSprite Problem!! This topic is 5768 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. ## Recommended Posts Hey all, Ive recently started writing a simple libary for the use in my games. I decided to use ID3DXSprite because it simplifies things a little. But im now having problems with the second parameter, inwhich you specify a source rectangle to blit from. If i specify the size of my texture, the part which is drawn is always smaller than what i specify. I cant work out why... Its not the parameters ive got wrong because ive checked the values by using the debugger. Ive also noted that im using a 32bit texture, and a 32bit screen display. So the loss of texture information isnt lost there.. Im stuck for any other ideas as to why this may be happening. Any help would be greatly appreciated. Thanks, Heres a snippet of the code I use. void CSprite::Draw() { RECT srcRect; srcRect.bottom = m_CellHeight * this->m_CurFrameY + m_CellHeight; srcRect.top = m_CellHeight * this->m_CurFrameY; srcRect.right = m_CellWidth * this->m_CurFrameX + m_CellWidth; srcRect.left = m_CellWidth * this->m_CurFrameX; D3DXVECTOR2 translationVector; translationVector.x = m_X; translationVector.y = m_Y; D3DXVECTOR2 centerVector; centerVector.x = m_X+m_CellWidth/2; centerVector.y = m_Y+m_CellHeight/2; m_pSprite->Draw(m_pSpriteTexture->m_Texture, &srcRect, NULL,¢erVector, (0.0174 * m_Angle), &translationVector,D3DCOLOR_RGBA(255,255,255,this->m_Alpha)); } ##### Share on other sites Perhaps a look at the article "Dissecting Sprites in Direct3D" will help (to be found in the DirectGraphics section right here ;-)). 1. 1 2. 2 Rutin 21 3. 3 JoeJ 18 4. 4 5. 5 • 14 • 39 • 23 • 13 • 13 • ### Forum Statistics • Total Topics 631717 • Total Posts 3001878 ×
2018-07-17 08:35:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20581378042697906, "perplexity": 5912.513047967823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589618.52/warc/CC-MAIN-20180717070721-20180717090721-00396.warc.gz"}
http://icrhp9.icrr.u-tokyo.ac.jp/abstract/Rikkyo/Takashi_Sako.html
# Detection of VHE Gamma Rays from PSR1509-58 ## Takashi Sako ### Solar Terrestrial Environment Laboratory, Nagoya University Very high energy (VHE) gamma-ray emission from the gamma-ray pulsar PSR1509-58 is detected by using the CANGAROO 3.8m telescope. Observations have been performed in 1996 and 1997 with the threshold energy of 3.0TeV and 1.5TeV, respectively. The gamma-ray signal is obtained in the lower threshold observation with the integral flux of $(4.3\pm0.7)\times10^{-12} cm^{-2}s^{-1}$, although only the upper limit(3$\sigma$) of $1.2\times10^{-12} cm^{-2}s^{-1}$ is obtained in the '96 observation. The test for the periodicity with the pulsar rotation period was carried out and only the upper limits are obtained. While this pulsar is the second youngest pulsar, the previously estimated magnetic field strength of the pulsar nebula is much smaller than that of the Crab. A fast expansion of the remnant resulted from a powerful initial supernova explosion or a low density of the ambient interstellar matter is proposed to explain the large size of the nebula and the weak magnetic field. Our VHE results support this weak magnetic field, which leads the primary electrons to survive from a severe synchrotron energy loss. This fourth evidence of VHE gamma-ray emission from the pulsar nebula will confirm that the young spin powered pulsars are most possible acceleration site of electrons in our Galaxy.
2017-06-27 03:30:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6267834901809692, "perplexity": 1471.1083546590257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320915.38/warc/CC-MAIN-20170627032130-20170627052130-00311.warc.gz"}
http://www.haskell.org/pipermail/haskell-cafe/2010-April/075568.html
# [Haskell-cafe] OT: the format checker for ICFP 2010 papers, help! Iustin Pop iusty at k1024.org Thu Apr 1 12:52:30 EDT 2010 On Thu, Apr 01, 2010 at 05:25:44PM +0100, Thomas Schilling wrote: > Do you perhaps have some text that run into the margins? If I have > references of the form "Longname~\emph{et~al.}~\cite{foobar}" Latex > does not know how to split this up the text extends into the margins. > A similar problem might occur for verbatim sections. I submitted a > paper based on the standard stylesheet earlier today and did not > encounter any problems. No, it was the wrong template, as it turned out. I did check for the "Overfull hbox" message from latex and had none. Interesting that your paper didn't trigger the error though… thanks, iustin
2014-03-12 08:32:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8505843281745911, "perplexity": 8594.966498111971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021542591/warc/CC-MAIN-20140305121222-00051-ip-10-183-142-35.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/451938/how-to-add-a-label-to-a-vector-and-an-angle
# How to add a label to a vector and an angle? I'm trying to draw the unit circle with two vectors of the same size and an angle between them. So far I've been able to draw the circle and the vectors, however I don't know how to add the angle and the label to the coordinates. I have this code: \begin{tikzpicture}[scale=5] % draw the coordinates \draw[->] (-0.2cm,0cm) -- (1.2cm,0cm) node[right,fill=white] {$x$}; \draw[->] (0cm,-0.2cm) -- (0cm,0.88cm) node[above,fill=white] {$y$}; % draw arc \draw [black,loosely dashed,domain=13:47] plot ({cos(\x)}, {sin(\x)}); % draw vectors \draw[black,-latex] (0cm,0cm) -- (20:1cm); \draw[black,-latex] (0cm,0cm) -- (40:1cm); \end{tikzpicture} That generates this image: How can I add a label to the coordinate at the end of each vector? And how can I add an angle inside the vectors, with an arrow pointing up, similar to this image below. Thank you so much! Sorry, I was busy when writing the first version of the answer very quickly, and thus got i and r confused. Angles can be drawn with the angles library, and quotes are needed to annotate them. The coordinate nodes can be achieved in the same way you label the axes x and y. \documentclass[tikz,border=3.14mm]{standalone} \usetikzlibrary{angles,quotes} \begin{document} \begin{tikzpicture}[scale=5] % draw the coordinates \draw[->] (-0.2cm,0cm) -- (1.2cm,0cm) node[right,fill=white] {$x$}; \draw[->] (0cm,-0.2cm) -- (0cm,0.88cm) node[above,fill=white] {$y$}; % draw arc \draw [black,loosely dashed,domain=13:47] plot ({cos(\x)}, {sin(\x)}); % draw vectors \draw[black,-latex] (0cm,0cm) coordinate(O) -- (20:1cm) coordinate (r) node[pos=1.02,anchor=west]{$(x_r,y_r)$}; \draw[black,-latex] (0cm,0cm) -- (40:1cm) coordinate (i) node[pos=1.02,anchor=west]{$(x_i,y_i)$}; \draw pic ["$\theta$",angle eccentricity=1.33,draw,-latex,angle radius=1cm,fill=blue!50] {angle = r--O--i}; \end{tikzpicture} \end{document}
2020-10-23 07:58:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7803547382354736, "perplexity": 3313.1789376735314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880878.30/warc/CC-MAIN-20201023073305-20201023103305-00660.warc.gz"}
http://eprint.iacr.org/2006/277/20060817:085936
## Cryptology ePrint Archive: Report 2006/277 On Expected Probabilistic Polynomial-Time Adversaries -- A suggestion for restricted definitions and their benefits Oded Goldreich Abstract: This paper concerns the possibility of developing a coherent theory of security when feasibility is associated with expected probabilistic polynomial-time (expected PPT). The source of difficulty is that the known definitions of expected PPT strategies (i.e., expected PPT interactive machines) do not support natural results of the type presented below. To overcome this difficulty, we suggest new definitions of expected PPT strategies, which are more restrictive than the known definitions (but nevertheless extend the notion of expected PPT non-interactive algorithms). We advocate the conceptual adequacy of these definitions, and point out their technical advantages. Specifically, identifying a natural subclass of black-box simulators, called normal, we prove the following two results: (1) Security proofs that refer to all strict PPT adversaries (and are proven via normal black-box simulators), extend to provide security with respect to all adversaries that satisfy the restricted definitions of expected PPT. (2) Security composition theorems of the type known for strict PPT hold for these restricted definitions of expected PPT, where security means simulation by normal black-box simulators. Specifically, a normal black-box simulator is required to make an expected polynomial number of steps, when given oracle access to any strategy, where each oracle call is counted as a single step. This natural property is satisfies by most known simulators and is easy to verify. Category / Keywords: foundations / Zero-Knowledge, secure multi-party computation, protocol composition, black-box simulation, reset attacks, Publication Info: Will be posted also on ECCC Date: received 17 Aug 2006 Contact author: oded goldreich at weizmann ac il Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | BibTeX Citation Short URL: ia.cr/2006/277 [ Cryptology ePrint archive ]
2016-07-25 18:26:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8988006711006165, "perplexity": 4285.01728015318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824337.54/warc/CC-MAIN-20160723071024-00294-ip-10-185-27-174.ec2.internal.warc.gz"}
https://meridian.allenpress.com/cia/article/9/2/P7/171541/Insights-into-Large-Audit-Firm-Sampling-Policies
## SUMMARY Changes in the audit profession after Sarbanes-Oxley, including mandatory audits of internal control over financial reporting and PCAOB oversight and inspection of audit work, have potentially changed the nature and extent of audit sampling in the largest accounting firms. In our study, “Behind the Numbers: Insights into Large Audit Firm Sampling Policies” (Christensen, Elder, and Glover 2015), we administered an extensive, open-ended survey to the national offices of the Big 4 and two other international accounting firms regarding their firm's audit sampling policies. We find variation among the largest firms' policies in their use of different sampling methods and in inputs used in the sampling applications that could result in different sample sizes. We also provide evidence of some of the sampling topics firms find most problematic, as well as changes to firms' policies regarding revenue testing due to PCAOB inspections. Our evidence provides important insights into current sampling policies, which may be helpful to audit firms in evaluating their sampling inputs and overall sampling approaches. ## INTRODUCTION Over the last two decades there have been significant changes in audit approaches, including federally mandated audits of internal control over financial reporting for large public companies as a result of the Sarbanes-Oxley Act of 2002 (SOX). These changes have the potential to change the nature and extent of audit sampling techniques. Our recently published study, “Behind the Numbers: Insights into Large Audit Firm Sampling Policies” (Christensen, Elder, and Glover 2015), seeks to provide insights into the current state of audit sampling. To do so, we asked open-ended questions to the national sampling experts at the Big 4 and two other international accounting firms regarding sampling policies and practices currently in place at each firm. In this summary, we focus on important differences between the firms. For a more detailed discussion, see Christensen et al. (2015). Our analysis of the firms' sampling approaches highlights important similarities and differences among the firms' policies. For tests of controls and details, the firms are divided among use of statistical and nonstatistical sampling. This variation in approaches among firms is different than earlier time periods when almost all firms either followed statistical approaches (Akresh 1980) or nonstatistical approaches (Sullivan 1992). We also report differences in the sampling inputs used by firms, thus resulting in different sample sizes, regardless of whether the firm follows a statistical or nonstatistical sampling approach.1 Depending on the level of assurance obtained from other audit procedures, differences in sample sizes raise the possibility that different levels of assurance are obtained to support audit opinions. Interestingly, most firms use identical sampling approaches and parameters for public and private clients despite the differences in business and engagement risk. We also report differences in error projection methods used and how firms respond to identified errors and misstatements. Finally, we show that some firms now rely more heavily on substantive testing using sampling when testing revenue (i.e., testing a sample of individual revenue transactions) than other substantive testing, such as analytical procedures. Our study provides evidence on current sampling practices and identifies important differences in sampling policies among the largest audit firms. These findings provide insights into sampling policies and procedures that are important to better understand the application of audit sampling in the current audit environment. This evidence may also be helpful to audit firms in evaluating their sampling inputs and overall sampling approaches. ## TESTS OF CONTROLS ### Sampling in Tests of Controls: Application and Parameters While sampling is not required to test many types of controls, firms replied that sampling is frequently used for tests that involve inspection or re-performance of manual controls, but is less frequently used to test controls that operate at the entity level or those that are automated. When deciding to use sampling in tests of controls, auditors choose between statistical or nonstatistical sampling approaches. According to auditing standards, auditors selecting a nonstatistical approach should arrive at a sample size that is “comparable to the sample size resulting from an efficient and effectively designed statistical sample, considering the same sampling parameters” (AICPA 2011, §530.A14; PCAOB 2003, §350.23). While either method is acceptable under auditing standards, statistical sampling requires a statistically acceptable selection method (i.e., random selection, but not haphazard selection) and allows the auditor to quantify sampling risk in evaluating the results of testing. Our study reports an equal division among the six participating firms' approaches in this regard. Based on survey responses, firm guidelines appear to either explicitly require the use of statistical methods or, when nonstatistical methods are permitted, include guidance based on statistical theory that results in these methods arriving at a sample size and conclusion similar to what would have been reached using a statistical method.2 Our survey did not address why a firm chose to use a statistical or nonstatistical approach. Once the firm decides on the general approach (e.g., statistical versus nonstatistical), the sample size is calculated based on a set of inputs: desired confidence level, expected deviation rate, and tolerable deviation rate. Table 1, which is reproduced from our original study (Christensen et al. 2015), reports the typical values used by each firm for these key inputs, as reported by the respondents. TABLE 1 Inputs for Application of Sampling in Tests of Controls The range of 90–95 percent confidence is consistent with audit firms providing a high level of assurance (Christensen, Glover, and Wood 2012; AICPA 2012, §3.42), which AS 5 (PCAOB 2007) requires for integrated audits. Levels of confidence below 90 percent, such as reported by Respondent 1, could be used for non-integrated audits. Responses consistently indicated that engagement teams typically plan for zero deviations when calculating sample size for control tests. Regarding tolerable deviation rates, two respondents indicated 10 percent as a standard tolerable deviation rate, whereas the remaining respondents provided ranges, including 6 to 9.5 percent, 6 to 10 percent, and 5 to 10 percent. Based on the inputs reflected in Table 1, the range of the sample size is from 22 (0 expected deviations, 10 percent tolerable deviation rate, 90 percent confidence) to 59 (0 expected deviations, 5 percent tolerable deviation rate, 95 percent confidence).3 While comparisons of sample sizes between firms is incomplete without the fuller context of the other audit procedures performed, differences in sample-size inputs reported by the firms could result in substantially different sample sizes. ### Sample Selection Process After determining sample size, the engagement team selects the items from the population to test. A variety of sample selection methods exist including random, haphazard, stratified, and systematic selection. Three respondents stated that random or systematic selection methods are preferred and encouraged, but haphazard selection is allowed. Of the five firms that permit haphazard selection, only one noted that such samples are penalized with larger sample sizes. It is important to note that haphazard selection is permitted by auditing standards (AICPA 2011, §530.A17; PCAOB 2003, §350.24) and the Audit Guide, Audit Sampling (AICPA 2012). However, with programs like Microsoft Excel, selecting a random sample is straightforward and there is some evidence that auditors may struggle to select unbiased samples using non-random methods (e.g., Hall, Higson, Pierce, Price, and Skousen 2012). ### Evaluation of Results and Resolution of Deviations When sample results indicate control deviations, engagement teams are faced with three options: (1) expand testing of the control, (2) test compensating or redundant controls, or (3) conclude that the control is ineffective, evaluate the severity of the control failure, and revise the nature, timing, and/or extent of planned substantive testing accordingly. Two respondents indicated that if it is deemed effective to expand testing of the control, the sample size can be doubled. If no additional deviations are found in this larger sample, the auditor can conclude that the control is operating effectively. However, another respondent indicated that it is more common to modify planned substantive tests and noted that “we typically do not expand our sample because it is likely that we will continue to discover deviations in the expanded sample.” When the control in question has failed, several respondents noted the importance of identifying compensating controls. As one respondent noted very clearly, “[I]f these controls cannot be found or are found to not be effective, substantive testing will be expanded.” These responses suggest different firm preferences as to how to respond to deviations identified in the course of controls testing. ## SUBSTANTIVE TESTS OF DETAILS ### Sampling in Substantive Testing: Application and Parameters While AS 5 has dramatically altered auditors' use of sampling for test of controls, other changes, such as PCAOB inspections, also have the potential to impact the application of sampling in substantive testing. Our study reports that sampling is commonly used when testing accounts that cannot be efficiently tested using specific identification testing, such as accounts receivable confirmations, inventory price testing, loan and deposit confirmations, and inventory test counts. Regarding the choice between statistical and nonstatistical sampling, four of the six firms emphasized the use of statistical sampling methods, with monetary unit sampling (MUS) being the dominant method used in practice. As summarized in Table 2, which is taken from our original study, most respondents focused on three key inputs to determine sample size: required confidence level, tolerable misstatement, and expected misstatement.4 The required confidence levels varied both within and between firms, although the high end of the confidence range is consistently at or near 95 percent. The desired level of assurance from sampling is affected by the assessed account risk as well as the assurance provided by other tests. For example, Respondent 1 indicated that a confidence level of 30 percent would be deemed appropriate “when analytical procedures are effective and inherent and control risk are assessed as being low,” but 95 percent is appropriate when “the assertion subject to testing includes significant risks, control risk is high, and analytical procedures are ineffective.” TABLE 2 Inputs for Application of Sampling in Substantive Tests of Details As indicated in Table 2, the firms differed in the extent to which misstatements were planned for in tests of detail sampling, which can substantially impact the calculated sample size. Finally, all respondents indicated that tolerable misstatement is set equal to or less than performance materiality. As with tests of controls, statistical and nonstatistical approaches are designed to yield similar sample sizes. However, differences in planning inputs such as those reported in Table 2 can result in significant differences in samples sizes, regardless of the sampling approach followed.5 ### Sample Selection Process Sample items for tests of details can be selected by one of several methods including specific identification, stratification, random selection, haphazard selection, or systematic selection. Unique to tests of details, all respondents indicated that firm guidance either explicitly requires or encourages that all items greater than tolerable misstatement are selected for specific identification testing. This approach is consistent with guidance in the 2012 Audit Guide, Audit Sampling because these items can present high risk and are therefore tested separately from the items selected by applying sampling (AICPA 2012, paras. 4.11 and 4.18). Regarding the selection of items that are not separately tested, three respondents indicated that systematic or random selection is used when the sample size is calculated using statistical methods, and haphazard selection (with some penalty) is used when nonstatistical methods are used.6 On the other hand, three other respondents indicated that various methods are allowed, but that no penalties are levied for the use of haphazard selection. Therefore, while haphazard selection is used across all participating firms, some firms impose a larger sample size for haphazard selection of nonstatistical samples and other firms do not. ### Evaluation of Results and Resolution of Misstatements We asked respondents whether firm policy requires a projection of identified misstatements to the population and, if so, what projection method is typically used. All respondents indicated that projection of errors is generally required by firm policy. The two methods most commonly referenced were ratio projection (applies the misstatement ratio observed in the sample to the entire population) and difference projection (projects the average misstatement of each item in the sample to all items in the population). One respondent indicated that both methods are used for each misstatement, and the larger of the two projected amounts is used. Another respondent indicated that the ratio method is preferred per firm guidance, but difference projection may be used if the misstatements relate more to the occurrence of a transaction and not the volume or dollar value. While firm policy generally requires error projection, we also asked respondents how frequently they believe that misstatements are treated as anomalies and thus are not projected to the full population. One respondent indicated that firm policy explicitly prohibits this treatment, whereas another stated that isolation of errors occurs less than half of the time sampling is applied in substantive testing and that when it does occur, no consultation outside the engagement team is necessary. A third respondent identified a policy somewhere in between the first two. Taken together, responses indicate a fairly wide range of policies regarding error projection and isolation of misstatements. Further discussion with respondents indicated that, consistent with prior research (e.g., Burgstahler and Jiambalvo 1986; Elder and Allen 1998) and PCAOB inspection reports (PCAOB 2008), engagement teams have difficulty understanding how to treat misstatements identified during testing when sampling is used. For example, one respondent said, “[T]eams sometimes fail to project an error because the sample error is relatively small, and they fail to recognize that a projected error coupled with sampling risk might result in a material misstatement.” Similarly, another respondent stated that “most auditors cannot manually recalculate the projection and do not understand which errors cause the large projection of an error.” Respondents' comments suggest that additional training in the logic underlying sampling and/or sampling templates (see Durney, Elder, and Glover 2014) may help improve auditors' ability to correctly project errors. ### PCAOB versus AICPA Guidance We asked respondents whether their firm has different sampling policies for audits performed under PCAOB auditing standards and those performed under AICPA auditing standards.7 Whereas two of the six respondents stated that different control testing policies exist for integrated and non-integrated audits, none of the firms indicated differences in the overall sampling approaches when performing tests of details. This similarity in sampling approaches across different entities subject to very different regulatory regimes is somewhat surprising given the fact that higher assurance levels may be required for public companies as auditors seek to reduce litigation and regulatory risk through additional audit effort (Badertscher, Jorgensen, Katz, and Kinney 2014; DeFond and Zhang 2014). ### Revenue Testing In recent years, the PCAOB has increasingly focused on revenue testing in the inspection and standard-setting process (Hanson 2013; Rand 2012). We asked respondents about their use of audit sampling in testing revenue and if the sampling policy for revenue is the same as for other accounts. One respondent stated that while substantive analytical procedures are permitted when testing revenue, auditors on PCAOB engagements are “required to also perform tests of details and the minimum sample size is 25.” Another firm “now strongly encourages test of details of the revenue account.” Two respondents stated that the use of sampling when testing revenue accounts is not uncommon, but that their firms do not have specific sampling policies for revenue. Finally, one firm's expert said, “[W]e do not typically use sampling to provide substantive evidence for income statement related accounts.” While in the past many firms may have relied in part on substantive analytical procedures to obtain assurance over revenue, based on these responses it appears that most participating firms now also use sampling in the testing of revenue (see Glover, Prawitt, and Drake [2015] for a recent commentary on the regulatory impact on auditing revenue). ## CONCLUSION AND LIMITATIONS The concept of assurance obtained by examining items on a test basis referenced in the standard PCAOB audit report speaks to the importance of sampling during the performance of an audit of financial statements and internal control over financial reporting. Given regulatory changes brought about through the Sarbanes-Oxley Act of 2002 and the creation of the PCAOB, our study asked open-ended questions regarding firm-specific sampling policies and practices to the leading sampling expert from each of the Big 4 and two other large international firms. While we do not provide a detailed discussion of all results in this summary, Table 3 provides a comprehensive review of similarities and differences among the firms' approaches, along with their implications for practice. TABLE 3 Summary of Findings and Implications We find that sampling methods differ significantly among the largest auditing firms; while some emphasize statistical methods, others use nonstatistical methods. Somewhat surprisingly, we find that each firm generally applies the chosen sampling method and sampling parameters for audits of both its private and public clients. Further, firms frequently use different inputs to these sampling models, thus potentially resulting in relatively different sample sizes. This variation in sampling approaches and inputs appears to be different than in previous time periods, and the variety of approaches used is interesting given the highly regulated auditing environment and PCAOB criticism of sampling in areas such as revenue (PCAOB 2014). Nonstatistical methods are allowed under AICPA and PCAOB auditing standards. Although firms that use nonstatistical sampling were clear that their methodology was designed to result in sample sizes and sample evaluations that are similar to those determined using statistical sampling, additional guidance may be needed to ensure that conclusions reached using nonstatistical methods are similar to those reached using statistical methods. Due to the identified differences in sample size inputs, firms should also evaluate whether sample sizes are sufficient to achieve the level of assurance desired by the test. Finally, firms also often select samples haphazardly, and auditors may need additional guidance to increase the likelihood that representative samples are selected. Additionally, we find differences among firms regarding the response to identified errors and misstatements. Sampling experts inform us that responding to and resolving identified misstatements is one of the biggest hurdles that audit engagement teams from all firms face when using sampling techniques, and auditors have also struggled to effectively resolve errors in the past (PCAOB 2008). Additional training and use of templates may assist auditors in projecting errors and evaluating sampling risk. In particular, firms appear to differ in the extent to which they allow identified errors to be treated as anomalies. While ISA 530 (IFAC 2009) notes that some misstatements may be anomalies, AU-C 530 paragraph 0.13 indicates that “the auditor should project the results of audit sampling to the population” (AICPA 2011). The AICPA Audit Guide, Audit Sampling (AICPA 2012, 4.101–4.104) provides guidance on when it may be appropriate to not project an error and the documentation necessary to support this decision. We recommend that guidance on the treatment and documentation of anomalies be specifically addressed in AICPA and PCAOB auditing standards. Finally, we present evidence that some firms have significantly changed their approach to revenue testing due to PCAOB inspections, relying more heavily on testing individual transactions selected by sampling than other substantive testing, such as analytical procedures. Given the limited evidence on firms' sampling policies after the Sarbanes-Oxley Act, our study provides insights into sampling policies and procedures that are important for practitioners, researchers, educators, and regulators to better understand the application of audit sampling in the current audit environment. ## REFERENCES REFERENCES Akresh , A . 1980 . Statistical sampling in public accounting . The CPA Journal 50 ( 7 ): 20 26 . American Institute of Certified Public Accountants (AICPA) . 2011 . Audit Sampling . AU-C 530 . New York, NY : AICPA . American Institute of Certified Public Accountants (AICPA) . 2012 . Audit Sampling. Audit Guide . New York, NY : AICPA . , B ., B . Jorgensen , S . Katz , and W . Kinney . 2014 . Public equity and audit pricing in the United States . Journal of Accounting Research 52 ( 2 ): 303 339 .10.1111/1475-679X.12041 Burgstahler , D ., and J . Jiambalvo . 1986 . Sample error characteristics and projection of error to audit populations . The Accounting Review 61 ( 2 ): 233 248 . Christensen , B ., S . Glover , and D . Wood . 2012 . Extreme estimation uncertainty in fair value estimates: Implications for audit assurance . Auditing: A Journal of Practice & Theory 31 ( 1 ): 127 146 .10.2308/ajpt-10191 Christensen , B ., R . Elder , and S . Glover . 2015 . Behind the numbers: Insights into large audit firm sampling policies . Accounting Horizons 29 ( 1 ): 61 82 .10.2308/acch-50921 DeFond , M. L ., and J . Zhang . 2014 . A review of archival auditing research . Journal of Accounting & Economics 58 ( 2/3 ): 275 326 .10.1016/j.jacceco.2014.09.002 Durney , M ., R . Elder , and S . Glover . 2014 . Error rates, error projection, and consideration of sampling risk: Audit sampling data from the field . Auditing: A Journal of Practice & Theory 33 ( 2 ): 79 110 .10.2308/ajpt-50669 Elder , R ., and R . Allen . 1998 . An empirical investigation of the auditor's decision to project errors . Auditing: A Journal of Practice & Theory 17 ( 2 ): 71 87 . Glover , S. M ., D. F . Prawitt , and M. S . Drake . 2015 . Between a rock and a hard place: Is there a continued role for substantive analytical procedures in auditing large P&L accounts? Auditing: A Journal of Practice & Theory 34 ( 3 ).10.2308/ajpt-50978 Hall , T. W ., A. W . Higson , B. J . Pierce , K. H . Price , and C . Skousen . 2012 . Haphazard sampling: Selection biases induced by control listing properties and the estimation consequences of these biases . Behavioral Research in Accounting 24 ( 2 ): 101 132 .10.2308/bria-50132 Hanson , J . 2013 . Remarks Given at the Brigham Young University Accountancy Alumni Conference, October 25, 2013 . International Federation of Accountants (IFAC) . 2009 . Audit Sampling . ISA 530 . New York, NY : IFAC . Public Company Accounting Oversight Board (PCAOB) . 2003 . Audit Sampling . AU Section 350 . Washington, DC : PCAOB . Public Company Accounting Oversight Board (PCAOB) . 2007 . An Audit of Internal Control over Financial Reporting That Is Integrated with an Audit of Financial Statements. Auditing Standard No. 5 . Washington, DC : PCAOB . Public Company Accounting Oversight Board (PCAOB) . 2008 . Report on the PCAOB's 2004, 2005, 2006, and 2007 Inspections of Domestic Annually Inspected Firms . Release No. 2008-008 . Washington, DC : PCAOB . Public Company Accounting Oversight Board (PCAOB) . 2014 . Matters Related to Auditing Revenue in an Audit of Financial Statements . Staff Audit Practice Alert No. 12 . Washington, DC : PCAOB . Rand , J. A . 2012 . What Is Happening at the PCAOB? Sullivan , J . 1992 . . Proceedings of the 1992 Deloitte & Touche/University of Kansas Symposium on Auditing Problems, Auditing Symposium XI . 49 59 . Lawrence , KS . 1 Audit sampling is “[T]he selection and evaluation of less than 100 percent of the population of audit relevance such that the auditor expects the items selected (the sample) to be representative of the population and, thus, likely to provide a reasonable basis for conclusions about the population. In this context, representative means that evaluation of the sample will result in conclusions that, subject to the limitations of sampling risk, are similar to those that would be drawn if the same procedures were applied to the entire population” (AICPA 2011, §530.05; emphasis in the original). A full sampling application includes the following three stages: (1) the determination of sample size, (2) sample item selection, and (3) evaluation of results. A sampling approach is deemed nonstatistical if any one of the three stages is not consistent with statistical theory. For example, haphazard selection or judgmental evaluation of results would render a sampling application as nonstatistical. 2 Regardless of whether statistical or nonstatistical sampling is used, if the determined attribute sample size is appropriate given the statistical sample size planning parameters and the selection technique is statistically based (e.g., random selection), the results of a sample will be acceptable (i.e., provide the desired level of confidence and precision for sampling risk) whenever the observed sample deviation rate is less than the expected deviation rate used in planning the sample. Similarly, a larger than expected sample deviation rate indicates the sample results did not achieve the desired objective. This relationship of observed error to expected error does not always hold when testing monetary values. 3 Sample sizes are calculated using the Audit Guide, Audit Sampling (AICPA 2012, Tables A-1 and A-2). 4 Other factors were also mentioned, including extent of evidence from other procedures, risk of material misstatement, and audit posting threshold. 5 Respondents indicated that typical sample sizes ranged from 1 to 200 items, with most falling between 10 and 100 items. One respondent indicated a predetermined maximum limit, and only then in “limited low risk circumstances in testing revenue.” Most respondents indicated their firm has established nonstatistical minimum sample sizes (e.g., a minimum of 5 or 10) to be used for small populations. 6 In regard to penalties for nonstatistical methods, the Audit Guide, Audit Sampling (AICPA 2012) suggests that when penalties are imposed, they should be between 10 and 50 percent of the computed sample size, depending on error frequency. 7 PCAOB and AICPA auditing standards are similar in their requirements. However, audits performed under PCAOB auditing standards are subject to PCAOB inspections, while audits performed under AICPA auditing standards are subject to AICPA peer review requirements. PCAOB audits include integrated audits of the financial statements and internal control over financial reporting for accelerated filers, and financial statement audits for other issuers. Audits performed under AICPA auditing standards are mostly financial statement audits, although audits of financial institutions with assets above $1 billion ($500 million before 2005) also include an audit of internal control under the FDIC Improvement Act of 1991. Audits of governmental entities and nonprofits whose federal grant expenditures exceed reporting thresholds (currently \$750,000) are also required to have a single audit that includes testing of internal controls and federal grant compliance, in addition to the audit of the financial statements. ## Author notes We thank the sampling experts from the six participating audit firms for their time and participation in this study. Brant E. Christensen acknowledges funding from the Deloitte Foundation and Steven M. Glover acknowledges funding from the K. Fred Skousen Endowed Professorship.
2020-07-11 05:07:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31825020909309387, "perplexity": 3951.7597547154965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655921988.66/warc/CC-MAIN-20200711032932-20200711062932-00116.warc.gz"}
http://mathhelpforum.com/discrete-math/53754-proofs-set-theory.html
# Math Help - proofs in set theory 1. ## proofs in set theory Let A,B,C,X,Y be subsets of E,and A' MEAN the compliment of A in E i.e A'=E-A,AND A^B = A $\cap$B Then prove the following: a) (A^B^X)U(A^B^C^X^Y)U(A^X^A')= A^B^X b) (A^B^C)U(A'^B^C)UB' U C' = E Thanks 2. Hint: use the axiom of extensionality, i.e., $A = B$ iff $\forall x: x \in A \Leftrightarrow x \in B$ for sets A and B. Then the set formulae with union, intersection and complement reduce to logical formulae with or, and and not.
2014-09-22 16:04:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9758249521255493, "perplexity": 3574.4332261259055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137056.60/warc/CC-MAIN-20140914011217-00338-ip-10-234-18-248.ec2.internal.warc.gz"}
https://stacks.math.columbia.edu/tag/0EF2
Lemma 51.5.4. Let $A$ be a Noetherian ring. Let $T \subset \mathop{\mathrm{Spec}}(A)$ be a subset stable under specialization. The functor $D^+(\text{Mod}_{A, T}) \to D^+_ T(A)$ is an equivalence. Proof. Let $M$ be an object of $\text{Mod}_{A, T}$. Choose an embedding $M \to J$ into an injective $A$-module. By Dualizing Complexes, Proposition 47.5.9 the module $J$ is a direct sum of injective hulls of residue fields. Let $E$ be an injective hull of the residue field of $\mathfrak p$. Since $E$ is $\mathfrak p$-power torsion we see that $H^0_ T(E) = 0$ if $\mathfrak p \not\in T$ and $H^0_ T(E) = E$ if $\mathfrak p \in T$. Thus $H^0_ T(J)$ is injective as a direct sum of injective hulls (by the proposition) and we have an embedding $M \to H^0_ T(J)$. Thus every object $M$ of $\text{Mod}_{A, T}$ has an injective resolution $M \to J^\bullet$ with $J^ n$ also in $\text{Mod}_{A, T}$. It follows that $RH^0_ T(M) = M$. Next, suppose that $K \in D_ T^+(A)$. Then the spectral sequence $R^ qH^0_ T(H^ p(K)) \Rightarrow R^{p + q}H^0_ T(K)$ (Derived Categories, Lemma 13.21.3) converges and above we have seen that only the terms with $q = 0$ are nonzero. Thus we see that $RH^0_ T(K) \to K$ is an isomorphism. Thus the functor $D^+(\text{Mod}_{A, T}) \to D^+_ T(A)$ is an equivalence with quasi-inverse given by $RH^0_ T$. $\square$ In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
2023-02-07 06:06:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9955897927284241, "perplexity": 96.88178560420155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500384.17/warc/CC-MAIN-20230207035749-20230207065749-00792.warc.gz"}
http://www.gradesaver.com/all-the-pretty-horses/q-and-a/why-does-john-gradys-grandfather-reflect-upon-the-laws-of-primogeniture-242053
Why does John Gradys grandfather reflect upon the laws of primogeniture? Why does he refer to the laws of being first born? Answers 1 Check out #3 in the link below: Source(s) http://kingraham.blogspot.ca/2012/02/all-pretty-horses-discussion-questions.html
2017-02-25 21:07:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621285557746887, "perplexity": 5980.533644455565}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00507-ip-10-171-10-108.ec2.internal.warc.gz"}
http://mathonline.wikidot.com/the-monotonicity-property-of-the-lebesgue-integral-of-simple
The Monotonicity Property of the Lebesgue Integral of Simple Functions # The Monotonicity Property of the Lebesgue Integral of Simple Functions Recall from The Linearity Property of the Lebesgue Integral of Simple Functions page that if $\varphi$ and $\psi$ are simple functions defined on a Lebesgue measurable set $E$ with $m(E) < \infty$ then for any $\alpha, \beta \in \mathbb{R}$ we have that: (1) \begin{align} \quad \int_E (\alpha \varphi + \beta \psi) = \alpha \int_E \varphi + \beta \int_E \psi \end{align} We will now show that the Lebesgue integral of simple functions also has a monotonicity property by first proving an important lemma. Lemma 1: Let $\varphi$ be a simple function defined on a Lebesgue measurable set $E$ with $m(E) < \infty$. If $\varphi(x) \geq 0$ for all $x \in E$ then $\displaystyle{\int_E \varphi \geq 0}$. • Proof: Let $\varphi$ be a simple function defined on a Lebesgue measurable set $E$ with $m(E) < \infty$ and let $\varphi(x)$ have canonical representation $\displaystyle{\varphi(x) = \sum_{k=1}^{n} a_k \chi_{E_k}(x)}$. Since $\varphi(x) \geq 0$ for all $x \in E$ we must have that $a_k \geq 0$ for each $k \in \{ 1, 2, ..., n \}$. Noting that the Lebesgue measurable of a set is always nonnegative we have that: (2) \begin{align} \quad \int_E \varphi = \sum_{k=1}^{n} a_k m(E_k) \geq 0 \quad \blacksquare \end{align} Theorem 2 (Monotonicity of the Lebesgue Integral for Simple Functions): Let $\varphi$ and $\psi$ be simple functions defined on a Lebesgue measurable set $E$ with $m(E) < \infty$. If $\varphi (x) \leq \psi(x)$ for all $x \in E$ then $\displaystyle{\int_E \varphi \leq \int_E \psi}$. • Let $\varphi$ and $\psi$ be simple functions defined on a Lebesgue measurable set $E$ with $m(E) < \infty$. Since $\varphi(x) \leq \psi(x)$ for all $x \in E$ we have that $\psi(x) - \varphi(x) \geq 0$ for all $x \in E$. By Lemma 1 this means that: (3) \begin{align} \quad \int_E (\psi - \varphi) \geq 0 \end{align} • And by the linearity of the Lebesgue integral of simple functions we have that: (4) \begin{align} \quad 0 \leq \int_E (\psi - \varphi) = \int_E \psi - \int_E \varphi \end{align} • Hence $\displaystyle{\int_E \varphi \leq \int_E \psi}$. $\blacksquare$ Theorem 3: Let $\varphi$ be a simple function defined on a Lebesgue measurable set $E$ with $m(E) < \infty$. Then $\displaystyle{\biggr \lvert \int_E \varphi \biggr \rvert \leq \int_E |\varphi|}$. • Proof: Let $\varphi$ be a simple function defined on a Lebesgue measurable set $E$ with $m(E) < \infty$. Then clearly $| \varphi |$ and $-|\varphi|$ are simple functions defined on $E$. Furthermore, $-| \varphi(x) | \leq \varphi(x) \leq | \varphi(x) |$ for all $x \in E$. So by Theorem 2: (5) \begin{align} \quad \int_E -|\varphi| \leq \int_E \varphi \leq \int_E | \varphi | \end{align} • By the linearity property of the Lebesgue integral of simple functions we have that: (6) \begin{align} \quad -\int_E |\varphi| \leq \int_E \varphi \leq \int_E | \varphi | \quad \Leftrightarrow \quad \biggr \lvert \int_E \varphi \biggr \rvert \leq \int_E |\varphi| \quad \blacksquare \end{align}
2018-04-26 15:15:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999521970748901, "perplexity": 148.0721462045612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948285.62/warc/CC-MAIN-20180426144615-20180426164615-00178.warc.gz"}
https://www.hackmath.net/en/math-problem/852?tag_id=100
# Medicament Same type of medicament produces a number of manufacturers in a variety of packages with different content of active substance. Pack 1: includes 60 pills of 600 mg of active substance per pack cost 9 Eur. Pack 2: includes 150 pills of 500 mg of active substance per pack cost 28.125 Eur. Which is the more cost effective medicament (type 1 or 2)? Result x =  1 #### Solution: $x_1 = \dfrac{ 60 \cdot 600 }{ 9 } = 4000\ mg/Eur \ \\ x_2 = \dfrac{ 150 \cdot 500 }{ 28.125 } = 2666.67\ mg/Eur \ \\ \ \\ x_1 > x_2$ Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you! Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...): Be the first to comment! ## Next similar math problems: 1. Pupil I'm a primary school pupil. I attended the exercises of parents with children 1/4 of my age, 1/3 for drawing, and 1/6 for flute. For the first three years of my life, I had no ring, and I never went to two rings at the same time. How old am I? 2. A number 2 A number decreased by the difference between four and the number Why does 1 3/4 + 2 9/10 equal 4.65? How do you solve this? 4. Sum of the digits How many are two-digit natural numbers that have the sum of the digits 9? 5. Temperature variations Today's temperature was 80 degrees, and then the temperature dropped 10 degrees. Then it dropped 15 degrees again, then the next day, the temperature went up 2 degrees. What would the temperature be? 6. Evaluate expression If x=2, y=-5 and z=3 what is the value of x-2y 7. Aquarium Try to estimate the weight of the water in an aquarium 50cm long 30cm wide, when poured to a height of 25cm. Calculates the weight of the aquarium's water. 8. Discount The new phone was discounted by 2800kč. After this discount, 5 phones cost 5530 CZK more than 3 phones before the discount. How much did the phone cost before the discount? 9. Playground The land for the construction of the school playground has the shape of a rectangle with a shorter side of 370 m. Its other side is 260 m longer. How many meters of the fence does the school need to buy on the fencing playground? 10. Evaluate 5 Evaluate expression x2−7x+12x−4 when x=−1 11. Temperature change 3 At 2 p. M. The temperature was 76 degrees Fahrenheit. At 8 p. M. The temperature was 58 degrees Fahrenheit. What was the change in temperature? 12. Startup Jaxon’s start up business makes a profit of $450 during the first month. However, the company records a profit of -$60 per month for the next four months and a profit of \$125 for the final month. What is the total profit for the first six months of Jaxon’ 13. Father and sons Father is 27 and his sons 2 and one year. In how many years will his sons sum up to half his age? 14. Evaluate - order of ops Evaluate the expression: 32+2[5×(24-6)]-48÷24 Pay attention to the order of operation including integers 15. Football field The soccer field may have a width of 45 meters. This is 45 meters less than the length of the course. What can be the length of the football field? 16. Calculate Calculate the square area if its perimeter is 14dm. 17. Family The twins Nina and Ema have five years younger brother Michal. Together have 43 years. How old is Michal?
2020-04-04 02:46:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.297330766916275, "perplexity": 1892.5990480157832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370519111.47/warc/CC-MAIN-20200404011558-20200404041558-00413.warc.gz"}
https://www.electro-tech-online.com/threads/cleaning-waste-motor-oils-for-oil-burning-type-boilers.153890/page-2
# Cleaning waste motor oils for oil burning type boilers #### large_ghostman ##### Well-Known Member Most Helpful Member Maybe I know some things you don't. Now that makes me wonder if HHO works I would be grateful if you could video your attempt and upload it, i am sure it would be of great educational value. #### JimB ##### Super Moderator Most Helpful Member I would be grateful if you could video your attempt and upload it, i am sure it would be of great educational value. What an excellent suggestion. Now that you mention it, how about some pictures of some of your technological adventures and exploits? As they like to say on a machining site which I visit from time to time... "Pictures, or it did not happen" Get that camera going LLG. JimB #### large_ghostman ##### Well-Known Member Most Helpful Member What an excellent suggestion. Now that you mention it, how about some pictures of some of your technological adventures and exploits? As they like to say on a machining site which I visit from time to time... "Pictures, or it did not happen" Get that camera going LLG. JimB Where you been? i do upload pics! But ok Jim tell you what, as this site has real low upload limits (not an insult but its a problem) and through uploding here means going from RAW to png to screen shot to jpeg, you choose which exploit you want most to see. If i havnt uploaded it yet i will down grade the pics and upload for you. Last time you asked (i did upload foy you after all the agro of doing that) i asked you a question based on the pics and datasheet i uploaded (was radio micro board and micro sized antenna question), you didnt bother replying to it, So this time you choose which exploit you want pics of, if i got them i will load tonight if not i will do them for you, you might consider then cutting me the same slack you do others. #### JimB ##### Super Moderator Most Helpful Member Last time you asked (i did upload foy you after all the agro of doing that) i asked you a question based on the pics and datasheet i uploaded (was radio micro board and micro sized antenna question), you didnt bother replying to it, I don't remember that, can you show me? JimB #### large_ghostman ##### Well-Known Member Most Helpful Member I don't remember that, can you show me? JimB I will look for it tomorrow, it was a while back. i remember it because i was well miffed spending all that time uploading and then you didnt answer the question i posed in response to yours!! Shouldnt be too hard to find, i think you posed a question on a transceiver board i wanted to use. So i posted some pics of the devs kits and chips i wanted to use (kitchen table i think), what is really annoying is i cant remember the question i asked, but i never did find the answer out!! I use a different chip set now, but the question would likely still stand as i do use similar boards still. While we are mentioning it. ANY chance we can get upload limit increased a bit? Modern DSLR cameras take pretty big pics even if you screen shot them. I am positive it puts people off posting more pics, it certainly does me. Yes you could shoot jpeg but then a screen shot downgrades it alot, or bring back some the old pic servers that were around Actually whats the limit of linked pics?. I might be able to upload to one my own sites and just link using the URL thing in the code HINT pick nickel stuff first, as i just found some of those pics of me dissolving it in ferric chlorides, shows how slow it is and whats left over. but you get the chloride Bio reactor stuff i have posted a few times, i have it working so would be before pics only and not ones showing how we connect the chambers. Chem pics of oil i am doing for this thread anyway. #### tcmtech ##### Banned Most Helpful Member Now that makes me wonder if HHO works I would be grateful if you could video your attempt and upload it, i am sure it would be of great educational value. I don't follow the connection or implication. Are you implying that other heat sources cant heat oil and that only electricity works to heat it, or that HHO could work for heating it? Your continued lack of direction and clarity in your comments make your understandings and intent hard to follow what you are going for way too often. As for documenting it I will likely take pictures and do a write up when I get to it someday. But for now it's on the lower end of the priorities list. As for video I don't do video and have no interest in ever starting to. #### Western ##### Member You won't learn it all in day but in a week of spare time you will be someplace comfortable on the basic concepts and shortly after that you will have a very solid idea of what you would need to make a system work for yourself. Ok, thanks. I've been reading up and trying to make sense of it all. Just some questions regarding your system. You showed your fuel nozzle and pre-heater earlier ... I gather the pre-heater needs to run continually ... you don't turn it off once the system is running ... and rely on the operating temperature within the heater? What temperature range do you aim for the oil arriving at the nozzle. With your system ... what wattage is your blower fan. Thanks. #### tcmtech ##### Banned Most Helpful Member You showed your fuel nozzle and pre-heater earlier ... I gather the pre-heater needs to run continually ... you don't turn it off once the system is running ... and rely on the operating temperature within the heater? Its controlled by the PID loop controller unit which keeps the nozzle around 300F. It's about 500 watts but cycles on and off in short bursts so the overall heating wattage is maybe half that. The burner and blower is just a common fuel oil furnace burner unit which is something like 1/6 Hp and draws maybe 100 - 150 watts. It could be way smaller if that motor didn't have to drive the little pump that supplies the nozzle with oil at 120 - 150 PSI. #### Western ##### Member Its controlled by the PID loop controller unit which keeps the nozzle around 300F. It's about 500 watts but cycles on and off in short bursts so the overall heating wattage is maybe half that. The burner and blower is just a common fuel oil furnace burner unit which is something like 1/6 Hp and draws maybe 100 - 150 watts. It could be way smaller if that motor didn't have to drive the little pump that supplies the nozzle with oil at 120 - 150 PSI. Thanks, that all helps. Problem with researching ... you come across a million different ways of doing things. #### tcmtech ##### Banned Most Helpful Member Problem with researching ... you come across a million different ways of doing things. Yep. that's the hard part. The initial learning curve is a bit overwhelming regarding narrowing down which parts of all that info best fit your needs and abilities plus numerous other variables that will show up as you start to flesh out your design. With DIY projects I tend to error towards the simple so that when things don't work exactly as planned I don't have much invested into things. For your application I think the simple liquid to air heat exchanger/radiator in your air conditioning system is probably the most realistic way to go just for low cost and simplicity of install sake on the plumbing end of things. The main thing to consider is the sitting down and putting hard numbers to the prices of the different ways you can go with the system. Just because something costs a lot does not mean it's best or that it's best for what you may need to do. Also, don't forget that some things, like with the Pex type tubing, you need a special crimping tool to put it together so either you need one you can borrow our you may be out some more money to have one of your own. #### shortbus= ##### Well-Known Member Also, don't forget that some things, like with the Pex type tubing, you need a special crimping tool to put it together so either you need one you can borrow our you may be out some more money to have one of your own. Wow those Pex tools are out of most peoples reach! Let's see now a guy is going to spend$1k or more to do the tr*vis heat system, but can't be able to afford a Pex crimper? https://www.homedepot.com/p/SharkBite-1-2-in-and-3-4-in-Dual-PEX-Copper-Crimp-Ring-Tool-23251/202270489 #### tcmtech ##### Banned Most Helpful Member Wow those Pex tools are out of most peoples reach! Let's see now a guy is going to spend$1k or more to do the tr*vis heat system, but can't be able to afford a Pex crimper? The point behind my comment was to think beyond just the cost of building the system and to look at any possible specialty tools or work you may need or just want a reason to justify buying for what components or materials you chose to work with. Ancillary costs add up fast so they need to be kept in mind. For example with my system where the old shop boiler is ~350 feet from the house where anyone, unlike me, who has to rent or hire a backhoe to trench in a underground lineset that long could have huge additional bill to go with the build. Mine cost me half a day and 10 - 15 gallons of fuel but to hire it done would have been $750 -$1000+ in my parts which in a low heat usage application would be very difficult to cost justify on its own. Getting rental backhoe (if you even know how to run one) for ~$200 -$300 a day, maybe. BTW, thanks for proving the point of my new tagline, already. #### large_ghostman ##### Well-Known Member Most Helpful Member I don't follow the connection or implication. Are you implying that other heat sources cant heat oil and that only electricity works to heat it, or that HHO could work for heating it? Your continued lack of direction and clarity in your comments make your understandings and intent hard to follow what you are going for way too often. As for documenting it I will likely take pictures and do a write up when I get to it someday. But for now it's on the lower end of the priorities list. As for video I don't do video and have no interest in ever starting to. Look it dosnt take a genius to know that cracking oil with a naked flame is just plain stupid, so your comment about knowing things i dont is just you being you, you not liking the fact others can spot your mistakes a mile off. The fact that when gaping holes are found you would rather fight a lost point than admit someone knew better than you, i suspect it erks even more when they are my age. Truth is i study this stuff at advanced level, my chemistry and labs skills have been honed in a lab almost my entire life, sometimes people forget my dad was a renowned scientist with a PhD, he also tried to keep our farm going. There are numurous threads where i make mistakes, sometimes even months later i spot one and fgo back and openly correct it and say i made a mistake, you might want to try it yourself. Dont play word games with me, i tend to really check my information and with these kind of subjects i happen to know my stuff, so dont expect me to back down just to save your face when your wrong. #### large_ghostman ##### Well-Known Member Most Helpful Member Thanks, that all helps. Problem with researching ... you come across a million different ways of doing things. Keep in mind when you centrifuge waste oil you get the thinner fraction, i get the feeling the oil is much cleaner than TCM is using. I use a higher pressure pump than standard and a slightly bigger jet, but my preheat sounds very different to his. I had a show weekend, its lead to alot of work this week, i should be back properly Friday and as far as I know no shows next weekend. So i should finally get all the info published for you, one thing i did do and recommend you do, get the boiler flu gas tested yearly as part of a service. What you can get away with in the states is not always the same here or where you are, there can be a high financial penalty for not complying to or for breaking rules, and this dosnt take into account the toxic and sometimes deadly fumes you can create and put into the house or your environment. Most EU based energy sites that advocate waste oil systems, all mention the horrors of carbon monoxide or other gases we cant know about. Never forgot your burning a mix of things, it stuff that gets thrown into waste oil barrels. An example is one source i cant/wont use, the garage has a bodywork shop in it and they dump waste solvents in the waste oil barrels. It would be stupid to risk it as they tend to have a high ratio of solvent to oil. How do i know? Gas chromatograph, but in this case you dont need one, they use so much of it you can smell it in the oil. I wont be posting alternative energy stuff here any more, the threads get swamped with nonsense. You will know where to find the real info, its your choice what you do. So far i have always backed every claim with decent scientific evidence and not hill billy rhetoric. I will hand over the renewable section to the resident expert until Darwin makes some room. #### tcmtech ##### Banned Most Helpful Member Look it dosnt take a genius to know that cracking oil with a naked flame is just plain stupid, so your comment about knowing things i dont is just you being you, you not liking the fact others can spot your mistakes a mile off. The fact that when gaping holes are found you would rather fight a lost point than admit someone knew better than you, i suspect it erks even more when they are my age. Where did I ever say I was using and open flame and doing it directly? Great and incorrect over assumptions on your part does not make my knowledge and understandings invalid. And, BTW, do a bit of research on your own before claiming false authority on something. Direct flame cracking is used everywhere for crude and used oil distillation/fractionation systems from the huge industrial scale down to the micro DIY level. Youtube even has a number of videos on DIYers and micro commercial rigs that use direct flame heating methods which rather proves who really has the 'gaping holes' in their knowledge and experience base and on more than one level at that. Truth is i study this stuff at advanced level, my chemistry and labs skills have been honed in a lab almost my entire life, Yes, your whole 17 year life, (I have clothing that's older and more experienced than you.) of which you have only spent a small part of it learning and experiencing anything in the adult real world level. Come back in 15 years after your out of college and had a decade of real world applications work and tell me what you know. Odds are it will be a lot more than you do now. #### tcmtech ##### Banned Most Helpful Member Keep in mind when you centrifuge waste oil you get the thinner fraction, i get the feeling the oil is much cleaner than TCM is using. I use a higher pressure pump than standard and a slightly bigger jet, but my preheat sounds very different to his. I tend to run raw oil that's just been passed through a 20 - 50 micron filter. No need to over complicate a simple process if its not really needed. And yes. there are a number of ways to burn the stuff. I like my method but other may have their own way of doing things too. Most EU based energy sites that advocate waste oil systems, all mention the horrors of carbon monoxide or other gases we cant know about. Rather why they get used as outdoor or in shop or commercial type heat systems rather than in house home heating. Its why I have my boilers in low occupancy hell vented locations like my small work shed and the old shop where venting the places out is of no issue should there be a excessive smoke back problem. Same reason I recommend going full outdoor boiler designs as well where the mess and smoke issues are of little to no real concern unless you are doing something extremely bad in highly unusual conditions. Although realistically if the design is solid and sealed properly plus has its intake air coming from outside and the exhaust vented high enough to pass any relevant codes a common wood burning systems would require the realistic chance of CO or other such combustion gas poisoning is extremely low. Badly designed and poorly maintained worst case scenarios do not represent the bulk of reality no matter how bad someone what to push that narative. Never forgot your burning a mix of things, it stuff that gets thrown into waste oil barrels. An example is one source i cant/wont use, the garage has a bodywork shop in it and they dump waste solvents in the waste oil barrels. It would be stupid to risk it as they tend to have a high ratio of solvent to oil. How do i know? Gas chromatograph, but in this case you dont need one, they use so much of it you can smell it in the oil. Quite true. Unless you know how to deal with high solvent/gasoline/high flammability low viscosity mystery mixes it can cause problems. That was largely why I developed mine to work as it does. I have ran mixes of 50 - 60% old gasoline without issue simply because the preheating temperature can be adjusted to keep the nozzle flow rates in the correct range as ot not cause overfiring and other related incomplete combustion processes from running away. Something that standard air atomizing nozzle based used oil burners have very hard time coping with safely. I wont be posting alternative energy stuff here any more, the threads get swamped with nonsense. If I was you I would start by doing my research far better and backing my claims up with actual verifiable data and even extended real world hands on experiments wasn't just a controlled lab environment to slow that problem down a bit. I've tried to set a good example adn even a moderator or two have pointed out that flaw in your supperont your claims, What an excellent suggestion. Now that you mention it, how about some pictures of some of your technological adventures and exploits? As they like to say on a machining site which I visit from time to time... "Pictures, or it did not happen" Get that camera going LLG. JimB but so far you haven't gotten the hang of it. Also, claiming anyone who disagrees with you (by their bringing verifiable facts, evidenced experience to support themself while refuting you) to be liar really doesn't help you on your credibility either. You may not like my methods or approaches or anything else, that doesn't fit your self proclaimed superior moral narratives, but the reality is I have actual threads that show what sort of work I do that are based on real world experience I started gaining well before you were born and some of those threads showcasing that knowledge have been here on this forum longer than you have by a wide margin as well. I may not be the world's top expert on custom built multi fuel boilers but I do know that my first one was built about the same time you were born and I have been on a ever increasing pursuit of knowledge and experience to make it better, more reliable, more efficient and safer the whole time and now because of that I can do things safely and reliably that people like you are sure are wildly dangerous to outright criminal even though they are in fact not. I simply have a working knowledge, experience and skill set base to work with you don't, yet. Like you I want to make the world better for my existence however I don't make wild unfounded unsupportable claims to know things I clearly don't have true well founded working knowledge, experience and understanding of while using false claims to shut down anyone else who can show they know and understand something I don't. I give what I can and leave the few who may ever look at my work to decide if it's right for them or not based on their moral imperatives and not anyone else's, especially mine. #### unclejed613 ##### Well-Known Member Most Helpful Member The main thing to consider is the sitting down and putting hard numbers to the prices of the different ways you can go with the system. Just because something costs a lot does not mean it's best or that it's best for what you may need to do. there's a lot to be said for simple solutions. somebody once told me about a US contractor that came up with an expensive "cannon" that shot BBs at hundreds of thousands of feet per second. after the cold war ended the contractor went and saw a russian engineer that used a mailing tube filled with solid rocket propellant, and a block of explosives (to the tune of about $5 worth of materials) to achieve the same results. btw, i think the US contractor's experimental setup in the story is the hypervelocity test facility run by NASA. https://www.nasa.gov/centers/wstf/laboratories/hypervelocity/gasguns.html #### large_ghostman ##### Well-Known Member Most Helpful Member Direct flame cracking is used everywhere for crude and used oil distillation/fractionation systems from the huge industrial scale down to the micro DIY level Yes and where you live its considered reasonable for any novice to make high explosives and set them off with next to zero knowledge. Direct flame cracking on a small scale is not safe, on a large scale its being phased out. Where did I ever say I was using and open flame and doing it directly? Your only mention is having plenty of BTU's at your disposal and references to used tires etc. The fact you mention open flame cracking i would consider proves my point your thinking of open flame. Other countries read this forum, the advice your giving pertains mainly to one part of the world. Your information is extremely dangerous and while you may care, i certainly care if people think its perfectly safe to do just because you said it was. I regularly distill petrol for pet ether, and make Diethyl ether. I use laboratory equipment in a proper lab setting with alot of safety precautions, i wouldnt give details on how to do it with a steam bath let alone a open flame or even a hotplate. Rather why they get used as outdoor or in shop or commercial type heat systems rather than in house home heating. Its why I have my boilers in low occupancy hell vented locations like my small work shed and the old shop where venting the places out is of no issue should there be a excessive smoke back problem. Same reason I recommend going full outdoor boiler designs as well where the mess and smoke issues are of little to no real concern unless you are doing something extremely bad in highly unusual conditions. Again another big difference between where you live (which is almost third world in its approach to environmental issues) and the rest of us, its better to clean an oil to the point it dosnt need to be out doors to be safe or meet regs, rather than just take a lazy dont care attitude and spew toxins everywhere. but so far you haven't gotten the hang of it. Also, claiming anyone who disagrees with you (by their bringing verifiable facts, evidenced experience to support themself while refuting you) to be liar really doesn't help you on your credibility either. You more than anyone should be aware i am ALWAYS able to back what i say with high quality scientific papers from quality journals, i unlike yourself do not rely on random websites with no credibility. I havnt called you a liar or suggested you are one, other words spring to mind way before those ones, but people can judge on what is posted to back what you say. If I was you I would start by doing my research far better and backing my claims up with actual verifiable data and even extended real world hands on experiments wasn't just a controlled lab environment to slow that problem down a bit. I tried all that in the begging by posting papers in other threads, it was clear you were unable to understand the content. So rather than try and make you look silly i stopped posting them, in one thread i posted so much evidence that you still refuted with nothing more than your opinion, it got to the point even the mods asked i didnt post so much reference material (you were starting to look very silly). So i would advise you not to encourage me to going back to posting material you do not understand. I notice you mention my age yet again, i am sure you think this is some kind of trigger for me, i can assure you it isnt. I am proud of my age, i am proud that despite not doing a degree first i was excepted on and i am doing extremely well on a post undergraduate degree, i was allowed to do this because of the depth of both ability and knowledge i displayed to a panel of highly educated people. Those that actually know me, and indeed one well respected member on here who is in business with me, will tell you that my attention to detail and research skills are way above normal. That might sound arrogant and to others i am sorry if it does, but the point is when i post on these types of subjects i know what i am talking about. As for real life experience, i have designed (and i am still doing so) systems that are in use. One particular system is being evaluated as a local authority blue print system. In other words its a design likely to be put on the list of systems the Scottish government approve local authorities to use. Anyone can burn oil or make methane, but try and so it efficiently,safely and within all MODERN legal frameworks. @ everyone else I am sorry for the above, i tried in the beginning, in other threads to provide decent back up references. Even when it became clear who was working from facts and who was not, the other poster is insecure and will not admit it. I dont mind that at all, my concern and the sole reason i have posted so strongly this time, i do care that many people who never register here but use the information, are given the correct information. In future i will post back up evidence for quality sources, i wont bother responding to links from random websites or pure speculation. If a point is raised and i ignore it but you want clarification, then please do ask and i will do my best to go through it. It matters to me that people do things correctly and dont do things that are dangerous or break laws. I am also like many others and strongly believe in not turning the planet we live on into a soup of toxic waste when there is a better way. Most chop and change things or whatever you want to do, or see this as a post where i am simply squeezing pus out of a spot. There wont be many more posts like this, i have made my points clear nd done my best to be reasonable. If the other person posts anything of value in these threads i will answer it, if its just the normal random rubbish i will ignore it. #### tcmtech ##### Banned Most Helpful Member there's a lot to be said for simple solutions. somebody once told me about a US contractor that came up with an expensive "cannon" that shot BBs at hundreds of thousands of feet per second. after the cold war ended the contractor went and saw a russian engineer that used a mailing tube filled with solid rocket propellant, and a block of explosives (to the tune of about$5 worth of materials) to achieve the same results. btw, i think the US contractor's experimental setup in the story is the hypervelocity test facility run by NASA. https://www.nasa.gov/centers/wstf/laboratories/hypervelocity/gasguns.html Kind of like the pile of money spent to make a erasable pen that wrote in zero G and the Russians just used pencils. That's why I try to make a point of pushing the sitting down and doing hard planning and any amount of research into some new project anyone is going to consider taking on. Too often high tech and spendy can be done cheaply and relatively easily if a bit of thinking, and asking the right questions in the right crowds, is done ahead of time. No rational planning and asking the wrong crowd will get you nowhere you aren't already at. Same concept with assuming that just because you or someone else doesn't know or understand something that it automatically means nobody else does or ever will either. I don't want to talk to the 50 people who say I can't do something, because they don't see how it can be done by their knowledge base or standards . I want to hear from the one person who says they are all wrong and can prove it, because they have been doing for so long it's old news to them. #### unclejed613 ##### Well-Known Member Most Helpful Member as a history nut, i get to see a lot of things like that, like the MiG-25 using vacuum tube electronics to counter EMP, or, in WWII, the germans spent tons of money and research on the V2 rocket at one end of the spectrum, versus a small aircraft with a ramjet (which is just a hollow tube) that cost about \$750 to make (the V1). both of them were high tech innovations, but the V1 was super simple and cheap, and had the same purpose. often, cheap also means not as much flexibility in use, but for a single purpose use, you don't need the flexibility. the russian hypervelocity experiment could only work with ball bearings, while NASAs hypervelocity gun can fire a lot of different projectiles. if you look at some of the gas engine books from the very early 20th century, a lot of the big industrial engines could run on just about anything in liquid or gaseous state that was flammable, and since producer gas could be made from coal, or wood gas from wood, they literally could run on any fuel. Loading
2019-06-27 11:16:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25041425228118896, "perplexity": 1627.1752523003952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001089.83/warc/CC-MAIN-20190627095649-20190627121649-00206.warc.gz"}
http://www.anvari.org/fortune/Mav_Flame/383_linux-is-ir-ir-of-course-is-a-form-of-hypereviscerated-reiyk.html
# Linux Is Ir. Ir, Of Course, Is A Form Of Hypereviscerated Reiyk. Linux is Ir. Ir, of course, is a form of hypereviscerated Reiyk. Of course, having been at least incepted in Suomi, the influence of Pohjola is obvious to any but the most subnegated of birdwatchers. Suomi being spelled (and, of course, pronounced) with a bilabial nasal, care must be taken to not disrespect and otherwise impugn or misrepresent the cerebral alveodental nasal which is inherent in the prolixifications of Linux which are of less pecuniary propensity than others. -- Marc A. Volovic on linux-il, 14 Dec, 2000
2013-05-24 21:33:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8711744546890259, "perplexity": 14337.441920771376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705069221/warc/CC-MAIN-20130516115109-00087-ip-10-60-113-184.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/360823/solving-recursion-of-a-complex-function
# Solving recursion of a complex function I am trying to find a closed form formula for the following recursive function: $$f_n(h)= \sum_{i=1}^{n-1} \binom{n-2}{i-1} \cdot (0.5)^{n-2} \cdot [ (f_{n-i}(h-1)\cdot \sum_{j=0}^{h-1}f_i(j)) + (f_{i}(h-1)\cdot \sum_{j=0}^{h-2} f_{n-i}(j))]$$ The base cases are the following: $$f_1(h)= \begin{cases} 1 & h=0 \\ 0 & otherwise \end{cases} \\ f_2(h)= \begin{cases} 1 & h=1\\ 0 & otherwise \end{cases}$$ I have been trying to use the generating functions technique, but I have been unsuccessful so far and I was wondering if anyone has suggestions into how to solve this problem. Thank you for your help in advance Edit: I added the base cases • If we start with $f_i(0)=1=g(0)$ we get all $f_i(h)=g(h)$. Then we get a simpler recurrence relation $$g(h+1)=2g(h)[\sum_{i=0}^{h} g(i)]$$. From this we can make a differential equation $\frac{\text{d}^2\text{ln}(g(x))}{\text{d}x^2}=2g(x)$. May 20, 2020 at 4:26 • Sorry, I forgot to add the base cases when I posted the question. May 20, 2020 at 5:35 Define $$g_k(m) := \sum_{j=0}^m f_k(j)$$. Then the given recurrence becomes $$\begin{split} g_n(h)-g_n(h-1) &= 0.5^{n-2} \sum_{i=1}^{n-1}\binom{n-2}{i-1} [(g_{n-i}(h-1)-g_{n-i}(h-2))g_i(h-1)+(g_{i}(h-1)-g_{i}(h-2))g_{n-i}(h-2)] \\ &=0.5^{n-2} \sum_{i=1}^{n-1}\binom{n-2}{i-1} [(g_{n-i}(h-1)g_i(h-1)-g_{i}(h-2)g_{n-i}(h-2)]. \end{split}$$ Consider the generating function $$G_h(x) := \sum_{n\geq 1} g_n(h) \frac{x^{n-1}}{(n-1)!}.$$ The initial conditions imply that $$G_1(x)=1+x$$ and $$G_2(x)=1+x+\frac{x^2}2+\frac{x^3}{12}$$. Then the recurrence takes form: $$G_h'(x) - G_{h-1}'(x) = G_{h-1}(x/2)^2 - G_{h-2}(x/2)^2$$ or $$G_h'(x) - G_{h-1}(x/2)^2 = G_{h-1}'(x) - G_{h-2}(x/2)^2.$$ Unrolling the last recurrence, we get that for any $$h\geq 2$$ $$G_h'(x) - G_{h-1}(x/2)^2 = G_{2}'(x) - G_{1}(x/2)^2=0.$$ That is, $$G_h'(x) = G_{h-1}(x/2)^2.$$ It seems that there is no simple expression for the solution to this recurrence, although we may notice that $$\lim_{h\to\infty} G_h(x)=e^x$$. P.S. For a fixed $$h$$, the generating function for $$f_n(h)$$ can be expressed as $$\sum_{n\geq 1} f_n(h) \frac{x^{n-1}}{(n-1)!} = G_h(x)-G_{h-1}(x).$$ • Thank you for your time and clear explanation. I just have a question regarding $$G_h'(x) - G_{h-1}'(x) = G_{h-1}(x/2)^2 - G_{h-2}(x/2)^2.$$ From the lhs I get $$\sum_{n\geq 1} (g_n(h)-g_n(h-1) ) \frac{x^{n-2}}{(n-2)!}$$ assuming $$G_h'(x) = \frac{d(G_h(x))}{dx}$$ and I am not too sure how this equals to $$G_{h-1}(x/2)^2 - G_{h-2}(x/2)^2$$. May 20, 2020 at 18:16 • Not sure if this is usable, but for the generating function ${\mathcal G}(x,z):=\sum_hG_h(x)z^h$ from a formula for Hadamard product your relations give$$\frac\partial{\partial x}{\mathcal G}(x,z)=z\int_0^1{\mathcal G}(\frac x2,\sqrt ze^{2\pi it}){\mathcal G}(\frac x2,\sqrt ze^{-2\pi it})dt$$ May 20, 2020 at 19:50 • @KokoNanahji: This is just an application of the formula for the product of two exponential generating functions. May 20, 2020 at 23:53 • Ok, sounds good. Thank you very much for your help May 21, 2020 at 0:57 • @მამუკაჯიბლაძე: Good point, thanks! May 21, 2020 at 3:05
2022-10-04 23:55:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8543666005134583, "perplexity": 123.20320289726662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00640.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/dcds.2017121
Article Contents Article Contents # Monotonicity and uniqueness of wave profiles for a three components lattice dynamical system This work was partially supported by the Ministry of Science and Technology of the Republic of China under the grant 105-2115-M-005-002. The author would like to thank the referees for valuable comments. • We consider a three components lattice dynamical system which arises in the study of a three species competition model. It is assumed that two weaker species have different preferences of food and the third stronger competitor has both preferences of food. Under this assumption, it is well-known that there is the minimal speed such that a traveling wave solution exists for any speed above this minimal one. In this paper, we prove the monotonicity of wave profiles and the uniqueness (up to translations) of wave profiles for each given admissible speed under certain restrictions on parameters. Mathematics Subject Classification: Primary: 34K05, 34A34; Secondary: 34K60, 34E05. Citation: • [1] J. Carr and A. Chmaj, Uniqueness of travelling waves for nonlocal monostable equations, Proc. Amer. Math. Soc., 132 (2004), 2433-2439. [2] X. Chen and J.-S. Guo, Existence and asymptotic stability of travelling waves of discrete quasilinear monostable equations, J. Diff. Eqns., 184 (2002), 549-569. [3] X. Chen and J.-S. Guo, Uniqueness and existence of travelling waves for discrete quasilinear monostable dynamics, Math. Ann., 326 (2003), 123-146. [4] X. Chen, S. -C. Fu and J. -S. Guo, Uniqueness and asymptotics of traveling waves of monostable dynamics on lattices, SIAM J. Math. Anal. , 38 (2006), 233-258. [5] S. -N. Chow, Lattice dynamical systems, in J. W. Macki, P. Zecca (Eds. ), Dynamical Systems, Lecture Notes in Mathematics, Springer, Berlin, 1822 (2003), 1–102. [6] S.-N. Chow, J. Mallet-Paret and W. Shen, Traveling waves in lattice dynamical systems, J. Differential Equations, 149 (1998), 248-291. [7] P. C. Fife, Mathematical Aspects of Reacting and Diffusing Systems Lecture Notes in Biomathematics 28, Springer Verlag, 1979. [8] J.-S. Guo and F. Hamel, Front propagation for discrete periodic monostable equations, Math. Ann., 335 (2006), 489-525. [9] J.-S. Guo, Y. Wang, C.-H. Wu and C.-C. Wu, The minimal speed of traveling wave solutions for a diffusive three species competition system, Taiwanese J. Math., 19 (2015), 1805-1829. [10] J.-S. Guo and C.-H. Wu, Existence and uniqueness of traveling waves for a monostable 2-D lattice dynamical system, Osaka J. Math., 45 (2008), 327-346. [11] J.-S. Guo and C.-H. Wu, Wave propagation for a two-component lattice dynamical system arising in strong competition models, J. Differential Equations, 250 (2011), 3504-3533. [12] J.-S. Guo and C.-H. Wu, Traveling wave front for a two-component lattice dynamical system arising in competition models, J. Differential Equations, 252 (2012), 4357-4391.  doi: 10.1016/j.jde.2012.01.009. [13] J. Mallet-Paret, Traveling waves in spatially discrete dynamical systems of diffusive type, in: J. W. Macki, P. Zecca (Eds. ), Dynamical Systems, Lecture Notes in Mathematics, Springer, Berlin, 1822 (2003), 231–298 doi: 10.1007/978-3-540-45204-1_4. [14] E. Renshaw, Modelling Biological Populations in Space and Time, Cambridge University Press, Cambridge, 1991. doi: 10.1017/CBO9780511624094. [15] B. Shorrocks and I. R. Swingland, Living in a Patch Environment, Oxford University Press, New York, 1990.
2023-03-22 08:46:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.5917425155639648, "perplexity": 1208.0808971901797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00176.warc.gz"}
https://zbmath.org/?q=an%3A0943.65098
## Implicit, compact, linearized $$\theta$$-methods with factorization for multidimensional reaction-diffusion equations.(English)Zbl 0943.65098 Summary: An iterative predictor-corrector technique for the elimination of the approximate factorization errors which result from the factorization of implicit, three-point compact, linearized $$\theta$$-methods in multidimensional reaction-diffusion equations is proposed, and its convergence and linear stability are analyzed. Four compact, approximate factorization techniques which do not account for the approximate factorization errors and which involve three-point stencils for each one-dimensional operator are developed. The first technique uses the full Jacobian matrix of the reaction terms, requires the inversion of, in general, dense matrices, and its approximate factorization errors are second-order accurate in time. The second and third methods approximate the Jacobian matrix by diagonal or triangular ones which are easily inverted but their approximate factorization errors are, however, first-order accurate in time. The fourth approximately factorized, compact, implicit method has approximate factorization errors which are second-order accurate in time and requires the inversion of lower and upper triangular matrices. The techniques are applied to a nonlinear, two-species, two-dimensional system of reaction-diffusion equations in order to determine the approximate factorization errors and those resulting from the approximations to the Jacobian matrix as functions of the allocation of the reaction terms, space and time. ### MSC: 65M06 Finite difference methods for initial value and initial-boundary value problems involving PDEs 65M12 Stability and convergence of numerical methods for initial value and initial-boundary value problems involving PDEs 65M15 Error bounds for initial value and initial-boundary value problems involving PDEs 35K57 Reaction-diffusion equations Full Text:
2022-10-06 14:59:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7235074043273926, "perplexity": 867.7335091098872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00382.warc.gz"}
http://physics.stackexchange.com/questions/62131/combining-metric-tensors-curvature-tensors
Combining metric tensors/curvature tensors I was thinking about the following scenario: Consider a particle which causes a metric $g_{\mu\nu}$ on an otherwise Minkowski spacetime (or any manifold). Now, consider another particle, somewhere in the vicinity of the first particle, which causes a metric $h_{\mu\nu}$ on a spacetime which would have been Minkowski if not for these two particles. Then, what what would the metric in the vicinity of these two points be? I am guessing that it is: $$(g_{\mu\nu}-\eta_{\mu\nu})+(h_{\mu\nu}-\eta_{\mu\nu}) + \eta_{\mu\nu} = g_{\mu\nu}+h_{\mu\nu} - \eta_{\mu\nu}$$ Also, does the Riemann Curvature Tensor $R_{\mu\nu\rho}^\sigma$add up directly? I don't think it should because the Einstein tensor $G_{\mu\nu}$ does (I think) and it is dependent on the Ricci Curvature AND the spacetime metric tensor. - You can do this addition at weak field, so long as your coordinates make the metric nonsingular (rectangular coordinates as the unperturbed metric), and add the usually negligible corrections perturbatively. –  Ron Maimon Aug 22 '13 at 23:57 Unlike classical electromagnetism, General Relativity is highly nonlinear--this means that the gravitational field can serve as its own source. A consequence of this fact is that fields decidedly do not superpose, and you can get all sorts of effects even from vacuum relativity. The most notable of these effects are things such as Brill waves and Geons, where gravitational waves collide or collapse to form black holes. You can work out solutions where this happens even when the spaces initially are empty outside of the two regions before overlap occurs. -
2014-07-24 17:03:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9044435024261475, "perplexity": 291.38552247188477}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997889455.65/warc/CC-MAIN-20140722025809-00216-ip-10-33-131-23.ec2.internal.warc.gz"}
https://codereview.stackexchange.com/questions/80190/malloc-free-realloc-using-brk-and-sbrk/87537#87537
# malloc(), free(), realloc() using brk() and sbrk() I recoded malloc() by using brk() and sbrk(). I just want some "reviews" to see if it is possible to improve my code to make it faster and better. If I'm doing some tests like "ls -Rla /" it takes a lot longer than the original ls, and if I do with "sh -c", it's way longer than the first one. malloc.c #include "../inc/malloc.h" t_list *g_list; void *malloc(size_t size) { static int i = 0; if (size == 0) return (NULL); size = (size - 1) / 4 * 4 + 4; { re_init_list(); } { printf("Error : sbrk() failed\n"); return (NULL); } if (i == 0) g_list = NULL; ++i; } void *find_block(size_t size) { if (g_list == NULL) return (NULL); if (g_list->is_used == UNUSED && size <= g_list->size) { g_list->is_used = USED; } { if (g_list->is_used == UNUSED && size <= g_list->size) { g_list->is_used = USED; } g_list = g_list->next; } re_init_list(); return (NULL); } realloc.c #include "../inc/malloc.h" extern t_list *g_list; void *realloc(void *ptr, size_t size) { void *cpy; size_t ptr_size; if (size == 0 && ptr != NULL) { free(ptr); return (ptr); } else if (ptr == NULL || is_in_list(ptr) == 1) ptr = malloc(size); else { ptr_size = get_size(ptr); if (ptr_size == size) return (ptr); cpy = malloc(size); if (size < ptr_size) memcpy(cpy, ptr, size); else memcpy(cpy, ptr, ptr_size); free(ptr); return (cpy); } return (ptr); } int is_in_list(void *ptr) { t_list *tmp; tmp = g_list; tmp = tmp->next; return (1); return (0); } size_t get_size(void *ptr) { t_list *tmp; tmp = g_list; tmp = tmp->next; return (tmp->size); } free.c #include "../inc/malloc.h" extern t_list *g_list; void free(void *ptr) { if (ptr == NULL) return ; if (is_in_list(ptr) == 1) return ; g_list = g_list->next; if (g_list->is_used == UNUSED) return ; g_list->is_used = UNUSED; { if (g_list->next->is_used == UNUSED && { g_list->size += g_list->next->size; g_list->next = g_list->next->next; } } re_init_list(); } list.c #include "../inc/malloc.h" extern t_list *g_list; void put_in_list(t_list **list, size_t size, void *addr) { t_list *tmp; tmp = sbrk(sizeof(*tmp)); if (tmp == (void *)-1) { printf("Error : sbrk() failed\n"); return ; } tmp->size = size; tmp->is_used = USED; if (*list == NULL) else { tmp->next = *list; if (tmp->next) tmp->next->prev = tmp; } *list = tmp; make_circle(list); } void make_circle(t_list **list) { t_list *tmp; tmp = *list; (*list) = (*list)->next; (*list)->next = tmp; (*list)->next->prev = *list; while ((*list) != tmp) *list = (*list)->next; } void re_init_list() { g_list = g_list->next; g_list = g_list->next; } malloc.h #ifndef MALLOC_H_ # define MALLOC_H_ # include <unistd.h> # include <string.h> # include <stdio.h> # define UNUSED 0 # define USED 1 typedef struct s_list { size_t size; int is_used; struct s_list *prev; struct s_list *next; } t_list; void *malloc(size_t size); void put_in_list(t_list **list, size_t size, void *addr); void free(void *ptr); void make_circle(t_list **list); void show_alloc_mem(); void *realloc(void *ptr, size_t size); size_t get_size(void *ptr); void re_init_list(); void *find_block(size_t size); int is_in_list(void *ptr); #endif /* !MALLOC_H_ */ ### 1. Performance This memory management implementation maintains a single doubly-linked list of memory blocks. The main causes of the performance problems are as follows: 1. When malloc is called, the global pointer to the list is needlessly updated (see §2.18 below) and then the whole list is traversed in order to find the start of the list again. 2. Allocated blocks are not removed from the list, so malloc has to uselessly examine all the allocated blocks each time it is called. 3. When free is called, the whole list might need to be traversed in order to find the block containing the freed memory. The result is that every memory operation might need to look at all the memory blocks. Any program using this implementation therefore runs in quadratic time (or worse). To fix these problems: 1. Don't update the global pointer, use a local variable to remember the position in the list. (See §2.18.) 2. Remove blocks from the list when they are allocated and insert them when freed. (Thus making the list into a free block chain.) 3. Design a mechanism that finds the s_list structure corresponding to an allocated address in constant time. For example, if each s_list structure is placed in memory immediately below the allocated block, then it can be found by subtracting sizeof(s_list) from the freed address. The result will be better, but still won't be all that good, because of the following problems: 1. There's no segregation of blocks, so no quick way of finding a free block of the requested size. 2. The list structures are large (six words), so a program that allocates many small objects will suffer from internal fragmentation. ### 2. Review I just reviewed malloc.c and malloc.h. 1. The code is not thread-safe so cannot be used in multi-threaded programs. 2. The global variable g_list needs a comment. What is this? 3. This line needs explanation: size = (size - 1) / 4 * 4 + 4; Presumably the intention is to align size up to the next multiple of 4. But you should make that clear with a comment. 4. Aligning upwards to a multiple of a power of 2 is better done like this: /* Align upwards to next multiple of 4. */ size = (size + 3) & ~3; This has two arithmetic operations instead of four. 5. Where does the number 4 come from? A constant like this needs a name. Presumably it's the maximum required alignment for any object that might be allocated with malloc, so you need something like: /* Alignment of allocated addresses, in bytes. */ #define ALIGNMENT (4) 6. It's unlikely that the alignment requirement is actually 4. On x86-64, long, double, and pointer types should be 8-byte aligned. To make the code portable, you probably want something like: /* Alignment of allocated addresses, in bytes. */ #define ALIGNMENT sizeof(void *) 7. sbrk is a "LEGACY" interface according to POSIX: that is, it should be avoided in new programs. In addition: The behaviour of brk() and sbrk() is unspecified if an application also uses any other memory functions (such as malloc(), mmap(), free()). Other functions may use these other memory functions silently. The modern way to ask a Unix operating system to give you a range of virtual memory addresses is to call mmap, passing MAP_ANONYMOUS (on Linux) or MAP_ANON (on BSD): void *addr = mmap(0, size, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); /* handle the error */ } mmap allocates memory in units of pages, which are typically 4 KB in size. So a malloc implementation needs to map memory in page-sized (or larger) chunks, and then split the chunks up as needed. 8. The constant (void *)-1 needs a name. I would write: #define SBRK_FAILED ((void *)-1) 9. If sbrk fails then malloc prints an error message to standard output. This is a bad idea. It's not the job of malloc to output error messages: it should just return NULL and let the caller handle the error. But if you are going to emit an error message, it should go to the standard error stream, not standard output. 10. The logic for initializing g_list is in malloc: if (i == 0) g_list = NULL; but this is pointless, since you could have just initialized g_list = NULL; in the first place. 11. Each time malloc is called, the variable i is incremented. But i is an int, which on many systems has a maximum value of 2,147,483,647. So after this many allocations, there will be a signed integer overflow, which has undefined behaviour. Better to just set i = 1, which can't go wrong like that. 12. The functions that are not defined by POSIX (find_block, re_init_list, etc.) need comments explaining what they do. 13. The data structure s_list needs comments explaining the meaning of each of its members. 14. The declaration extern t_list *g_list; should go in the header, so that you don't have to repeat it in each source file. 15. It would be better to #include <stdlib.h> to get the standard prototypes for malloc, realloc and free. 16. If you're going to define malloc, realloc and free, then you should define calloc too, otherwise a program might call the calloc from the standard C library and then pass the pointer to your free. 17. In C, the number 0 tests false and any other number tests true. So there's no need to define constants UNUSED and USED, or to compare against these. You can just write: if (g_list->is_used && size <= g_list->size) And similarly: while(!g_list->head) 18. The loop in find_block updates the global value of g_list as it searches the linked list. Then you call re_init_list to set g_list back to the start of the list again. So why did you update it in the first place?? It would be better to use a local variable to remember your position in the list: t_list *cur = g_list; /* current position in the list */ do { if (!cur->is_used && size <= cur->size) { cur->is_used = 1; Notice that in this version of the code we don't have to consult head at all. If you made similar changes throughout then you'd be able to get rid of the head member of the stucture.
2021-12-07 10:08:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24567900598049164, "perplexity": 7104.511399135908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363337.27/warc/CC-MAIN-20211207075308-20211207105308-00365.warc.gz"}
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2281
## On a Cardinality Constrained Multicriteria Knapsack Problem • We consider a variant of a knapsack problem with a fixed cardinality constraint. There are three objective functions to be optimized: one real-valued and two integer-valued objectives. We show that this problem can be solved efficiently by a local search. The algorithm utilizes connectedness of a subset of feasible solutions and has optimal run-time. ### Additional Services Author: Florian Seipp, Stefan Ruzika, Luis Paquete urn:nbn:de:hbz:386-kluedo-16817 Report in Wirtschaftsmathematik (WIMA Report) (133) Preprint English 2011 2011 Technische Universität Kaiserslautern Knapsack problem ; combinatorial optimization ; connectedness ; local search algorithm; multicriteria optimization Fachbereich Mathematik 510 Mathematik $Rev: 12793$
2014-03-10 08:42:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2974572777748108, "perplexity": 3796.505878835801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010707300/warc/CC-MAIN-20140305091147-00048-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.mathworks.com/help/comm/ref/comm.turbodecoder-system-object.html
# comm.TurboDecoder Decode input signal using parallel concatenated decoding scheme ## Description The `comm.TurboDecoder` System object™ uses a parallel concatenated decoding scheme to decode a coded input signal. The input signal is typically the soft-decision output from the baseband demodulation operation. For more information, see Parallel Concatenated Convolutional Decoding Scheme. To decode an input signal using a parallel concatenated decoding scheme: 1. Create the `comm.TurboDecoder` object and set its properties. 2. Call the object with arguments, as if it were a function. ## Creation ### Syntax ``turbodec = comm.TurboDecoder`` ``turbodec = comm.TurboDecoder(trellis,interlvrindices,numiter)`` ``turbodec = comm.TurboDecoder(___,Name,Value)`` ### Description ````turbodec = comm.TurboDecoder` creates a turbo decoder System object. This object uses the a-posteriori probability (APP) constituent decoder to iteratively decode the parallel-concatenated convolutionally encoded input data.``` example ````turbodec = comm.TurboDecoder(trellis,interlvrindices,numiter)` creates a turbo decoder System object with the `TrellisStructure`, `InterleaverIndices`, and `numiter`, respectively. The `trellis` input must be specified as described by the `TrellisStructure` property. The `interlvrindices` input must be specified as described by the `InterleaverIndices` property. The `numiter` input must be specified as described by the `NumIterations` property.``` example ````turbodec = comm.TurboDecoder(___,Name,Value)` sets properties using one or more name-value pairs in addition to any input argument combination from previous syntaxes. Enclose each property name in quotes. For example, ```comm.TurboDecoder('InterleaverIndicesSource','Input port')``` configures a turbo decoder System object with the interleaver indices to be supplied as an input argument to the System object when it is called.``` ## Properties expand all Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the `release` function unlocks them. If a property is tunable, you can change its value at any time. Trellis description of the constituent convolutional code, specified as a structure that contains the trellis description for a rate KN code. K is the number of input bit streams, and N is the number of output bit streams. Note K must be 1 for the turbo coder. For more information, see Coding Rate. You can either use the `poly2trellis` function to create the trellis structure or create it manually. For more about this structure, see Trellis Description of a Convolutional Code and the `istrellis` function. The trellis structure contains these fields. Number of symbols input to the encoder, specified as an integer equal to 2K, where K is the number of input bit streams. Data Types: `double` Number of symbols output from the encoder, specified as an integer equal to 2N, where N is the number of output bit streams. Data Types: `double` Number of states in the encoder, specified as a power of 2. Data Types: `double` Next states for all combinations of current states and current inputs, specified as a matrix of integers. The matrix size must be `numStates`-by-2K. Data Types: `double` Outputs for all combinations of current states and current inputs, specified as a matrix of octal numbers. The matrix size must be `numStates`-by-2K. Data Types: `double` Data Types: `struct` Source of interleaver indices, specified as `'Property'` or `'Input port'`. • When you set this property to `'Input port'`, the object executes using the input argument `interlvrindices` when you call the object. The vector length and values for the interleaver indices and coded input signal can change with each call to the object. • When you set this property to `'Property'`, the object executes using the interleaver indices that you specified with the `InterleaverIndices` property when configuring the object. Data Types: `char` | `string` Interleaver indices that define the mapping used to permute the codeword bits input to the decoder, specified as a column vector of integers. The vector must be of length L. Each element of the vector must be an integer in the range [1, L] and must be unique. L is the length decoded output message, `decmsg`. Each element of the vector must be an integer in the range [1, L] and must be unique. #### Dependencies To enable this property, set the `InterleaverIndicesSource` property to `'Property'`. Data Types: `double` Source of input indices, specified as `'Auto'`, `'Property'`, or ```'Input port'```. • When you set this property to `'Auto'`, the object computes input indices that assume the second systematic stream is punctured and all tail bits are included in the input. • When you set this property to `'Property'`, the object uses the input indices that you specify for the `InputIndices` property. • When this property is set to `'Input port'`, the object executes using the input indices specified by the input argument `inindices`. The vector length and values for the input indices and the coded input signal can change with each call to the object. Data Types: `char` | `string` Input indices for the bit ordering and puncturing used on the fully encoded data, specified as a column vector of integers. The length of this property must equal the length of the input data vector `codeword`. #### Dependencies To enable this property, set the `InputIndicesSource` property to `'Property'`. Data Types: `double` Decoding algorithm, specified as `'True APP'`, `'Max*'`, or `'Max'`. When you set this property to `'True APP'`, the object implements true APP decoding. When you set this property to `'Max*'` or `'Max'`, the object uses approximations to increase the speed of the computations. For more information, see APP Decoder. Data Types: `char` | `string` Number of scaling bits, specified as an integer in the range [0, 8]. This property sets the number of bits the constituent decoders use to scale the input data to avoid losing precision during computations. The constituent decoders multiply the input by 2 `NumScalingBits` and divide the pre-output by the same factor. For more information, see APP Decoder. #### Dependencies This enable this property, set the Algorithm property to `'Max*'`. Data Types: `double` Number of decoding iterations, specified as a positive integer. This property sets the number of decoding iterations used for each call to the object. The object iterates and provides updates to the log-likelihood ratios (LLR) of the uncoded output bits. The output of the object is the hard-decision output of the final LLR update. Data Types: `double` ## Usage ### Syntax ``decmsg = turbodec(codeword)`` ``decmsg = turbodec(codeword,interlvrindices)`` ``decmsg = turbodec(codeword,interlvrindices,inindices)`` ### Description example ````decmsg = turbodec(codeword)` decodes the input codeword using the parallel concatenated convolutional decoding scheme that is specified by the trellis structure and interleaver indices. `turbodec` returns the binary decoded data. For more information, see Parallel Concatenated Convolutional Decoding Scheme.``` example ````decmsg = turbodec(codeword,interlvrindices)` additionally specifies the interleaver indices. To enable this syntax, set the InterleaverIndicesSource property to ```'Input port'```. The interleaver indices define the mapping used to permute the input at the decoder.``` ````decmsg = turbodec(codeword,interlvrindices,inindices)` additionally specifies the bit ordering and puncturing used on the fully encoded data. To enable this syntax, set the InputIndicesSource property to `'Input port'`. The input indices vector values must be relative to the fully encoded data, including the tail bits for the coding scheme for all streams.``` ### Input Arguments expand all Parallel concatenated codeword, specified as a column vector of length M, where M is the length of the parallel concatenated codeword. Data Types: `double` | `single` Interleaver indices, specified as a column vector of integers. The vector must be of length L, where L is the length of the decoded output message, `decmsg`. Each element of the vector must be an integer in the range [1, L] and must be unique. The interleaver indices define the mapping used to permute the input bits at the decoder. Tunable: Yes #### Dependencies To enable this property, set the `InterleaverIndicesSource` property to `'Input port'`. Data Types: `double` Input indices for the bit ordering and puncturing used on the fully encoded data, specified as a column vector of integers. The length of the `inindices` vector must equal the length of the input data vector `codeword`. Element values in the `inindices` vector must be relative to the fully encoded data, including the tail bits for the coding scheme for all streams. #### Dependencies To enable this argument, set the `InputIndicesSource` property to `'Input port'`. Data Types: `double` ### Output Arguments expand all Decoded message, returned as a binary column vector of length L, where L is the length of the decoded output message. This output signal is the same as data type of the `codeword` input. ## Object Functions To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named `obj`, use this syntax: `release(obj)` expand all `step` Run System object algorithm `release` Release resources and allow changes to System object property values and input characteristics `reset` Reset internal states of System object ## Examples collapse all Define output indices by using the `OutputIndices` property for turbo encoding and define the input indices by using the `InputIndices` property for turbo decoding. Show full-length punctured encoding and decoding for a rate 1/2 code and 10-bit block length. Initialize Parameters Define parameters to initialize the encoder. ```blkLen = 10; trellis = poly2trellis(4,[13 15],13); n = log2(trellis.numOutputSymbols); mLen = log2(trellis.numStates);``` Full-length Encoding and Decoding Initialize variables and turbo encoding and decoding System objects for full-length coding. Turbo encode and decode the message. Display the turbo coding rate. Check the length of the coded output versus the length of the output indices vector. ```fullOut = (1:(mLen+blkLen)*2*n)'; outLen = length(fullOut); netRate = blkLen/outLen; data = randi([0 1],blkLen,1); intIndices = randperm(blkLen); turboEnc = comm.TurboEncoder('TrellisStructure',trellis); turboEnc.InterleaverIndices = intIndices; turboEnc.OutputIndicesSource = 'Property'; turboEnc.OutputIndices = fullOut; turboDec = comm.TurboDecoder('TrellisStructure',trellis); turboDec.InterleaverIndices = intIndices; turboDec.InputIndicesSource = 'Property'; turboDec.InputIndices = fullOut; encMsg = turboEnc(data); % Encode disp(['Turbo coding rate: ' num2str(netRate)])``` ```Turbo coding rate: 0.19231 ``` `encOutLen = length(encMsg) % Display encoded length` ```encOutLen = 52 ``` `isequal(encOutLen,outLen) % Check lengths` ```ans = logical 1 ``` ```rxMsg = turboDec(2*encMsg-1); % Decode isequal(data, rxMsg) % Compare bits with decoded bits``` ```ans = logical 1 ``` Punctured Encoding and Decoding Specify the output indices for puncturing of the second systematic stream by using the `getTurboIOIndices` function. Initialize variables and turbo encoding and decoding System objects for punctured coding. Turbo encode and decode the message. Display the turbo coding rate. Check the length of the coded output versus the length of the output indices vector. ```puncOut = getTurboIOIndices(blkLen,n,mLen); outLen = length(puncOut); netRate = blkLen/outLen; data = randi([0 1],blkLen,1); intIndices = randperm(blkLen); turboEnc = comm.TurboEncoder('TrellisStructure',trellis); turboEnc.InterleaverIndices = intIndices; turboEnc.OutputIndicesSource = 'Property'; turboEnc.OutputIndices = puncOut; turboDec = comm.TurboDecoder('TrellisStructure',trellis); turboDec.InterleaverIndices = intIndices; turboDec.InputIndicesSource = 'Property'; turboDec.InputIndices = puncOut; encMsg = turboEnc(data); % Encode disp(['Turbo coding rate: ' num2str(netRate)])``` ```Turbo coding rate: 0.25641 ``` `encOutLen = length(encMsg) % Display encoded length` ```encOutLen = 39 ``` `isequal(encOutLen, outLen) % Check lengths` ```ans = logical 1 ``` ```rxMsg = turboDec(2*encMsg-1); % Decode isequal(data, rxMsg) % Compare bits with decoded bits``` ```ans = logical 1 ``` Compare Full and Punctured Outputs The output of the encoder interlaces the individual bit streams. The third bit of every 4-bit tuple is removed from the full-length code to produce the punctured code. This third output bit stream corresponds to the second systematic bit stream. Display the indices of the full-length code and the indices of the punctured code to show that the third bit of every 4-bit tuple is punctured. `fullOut'` ```ans = 1×52 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 ``` `puncOut'` ```ans = 1×39 1 2 4 5 6 8 9 10 12 13 14 16 17 18 20 21 22 24 25 26 28 29 30 32 33 34 36 37 38 40 41 42 44 45 46 48 49 50 52 ``` Simulate the transmission and reception of BPSK data over an AWGN channel by using turbo encoding and decoding. Specify simulation parameters, and then compute the effective coding rate and noise variance. For BPSK modulation, ${\mathit{E}}_{\mathrm{S}}/{\mathit{N}}_{0}$ equals ${\mathit{E}}_{\mathrm{b}}/{\mathit{N}}_{0}$ because the number of bits per symbol (bps) is 1. To ease reuse of this code for other modulation schemes, calculations in this example include the bps terms. Define the packet length, trellis structure, and number of iterations. Calculate the noise variance using ${\mathit{E}}_{\mathrm{S}}/{\mathit{N}}_{0}$ and the code rate. Set the random number generator to its default state to ensure that the results are repeatable. ```modOrd = 2; % Modulation order bps = log2(modOrd); % Bits per symbol EbNo = 1; % Energy per bit to noise power spectral density ratio in dB EsNo = EbNo + 10*log10(bps); % Energy per symbol to noise power spectral density ratio in dB L = 256; % Input packet length in bits trellis = poly2trellis(4,[13 15 17],13); numiter = 4; n = log2(trellis.numOutputSymbols); numTails = log2(trellis.numStates)*n; M = L*(2*n - 1) + 2*numTails; % Output codeword packet length rate = L/M; % Coding rate snrdB = EsNo + 10*log10(rate); % Signal to noise ratio in dB noiseVar = 1./(10.^(snrdB/10)); % Noise variance rng default``` Generate random interleaver indices. `intrlvrIndices = randperm(L);` Create a turbo encoder and decoder pair. Use the defined trellis structure and random interleaver indices. Configure the decoder to run a maximum of four iterations. ```turboenc = comm.TurboEncoder(trellis,intrlvrIndices); turbodec = comm.TurboDecoder(trellis,intrlvrIndices,numiter);``` Create a BPSK modulator and demodulator pair, where the demodulator outputs soft bits determined using an LLR method. ```bpskmod = comm.BPSKModulator; bpskdemod = comm.BPSKDemodulator('DecisionMethod','Log-likelihood ratio', ... 'Variance',noiseVar);``` Create an AWGN channel object and an error rate object. ```awgnchan = comm.AWGNChannel('NoiseMethod','Variance','Variance',noiseVar); errrate = comm.ErrorRate;``` The main processing loop performs these steps. 1. Generate binary data. 2. Turbo encode the data. 3. Modulate the encoded data. 4. Pass the modulated signal through an AWGN channel. 5. Demodulate the noisy signal by using LLR to output soft bits. 6. Turbo decode the demodulated data. Because the bit mapping from the demodulator is opposite of the mapping expected by the turbo decoder, the decoder input must use the inverse of the demodulated signal. 7. Calculate the error statistics. ```for frmIdx = 1:100 data = randi([0 1],L,1); encodedData = turboenc(data); modSignal = bpskmod(encodedData); receivedSignal = awgnchan(modSignal); demodSignal = bpskdemod(receivedSignal); receivedBits = turbodec(-demodSignal); errorStats = errrate(data,receivedBits); end``` Display the error data. `fprintf('Bit error rate = %5.2e\nNumber of errors = %d\nTotal bits = %d\n', errorStats)` ```Bit error rate = 2.34e-04 Number of errors = 6 Total bits = 25600 ``` Simulate an end-to-end communication link by using a 16-QAM signal and turbo codes in an AWGN channel. Inside a frame processing loop, packet sizes are randomly selected to be 500, 1000, or 1500 bits. Because the packet size varies, the interleaver indices are provided to the turbo encoder and decoder as an input argument of their associated System object. Compare turbo coded bit error rate results to uncoded bit error rate results. Initialize Simulation Set the modulation order and range of ${\mathit{E}}_{\mathrm{b}}/{\mathit{N}}_{0}$ values. Compute the number of bits per symbol and the energy per symbol to noise ratio (${\mathit{E}}_{\mathrm{S}}/{\mathit{N}}_{0}$) based on the modulation order and ${\mathit{E}}_{\mathrm{b}}/{\mathit{N}}_{0}$. To get repeatable results, seed the random number. ```modOrder = 16; % Modulation order bps = log2(modOrder); % Bits per symbol EbNo = (2:0.5:4); % Energy per bit to noise power spectral density ratio in dB EsNo = EbNo + 10*log10(bps); % Energy per symbol to noise power spectral density ratio in dB rng(1963);``` Create a turbo encoder and decoder pair. Because the packet length varies for each frame, specify that the interleaver indices be supplied by an input argument of the System object when executed. Specify that the decoder perform four iterations. ```turboEnc = comm.TurboEncoder('InterleaverIndicesSource','Input port'); turboDec = comm.TurboDecoder('InterleaverIndicesSource','Input port','NumIterations',4); trellis = poly2trellis(4,[13 15 17],13); n = log2(turboEnc.TrellisStructure.numOutputSymbols); numTails = log2(turboEnc.TrellisStructure.numStates)*n;``` Create an error rate object. `errRate = comm.ErrorRate;` Main Processing Loop The frame processing loop performs these steps. 1. Select a random packet length, and generate random binary data. 2. Compute the output codeword length and coding rate. 3. Compute the signal to noise ratio (SNR) and noise variance. 4. Generate interleaver indices. 5. Turbo encode the data. 6. Apply 16-QAM modulation, and normalize the average signal power. 7. Pass the modulated signal through an AWGN channel. 8. Demodulate the noisy signal by using an LLR method, output soft bits, and normalize the average signal power. 9. Turbo decode the data. Because the bit mapping order from the demodulator is opposite the mapping order expected by the turbo decoder, the decoder input must use the inverse of the demodulated signal. 10. Calculate the error statistics. ```ber = zeros(1,length(EbNo)); for k = 1:length(EbNo) % numFrames = 100; errorStats = zeros(1,3); %for pktIdx = 1:numFrames L = 500*randi([1 3],1,1); % Packet length in bits M = L*(2*n - 1) + 2*numTails; % Output codeword packet length rate = L/M; % Coding rate for current packet snrdB = EsNo(k) + 10*log10(rate); % Signal to noise ratio in dB noiseVar = 1./(10.^(snrdB/10)); % Noise variance while errorStats(2) < 100 && errorStats(3) < 1e7 data = randi([0 1],L,1); intrlvrIndices = randperm(L); encodedData = turboEnc(data,intrlvrIndices); modSignal = qammod(encodedData,modOrder, ... 'InputType','bit','UnitAveragePower',true); rxSignal = awgn(modSignal,snrdB); demodSignal = qamdemod(rxSignal,modOrder,'OutputType','llr', ... 'UnitAveragePower',true,'NoiseVariance',noiseVar); rxBits = turboDec(-demodSignal,intrlvrIndices); % Demodulated signal is negated errorStats = errRate(data,rxBits); end % Save the BER data and reset the bit error rate object ber(k) = errorStats(1); reset(errRate) end``` Plot Results Plot the bit error rate and compare it to the uncoded bit error rate. ```semilogy(EbNo,ber,'-o') grid xlabel('Eb/No (dB)') ylabel('Bit Error Rate') uncodedBER = berawgn(EbNo,'qam',modOrder); % Estimate of uncoded BER hold on semilogy(EbNo,uncodedBER) legend('Turbo','Uncoded','location','sw')``` expand all ## References [1] Benedetto, S., G. Montorsi, D. Divsalar, and F. Pollara. "A Soft-Input Soft-Output Maximum A Posterior (MAP) Module to Decode Parallel and Serial Concatenated Codes." Jet Propulsion Lab TDA Progress Report, 42–127, (November 1996). [2] Viterbi, A.J. “An Intuitive Justification and a Simplified Implementation of the MAP Decoder for Convolutional Codes.” IEEE Journal on Selected Areas in Communications 16, no. 2 (February 1998): 260–64. https://doi.org/10.1109/49.661114. [3] Berrou, C., A. Glavieux, and P. Thitimajshima. “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes.” Proceedings of ICC 93 - IEEE International Conference on Communications, Geneva, Switzerland, May 1993, 1064–70. https://doi.org/10.1109/icc.1993.397441. [4] Schlegel, Christian, and Lance Perez. Trellis and Turbo Coding. IEEE Press Series on Digital & Mobile Communication. Piscataway, NJ ; Hoboken, NJ: IEEE Press ; Wiley-Interscience, 2004. [5] 3GPP TS 36.212. "Multiplexing and channel coding." 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA). https://www.3gpp.org. ## Extended Capabilities Introduced in R2012a
2021-09-21 12:24:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6548954248428345, "perplexity": 1996.2472280871762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057202.68/warc/CC-MAIN-20210921101319-20210921131319-00235.warc.gz"}
https://brilliant.org/problems/powers-of-5/
# Powers of 5 Level pending Let $$k$$ be a nonnegative integer. What is the largest value for which the units digit of both $$6^{k}$$ and $$5^{k}$$ is neither 5 or 6? ×
2017-09-26 05:48:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.695152223110199, "perplexity": 336.3712333821723}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818695066.99/warc/CC-MAIN-20170926051558-20170926071558-00634.warc.gz"}
https://breathmath.com/2017/04/05/euclids-axioms/
# Euclid’s axioms Axioms or postulates are the assumptions which are obvious universal truths. They are not proved. Axiom 1: Things which are equal to the same thing are equal to one another. For example: Draw a line segment AB of length 10cm. Draw a second line CD having length equal to that of AB, using a compass. Measure the length of CD. We see that, CD = 10cm. We can write it as, CD = AB and AB = 10cm implies CD = 10cm. Axiom 2: If equals are added to equals, the wholes are equal. Suppose we have two line segments AB and DE of equal length. Add BC to AB and add EF to DE. If BC = EF, then AC = DF. Axiom 3: If equals are subtracted from equals, then the remainders are equal. Suppose we have two line segments of equal length. Remove BC from AC and EF from DE respectively. If BC = EF, then AB = DE. Axiom 4: Things which coincide with one another must be equal to one another. This means that if two geometric figures can fit completely one in to another, they are essentially the same. Axiom 5: The whole is greater than the part. Take a container of water. Remove some water from it. Will the remaining volume of water the same as the original volume?
2021-10-17 22:01:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.811122477054596, "perplexity": 565.7241672284489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585183.47/warc/CC-MAIN-20211017210244-20211018000244-00264.warc.gz"}
https://answers.ros.org/answers/129061/revisions/
Since you message in folder A depend on the messages in folder B you must generate the messages of B before A. Therefore B must be added before A.
2022-01-25 20:50:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6127623915672302, "perplexity": 1321.2927143259997}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304872.21/warc/CC-MAIN-20220125190255-20220125220255-00061.warc.gz"}
https://www.physicsforums.com/threads/i-must-be-missing-something.465324/
# I must be missing something So I've gotten the integral that i'm doing now down to: int(cos(x)/sin^2(x) dx) I looked it up on one of those online integral calculators to get me on the right track, and the answer is: -1/sin(x) It seems so simple, what am I missing? $$\int \frac{du}{u^2} = \int u^{-2}du$$
2022-05-20 13:03:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8648731708526611, "perplexity": 413.3459436078976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662532032.9/warc/CC-MAIN-20220520124557-20220520154557-00408.warc.gz"}
http://harvard.voxcharta.org/tag/gravitational-action/
# Posts Tagged gravitational action ## Recent Postings from gravitational action ### Friedmann model with viscous cosmology in modified $f(R,T)$ gravity theory [Replacement] In this paper, we introduce bulk viscosity in the formalism of modified gravity theory in which the gravitational action contains a general function $f(R,T)$, where $R$ and $T$ denote the curvature scalar and the trace of the energy-momentum tensor, respectively within the framework of a flat Friedmann-Robertson-Walker model. As an equation of state for prefect fluid, we take $p=(\gamma-1)\rho$, where $0 \leq \gamma \leq 2$ and viscous term as a bulk viscosity due to isotropic model, of the form $\zeta =\zeta_{0}+\zeta_{1}H$, where $\zeta_{0}$ and $\zeta_{1}$ are constants, and $H$ is the Hubble parameter. The exact non-singular solutions to the corresponding field equations are obtained with non- viscous and viscous fluids, respectively by assuming a simplest particular model of the form of $f(R,T) = R+2f(T)$, where $f(T)=\alpha T$ ( $\alpha$ is a constant). A big-rip singularity is also observed for $\gamma<0$ at a finite value of cosmic time under certain constraints. We study all possible scenarios with the possible positive and negative ranges of $\alpha$ to analyze the expansion history of the universe. It is observed that the universe accelerates or exhibits transition from decelerated phase to accelerated phase under certain constraints of $\zeta_0$ and $\zeta_1$. We compare the viscous models with the non-viscous one through the graph plotted between scale factor and cosmic time and find that bulk viscosity plays the major role in the expansion of the universe. A similar graph is plotted for deceleration parameter with non-viscous and viscous fluids and find a transition from decelerated to accelerated phase with some form of bulk viscosity. ### Superluminal Gravitational Waves The quantum gravity effects of vacuum polarization of gravitons propagating in a curved spacetime cause the quantum vacuum to act as a dispersive medium with a refractive index. Due to this dispersive medium gravitons acquire superluminal velocities. The dispersive medium is produced by higher derivative curvature contributions to the effective gravitational action. It is shown that in a Friedmann-Lema\^{i}tre-Robertson-Walker spacetime in the early universe near the Planck time $t_{\rm PL}\gtrsim 10^{-43}\,{\rm sec}$, the speed of gravitational waves $c_g\gg c_{g0}=c_0$, where $c_{g0}$ and $c_0$ are the speeds of gravitational waves and light today. The large speed of gravitational waves stretches their wavelengths to super-horizon sizes, allowing them to be observed in B-polarization experiments. ### $R^2\log R$ quantum corrections and the inflationary observables [Cross-Listing] We study a model of inflation with terms quadratic and logarithmic in the Ricci scalar, where the gravitational action is $f(R)=R+\alpha R^2+\beta R^2 \ln R$. These terms are expected to arise from one loop corrections involving matter fields in curved space-time. The spectral index $n_s$ and the tensor to scalar ratio yield $10^{-4}\lesssim r\lesssim0.03$ and $0.94\lesssim n_s \lesssim 0.99$. i.e. $r$ is an order of magnitude bigger or smaller than the original Starobinsky model which predicted $r\sim 10^{-3}$. Further enhancement of $r$ gives a scale invariant $n_s\sim 1$ or higher. Other inflationary observables are $d n_s/d\ln k \gtrsim -5.2 \times 10^{-4},\, \mu \lesssim 2.1 \times 10^{-8} ,\, y \lesssim 2.6 \times 10^{-9}$. Despite the enhancement in $r$, if the recent BICEP2 measurement stands, this model is disfavoured. ### $R^2\log R$ quantum corrections and the inflationary observables [Cross-Listing] We study a model of inflation with terms quadratic and logarithmic in the Ricci scalar, where the gravitational action is $f(R)=R+\alpha R^2+\beta R^2 \ln R$. These terms are expected to arise from one loop corrections involving matter fields in curved space-time. The spectral index $n_s$ and the tensor to scalar ratio yield $10^{-4}\lesssim r\lesssim0.03$ and $0.94\lesssim n_s \lesssim 0.99$. i.e. $r$ is an order of magnitude bigger or smaller than the original Starobinsky model which predicted $r\sim 10^{-3}$. Further enhancement of $r$ gives a scale invariant $n_s\sim 1$ or higher. Other inflationary observables are $d n_s/d\ln k \gtrsim -5.2 \times 10^{-4},\, \mu \lesssim 2.1 \times 10^{-8} ,\, y \lesssim 2.6 \times 10^{-9}$. Despite the enhancement in $r$, if the recent BICEP2 measurement stands, this model is disfavoured. ### $R^2\log R$ quantum corrections and the inflationary observables We study a model of inflation with terms quadratic and logarithmic in the Ricci scalar, where the gravitational action is $f(R)=R+\alpha R^2+\beta R^2 \ln R$. These terms are expected to arise from one loop corrections involving matter fields in curved space-time. The spectral index $n_s$ and the tensor to scalar ratio yield $10^{-4}\lesssim r\lesssim0.03$ and $0.94\lesssim n_s \lesssim 0.99$. i.e. $r$ is an order of magnitude bigger or smaller than the original Starobinsky model which predicted $r\sim 10^{-3}$. Further enhancement of $r$ gives a scale invariant $n_s\sim 1$ or higher. Other inflationary observables are $d n_s/d\ln k \gtrsim -5.2 \times 10^{-4},\, \mu \lesssim 2.1 \times 10^{-8} ,\, y \lesssim 2.6 \times 10^{-9}$. Despite the enhancement in $r$, if the recent BICEP2 measurement stands, this model is disfavoured. ### Marginally Deformed Starobinsky Gravity [Cross-Listing] We show that quantum-induced marginal deformations of the Starobinsky gravitational action of the form $R^{2(1 -\alpha)}$, with $R$ the Ricci scalar and $\alpha$ a positive parameter, smaller than one half, can account for the recent experimental observations by BICEP2 of primordial tensor modes. We also suggest natural microscopic (non) gravitational sources of these corrections and demonstrate that they lead generally to a nonzero and positive $\alpha$. Furthermore we argue, that within this framework, the tensor modes probe theories of grand unification with a large scalar field content. ### Marginally Deformed Starobinsky Gravity We show that quantum-induced marginal deformations of the Starobinsky gravitational action of the form $R^{2(1 -\alpha)}$, with $R$ the Ricci scalar and $\alpha$ a positive parameter, smaller than one half, can account for the recent experimental observations by BICEP2 of primordial tensor modes. We also suggest natural microscopic (non) gravitational sources of these corrections and demonstrate that they lead generally to a nonzero and positive $\alpha$. Furthermore we argue, that within this framework, the tensor modes probe theories of grand unification with a large scalar field content. ### Marginally Deformed Starobinsky Gravity [Cross-Listing] We show that quantum-induced marginal deformations of the Starobinsky gravitational action of the form $R^{2(1 -\alpha)}$, with $R$ the Ricci scalar and $\alpha$ a positive parameter, smaller than one half, can account for the recent experimental observations by BICEP2 of primordial tensor modes. We also suggest natural microscopic (non) gravitational sources of these corrections and demonstrate that they lead generally to a nonzero and positive $\alpha$. Furthermore we argue, that within this framework, the tensor modes probe theories of grand unification with a large scalar field content. ### Entropy of isolated horizon from surface term of gravitational action Starting from the surface term of gravitational action, one can construct a Virasoro algebra with central extension, with which the horizon entropy can be derived by using Cardy formula. This approach gives a new routine to calculate and interpret the horizon entropy. In this paper, we generalize this approach to a more general case, the isolated horizon, which contains non-stationary spacetimes beyond stationary ones. By imposing appropriate boundary conditions near the horizon, the full set of diffeomorphism is restricted to a subset where the corresponding Noether charges form a Virasoro algebra with central extension. Then by using the Cardy formula, we can derive the entropy of the isolated horizon. ### Entropy of isolated horizon from surface term of gravitational action [Cross-Listing] Starting from the surface term of gravitational action, one can construct a Virasoro algebra with central extension, with which the horizon entropy can be derived by using Cardy formula. This approach gives a new routine to calculate and interpret the horizon entropy. In this paper, we generalize this approach to a more general case, the isolated horizon, which contains non-stationary spacetimes beyond stationary ones. By imposing appropriate boundary conditions near the horizon, the full set of diffeomorphism is restricted to a subset where the corresponding Noether charges form a Virasoro algebra with central extension. Then by using the Cardy formula, we can derive the entropy of the isolated horizon. ### Determination of Gravitational Counterterms Near Four Dimensions from RG Equations [Replacement] The finiteness condition of renormalization gives a restriction on the form of the gravitational action. By reconsidering the Hathrell’s RG equations for massless QED in curved space, we determine the gravitational counterterms and the conformal anomalies as well near four dimensions. As conjectured for conformal couplings in 1970s, we show that at all orders of the perturbation they can be combined into two forms only: the square of the Weyl tensor in $D$ dimensions and $E_D=G_4 +(D-4)\chi(D)H^2 -4\chi(D) \nabla^2 H$, where $G_4$ is the usual Euler density, $H=R/(D-1)$ is the rescaled scalar curvature and $\chi(D)$ is a finite function of $D$ only. The number of the dimensionless gravitational couplings is also reduced to two. $\chi(D)$ can be determined order by order in series of $D-4$, whose first several coefficients are calculated. It has a universal value of $1/2$ at $D=4$. The familiar ambiguous $\nabla^2 R$ term is fixed. At the $D \to 4$ limit, the conformal anomaly $E_D$ just yields the combination $E_4=G_4-2\nabla^2 R/3$, which induces Riegert’s effective action. ### Determination of Gravitational Counterterms Near Four Dimensions from RG Equations [Replacement] The finiteness condition of renormalization gives a restriction on the form of the gravitational action. By reconsidering the Hathrell’s RG equations for massless QED in curved space, we determine the gravitational counterterms and the conformal anomalies as well near four dimensions. As conjectured for conformal couplings in 1970s, we show that at all orders of the perturbation they can be combined into two forms only: the square of the Weyl tensor in $D$ dimensions and $E_D=G_4 +(D-4)\chi(D)H^2 -4\chi(D) \nabla^2 H$, where $G_4$ is the usual Euler density, $H=R/(D-1)$ is the rescaled scalar curvature and $\chi(D)$ is a finite function of $D$ only. The number of the dimensionless gravitational couplings is also reduced to two. $\chi(D)$ can be determined order by order in series of $D-4$, whose first several coefficients are calculated. It has a universal value of $1/2$ at $D=4$. The familiar ambiguous $\nabla^2 R$ term is fixed. At the $D \to 4$ limit, the conformal anomaly $E_D$ just yields the combination $E_4=G_4-2\nabla^2 R/3$, which induces Riegert’s effective action. ### Determination of Gravitational Counterterms Near Four Dimensions from RG Equations The finiteness condition of renormalization gives a restriction on the form of the gravitational action. By reconsidering the Hathrell’s RG equations for massless QED in curved space, we determine the gravitational counterterms and conformal anomalies near four dimensions. As conjectured for conformal couplings in 1970s, we show at all orders of the perturbation that they can be combined into two forms only: the square of the Weyl tensor in $D$ dimensions and \begin{eqnarray*} E_D=G_4 +(D-4)\chi(D)H^2 -4\chi(D) \nabla^2 H, \end{eqnarray*} where $G_4$ is the usual Euler density, $H=R/(D-1)$ is the rescaled scalar curvature and $\chi(D)$ is a finite function of $D$ only. The number of the dimensionless gravitational couplings is also reduced to two. $\chi(D)$ can be determined order by order in series of $D-4$, whose first several coefficients are calculated. It has a universal value of $1/2$ at $D=4$. The familiar ambiguous $\nabla^2 R$ term is fixed. At the $D \to 4$ limit, the conformal anomaly $E_D$ just yields the combination $E_4=G_4-2\nabla^2 R/3$, which induces Riegert’s effective action. ### Determination of Gravitational Counterterms Near Four Dimensions from RG Equations [Replacement] The finiteness condition of renormalization gives a restriction on the form of the gravitational action. By reconsidering the Hathrell’s RG equations for massless QED in curved space, we determine the gravitational counterterms and the conformal anomalies as well near four dimensions. As conjectured for conformal couplings in 1970s, we show that at all orders of the perturbation they can be combined into two forms only: the square of the Weyl tensor in $D$ dimensions and \begin{eqnarray*} E_D=G_4 +(D-4)\chi(D)H^2 -4\chi(D) \nabla^2 H, \end{eqnarray*} where $G_4$ is the usual Euler density, $H=R/(D-1)$ is the rescaled scalar curvature and $\chi(D)$ is a finite function of $D$ only. The number of the dimensionless gravitational couplings is also reduced to two. $\chi(D)$ can be determined order by order in series of $D-4$, whose first several coefficients are calculated. It has a universal value of $1/2$ at $D=4$. The familiar ambiguous $\nabla^2 R$ term is fixed. At the $D \to 4$ limit, the conformal anomaly $E_D$ just yields the combination $E_4=G_4-2\nabla^2 R/3$, which induces Riegert’s effective action. ### Determination of Gravitational Counterterms Near Four Dimensions from RG Equations [Replacement] The finiteness condition of renormalization gives a restriction on the form of the gravitational action. By reconsidering the Hathrell’s RG equations for massless QED in curved space, we determine the gravitational counterterms and the conformal anomalies as well near four dimensions. As conjectured for conformal couplings in 1970s, we show that at all orders of the perturbation they can be combined into two forms only: the square of the Weyl tensor in $D$ dimensions and $E_D=G_4 +(D-4)\chi(D)H^2 -4\chi(D) \nabla^2 H$, where $G_4$ is the usual Euler density, $H=R/(D-1)$ is the rescaled scalar curvature and $\chi(D)$ is a finite function of $D$ only. The number of the dimensionless gravitational couplings is also reduced to two. $\chi(D)$ can be determined order by order in series of $D-4$, whose first several coefficients are calculated. It has a universal value of $1/2$ at $D=4$. The familiar ambiguous $\nabla^2 R$ term is fixed. At the $D \to 4$ limit, the conformal anomaly $E_D$ just yields the combination $E_4=G_4-2\nabla^2 R/3$, which induces Riegert’s effective action. ### Dynamics of Linear Perturbations in the hybrid metric-Palatini gravity [Replacement] In this work we focus on the evolution of the linear perturbations in the novel hybrid metric-Palatini theory achieved by adding a $f(\mathcal{R})$ function to the gravitational action. Working in the Jordan frame, we derive the full set of linearized evolution equations for the perturbed potentials and present them in the Newtonian and synchronous gauges. We also derive the Poisson equation, and perform the evolution of the lensing potential, $\Phi_{+}$, for a model with a background evolution indistinguishable from $\Lambda$CDM. In order to do so, we introduce a designer approach that allows to retrieve a family of functions $f(\mathcal{R})$ for which the effective equation of state is exactly $w_{\textrm{eff}} = -1$. We conclude, for this particular model, that the main deviations from standard General Relativity and the Cosmological Constant model arise in the distant past, with an oscillatory signature in the ratio between the Newtonian potentials, $\Phi$ and $\Psi$. ### Dynamics of Linear Perturbations in the hybrid metric-Palatini gravity [Replacement] In this work we focus on the evolution of the linear perturbations in the novel hybrid metric-Palatini theory achieved by adding a $f(\mathcal{R})$ function to the gravitational action. Working in the Jordan frame, we derive the full set of linearized evolution equations for the perturbed potentials and present them in the Newtonian and synchronous gauges. We also derive the Poisson equation, and perform the evolution of the lensing potential, $\Phi_{+}$, for a model with a background evolution indistinguishable from $\Lambda$CDM. In order to do so, we introduce a designer approach that allows to retrieve a family of functions $f(\mathcal{R})$ for which the effective equation of state is exactly $w_{\textrm{eff}} = -1$. We conclude, for this particular model, that the main deviations from standard General Relativity and the Cosmological Constant model arise in the distant past, with an oscillatory signature in the ratio between the Newtonian potentials, $\Phi$ and $\Psi$. ### Free energy of a Lovelock holographic superconductor [Replacement] We study thermodynamics of black hole solutions in Lanczos-Lovelock AdS gravity in d+1 dimensions coupled to nonlinear electrodynamics and a Stueckelberg scalar field. This class of theories is used in the context of gauge/gravity duality to describe a high-temperature superconductor in d dimensions. Larger number of coupling constants in the gravitational side is necessary to widen a domain of validity of physical quantities in a dual QFT. We regularize the gravitational action and find the finite conserved quantities for a planar black hole with scalar hair. Then we derive the quantum statistical relation in the Euclidean sector of the theory, and obtain the exact formula for the free energy of the superconductor in the holographic quantum field theory. Our result is analytic and it includes the effects of backreaction of the gravitational field. We further discuss on how this formula could be used to analyze second order phase transitions through the discontinuities of the free energy, in order to classify holographic superconductors in terms of the parameters in the theory. ### Free energy of a Lovelock holographic superconductor [Replacement] We study thermodynamics of black hole solutions in Lanczos-Lovelock AdS gravity in d+1 dimensions coupled to nonlinear electrodynamics and a Stueckelberg scalar field. This class of theories is used in the context of gauge/gravity duality to describe a high-temperature superconductor in d dimensions. Larger number of coupling constants in the gravitational side is necessary to widen a domain of validity of physical quantities in a dual QFT. We regularize the gravitational action and find the finite conserved quantities for a planar black hole with scalar hair. Then we derive the quantum statistical relation in the Euclidean sector of the theory, and obtain the exact formula for the free energy of the superconductor in the holographic quantum field theory. Our result is analytic and it includes the effects of backreaction of the gravitational field. We further discuss on how this formula could be used to analyze second order phase transitions through the discontinuities of the free energy, in order to classify holographic superconductors in terms of the parameters in the theory. ### On the renormalization of the Gibbons-Hawking boundary term [Replacement] The bulk (Einstein-Hilbert) and boundary (Gibbons-Hawking) terms in the gravitational action are generally renormalized differently when integrating out quantum fluctuations. The former is affected by nonminimal couplings, while the latter is affected by boundary conditions. We use the heat kernel method to analyze this behavior for a nonminimally coupled scalar field, the Maxwell field, and the graviton field. Allowing for Robin boundary conditions, we examine in which cases the renormalization preserves the ratio of boundary and bulk terms required for the effective action to possess a stationary point. The implications for field theory and black hole entropy computations are discussed. ### Weyl-Cartan-Weitzenb\"ock gravity through Lagrange multiplier [Cross-Listing] We consider an extension of the Weyl-Cartan-Weitzenb\"{o}ck (WCW) and teleparallel gravity, in which the Weitzenb\"{o}ck condition of the exact cancellation of curvature and torsion in a Weyl-Cartan geometry is inserted into the gravitational action via a Lagrange multiplier. In the standard metric formulation of the WCW model, the flatness of the space-time is removed by imposing the Weitzenb\"{o}ck condition in the Weyl-Cartan geometry, where the dynamical variables are the space-time metric, the Weyl vector and the torsion tensor, respectively. However, once the Weitzenb\"{o}ck condition is imposed on the Weyl-Cartan space-time, the metric is not dynamical, and the gravitational dynamics and evolution is completely determined by the torsion tensor. We show how to resolve this difficulty, and generalize the WCW model, by imposing the Weitzenb\"{o}ck condition on the action of the gravitational field through a Lagrange multiplier. The gravitational field equations are obtained from the variational principle, and they explicitly depend on the Lagrange multiplier. As a particular model we consider the case of the Riemann-Cartan space-times with zero non-metricity, which mimics the teleparallel theory of gravity. The Newtonian limit of the model is investigated, and a generalized Poisson equation is obtained, with the weak field gravitational potential explicitly depending on the Lagrange multiplier and on the Weyl vector. The cosmological implications of the theory are also studied, and three classes of exact cosmological models are considered. ### Weyl-Cartan-Weitzenb\"ock gravity through Lagrange multiplier [Replacement] We consider an extension of the Weyl-Cartan-Weitzenb\"{o}ck (WCW) and teleparallel gravity, in which the Weitzenb\"{o}ck condition of the exact cancellation of curvature and torsion in a Weyl-Cartan geometry is inserted into the gravitational action via a Lagrange multiplier. In the standard metric formulation of the WCW model, the flatness of the space-time is removed by imposing the Weitzenb\"{o}ck condition in the Weyl-Cartan geometry, where the dynamical variables are the space-time metric, the Weyl vector and the torsion tensor, respectively. However, once the Weitzenb\"{o}ck condition is imposed on the Weyl-Cartan space-time, the metric is not dynamical, and the gravitational dynamics and evolution is completely determined by the torsion tensor. We show how to resolve this difficulty, and generalize the WCW model, by imposing the Weitzenb\"{o}ck condition on the action of the gravitational field through a Lagrange multiplier. The gravitational field equations are obtained from the variational principle, and they explicitly depend on the Lagrange multiplier. As a particular model we consider the case of the Riemann-Cartan space-times with zero non-metricity, which mimics the teleparallel theory of gravity. The Newtonian limit of the model is investigated, and a generalized Poisson equation is obtained, with the weak field gravitational potential explicitly depending on the Lagrange multiplier and on the Weyl vector. The cosmological implications of the theory are also studied, and three classes of exact cosmological models are considered. ### Incorporating gravity into trace dynamics: the induced gravitational action [Replacement] We study the incorporation of gravity into the trace dynamics framework for classical matrix-valued fields, from which we have proposed that quantum field theory is the emergent thermodynamics, with state vector reduction arising from fluctuation corrections to this thermodynamics. We show that the metric must be incorporated as a classical, not a matrix-valued, field, with the source for gravity the exactly covariantly conserved trace stress-energy tensor of the matter fields. We then study corrections to the classical gravitational action induced by the dynamics of the matrix-valued matter fields, by examining the average over the trace dynamics canonical ensemble of the matter field action, in the presence of a general background metric. Using constraints from global Weyl scaling and three-space general coordinate transformations, we show that to zeroth order in derivatives of the metric, the induced gravitational action in the preferred rest frame of the trace dynamics canonical ensemble must have the form $$\Delta S=\int d^4x (^{(4)}g)^{1/2}(g_{00})^{-2} A\big(g_{0i} g_{0j} g^{ij}/g_{00}, D^ig_{ij}D^j/g_{00}, g_{0i}D^i/g_{00}\big),$$ with $D^i$ defined through the co-factor expansion of $^{(4)}g$ by $^{(4)}g/{^{(3)}g}=g_{00}+g_{0i}D^i$, and with $A(x,y,z)$ a general function of its three arguments. This action has "chameleon-like" properties: For the Robertson-Walker cosmological metric, it {\it exactly} reduces to a cosmological constant, but for the Schwarzschild metric it diverges as $(1-2M/r)^{-2}$ near the Schwarzschild radius, indicating that it will substantially affect the horizon structure. ### Incorporating gravity into trace dynamics: the induced gravitational action [Replacement] We study the incorporation of gravity into the trace dynamics framework for classical matrix-valued fields, from which we have proposed that quantum field theory is the emergent thermodynamics, with state vector reduction arising from fluctuation corrections to this thermodynamics. We show that the metric must be incorporated as a classical, not a matrix-valued, field, with the source for gravity the exactly covariantly conserved trace stress-energy tensor of the matter fields. We then study corrections to the classical gravitational action induced by the dynamics of the matrix-valued matter fields, by examining the average over the trace dynamics canonical ensemble of the matter field action, in the presence of a general background metric. Using constraints from global Weyl scaling and three-space general coordinate transformations, we show that to zeroth order in derivatives of the metric, the induced gravitational action in the preferred rest frame of the trace dynamics canonical ensemble must have the form $$\Delta S=\int d^4x (^{(4)}g)^{1/2}(g_{00})^{-2} A\big(g_{0i} g_{0j} g^{ij}/g_{00}, D^ig_{ij}D^j/g_{00}, g_{0i}D^i/g_{00}\big),$$ with $D^i$ defined through the co-factor expansion of $^{(4)}g$ by $^{(4)}g/{^{(3)}g}=g_{00}+g_{0i}D^i$, and with $A(x,y,z)$ a general function of its three arguments. This action has "chameleon-like" properties: For the Robertson-Walker cosmological metric, it {\it exactly} reduces to a cosmological constant, but for the Schwarzschild metric it diverges as $(1-2M/r)^{-2}$ near the Schwarzschild radius, indicating that it will substantially affect the horizon structure. ### Incorporating gravity into trace dynamics: the induced gravitational action [Replacement] We study the incorporation of gravity into the trace dynamics framework for classical matrix-valued fields, from which we have proposed that quantum field theory is the emergent thermodynamics, with state vector reduction arising from fluctuation corrections to this thermodynamics. We show that the metric must be incorporated as a classical, not a matrix-valued, field, with the source for gravity the exactly covariantly conserved trace stress-energy tensor of the matter fields. We then study corrections to the classical gravitational action induced by the dynamics of the matrix-valued matter fields, by examining the average over the trace dynamics canonical ensemble of the matter field action, in the presence of a general background metric. Using constraints from global Weyl scaling and three-space general coordinate transformations, we show that to zeroth order in derivatives of the metric, the induced gravitational action in the preferred rest frame of the trace dynamics canonical ensemble must have the form $$\Delta S=\int d^4x (^{(4)}g)^{1/2}(g_{00})^{-2} A\big(g_{0i} g_{0j} g^{ij}/g_{00}, D^ig_{ij}D^j/g_{00}, g_{0i}D^i/g_{00}\big),$$ with $D^i$ is defined through the co-factor expansion of $^{(4)}g$ by $^{(4)}g/{^{(3)}g}=g_{00}+g_{0i}D^i$, and with $A(x,y,z)$ a general function of its three arguments. This action has "chameleon-like" properties: For the Robertson-Walker cosmological metric, it {\it exactly} reduces to a cosmological constant, but for the Schwarzschild metric it diverges as $(1-2M/r)^{-2}$ near the Schwarzschild radius, indicating that it will substantially affect the horizon structure. ### Incorporating gravity into trace dynamics: the induced gravitational action [Replacement] We study the incorporation of gravity into the trace dynamics framework for classical matrix-valued fields, from which we have proposed that quantum field theory is the emergent thermodynamics, with state vector reduction arising from fluctuation corrections to this thermodynamics. We show that the metric must be incorporated as a classical, not a matrix-valued, field, with the source for gravity the exactly covariantly conserved trace stress-energy tensor of the matter fields. We then study corrections to the classical gravitational action induced by the dynamics of the matrix-valued matter fields, by examining the average over the trace dynamics canonical ensemble of the matter field action, in the presence of a general background metric. Using constraints from global Weyl scaling and three-space general coordinate transformations, we show that to zeroth order in derivatives of the metric, the induced gravitational action in the preferred rest frame of the trace dynamics canonical ensemble must have the form $$\Delta S=\int d^4x (^{(4)}g)^{1/2}(g_{00})^{-2} A\big(g_{0i} g_{0j} g^{ij}/g_{00}, D^ig_{ij}D^j/g_{00}, g_{0i}D^i/g_{00}\big),$$ with $D^i$ is defined through the co-factor expansion of $^{(4)}g$ by $^{(4)}g/{^{(3)}g}=g_{00}+g_{0i}D^i$, and with $A(x,y,z)$ a general function of its three arguments. This action has "chameleon-like" properties: For the Robertson-Walker cosmological metric, it {\it exactly} reduces to a cosmological constant, but for the Schwarzschild metric it diverges as $(1-2M/r)^{-2}$ near the Schwarzschild radius, indicating that it will substantially affect the horizon structure. ### Incorporating gravity into trace dynamics: the induced gravitational action [Replacement] We study the incorporation of gravity into the trace dynamics framework for classical matrix-valued fields, from which we have proposed that quantum field theory is the emergent thermodynamics, with state vector reduction arising from fluctuation corrections to this thermodynamics. We show that the metric must be incorporated as a classical, not a matrix-valued, field, with the source for gravity the exactly covariantly conserved trace stress-energy tensor of the matter fields. We then study corrections to the classical gravitational action induced by the dynamics of the matrix-valued matter fields, by examining the average over the trace dynamics canonical ensemble of the matter field action, in the presence of a general background metric. Using constraints from global Weyl scaling and three-space general coordinate transformations, we show that to zeroth order in derivatives of the metric, the induced gravitational action in the preferred rest frame of the trace dynamics canonical ensemble must have the form $$\Delta S=\int d^4x (^{(4)}g)^{1/2}(g_{00})^{-2} A\big(g_{0i} g_{0j} g^{ij}/g_{00}, D^ig_{ij}D^j/g_{00}, g_{0i}D^i/g_{00}\big),$$ with $D^i$ is defined through the co-factor expansion of $^{(4)}g$ by $^{(4)}g/{^{(3)}g}=g_{00}+g_{0i}D^i$, and with $A(x,y,z)$ a general function of its three arguments. This action has "chameleon-like" properties: For the Robertson-Walker cosmological metric, it {\it exactly} reduces to a cosmological constant, but for the Schwarzschild metric it diverges as $(1-2M/r)^{-2}$ near the Schwarzschild radius, indicating that it will substantially affect the horizon structure. ### Further matters in space-time geometry: $f(R,T,R_{\mu\nu}T^{\mu\nu})$ gravity [Cross-Listing] We consider a gravitational model in which matter is non-minimally coupled to geometry, with the effective Lagrangian of the gravitational field being given by an arbitrary function of the Ricci scalar, the trace of the matter energy-momentum tensor, and the contraction of the Ricci tensor with the matter energy-momentum tensor. The field equations of the model are obtained in the metric formalism, and the equation of motion of a massive test particle is derived. In this type of models the matter energy-momentum tensor is generally not conserved, and this non-conservation determines the appearance of an extra-force acting on the particles in motion in the gravitational field. The Newtonian limit of the model is also considered, and an explicit expression for the extra-acceleration which depends on the matter density is obtained in the small velocity limit for dust particles. We also analyze in detail the so-called Dolgov-Kawasaki instability, and obtain the stability conditions of the model with respect to local perturbations. A particular class of gravitational field equations can be obtained by imposing the conservation of the energy-momentum tensor. We derive the corresponding field equations for the conservative case by using a Lagrange multiplier method, from a gravitational action that explicitly contains an independent parameter multiplying the divergence of the energy-momentum tensor. The cosmological implications of the model are investigated for both the conservative and non-conservative cases, and several classes of analytical solutions are obtained. ### Further matters in space-time geometry: $f(R,T,R_{\mu\nu}T^{\mu\nu})$ gravity [Replacement] We consider a gravitational model in which matter is non-minimally coupled to geometry, with the effective Lagrangian of the gravitational field being given by an arbitrary function of the Ricci scalar, the trace of the matter energy-momentum tensor, and the contraction of the Ricci tensor with the matter energy-momentum tensor. The field equations of the model are obtained in the metric formalism, and the equation of motion of a massive test particle is derived. In this type of models the matter energy-momentum tensor is generally not conserved, and this non-conservation determines the appearance of an extra-force acting on the particles in motion in the gravitational field. The Newtonian limit of the model is also considered, and an explicit expression for the extra-acceleration which depends on the matter density is obtained in the small velocity limit for dust particles. We also analyze in detail the so-called Dolgov-Kawasaki instability, and obtain the stability conditions of the model with respect to local perturbations. A particular class of gravitational field equations can be obtained by imposing the conservation of the energy-momentum tensor. We derive the corresponding field equations for the conservative case by using a Lagrange multiplier method, from a gravitational action that explicitly contains an independent parameter multiplying the divergence of the energy-momentum tensor. The cosmological implications of the model are investigated for both the conservative and non-conservative cases, and several classes of analytical solutions are obtained. ### Further matters in space-time geometry: $f(R,T,R_{\mu\nu}T^{\mu\nu})$ gravity [Replacement] We consider a gravitational model in which matter is non-minimally coupled to geometry, with the effective Lagrangian of the gravitational field being given by an arbitrary function of the Ricci scalar, the trace of the matter energy-momentum tensor, and the contraction of the Ricci tensor with the matter energy-momentum tensor. The field equations of the model are obtained in the metric formalism, and the equation of motion of a massive test particle is derived. In this type of models the matter energy-momentum tensor is generally not conserved, and this non-conservation determines the appearance of an extra-force acting on the particles in motion in the gravitational field. The Newtonian limit of the model is also considered, and an explicit expression for the extra-acceleration which depends on the matter density is obtained in the small velocity limit for dust particles. We also analyze in detail the so-called Dolgov-Kawasaki instability, and obtain the stability conditions of the model with respect to local perturbations. A particular class of gravitational field equations can be obtained by imposing the conservation of the energy-momentum tensor. We derive the corresponding field equations for the conservative case by using a Lagrange multiplier method, from a gravitational action that explicitly contains an independent parameter multiplying the divergence of the energy-momentum tensor. The cosmological implications of the model are investigated for both the conservative and non-conservative cases, and several classes of analytical solutions are obtained. ### The Structure of the Gravitational Action and its relation with Horizon Thermodynamics and Emergent Gravity Paradigm [Replacement] If gravity is an emergent phenomenon, as suggested by several recent results, then the structure of the action principle for gravity should encode this fact. With this motivation we study several features of the Einstein-Hilbert action and establish direct connections with horizon thermodynamics. We begin by introducing the concept of holographically conjugate variables (HCVs) in terms of which the surface term in the action has a specific relationship with the bulk term. In addition to g_{ab} and its conjugate momentum \sqrt{-g} M^{cab}, this procedure allows us to (re)discover and motivate strongly the use of f^{ab}=\sqrt{-g}g^{ab} and its conjugate momentum N^c_{ab}. The gravitational action can then be interpreted as a momentum space action for these variables. We also show that many expressions in classical gravity simplify considerably in this approach. For example, the field equations can be written in a form analogous to Hamilton’s equations for a suitable Hamiltonian if we use these variables. More importantly, the variation of the surface term, evaluated on any null surface which acts a local Rindler horizon can be given a direct thermodynamic interpretation. The term involving the variation of the dynamical variable leads to T\delta S while the term involving the variation of the conjugate momentum leads to S\delta T. We have found this correspondence only for the choice of variables (g_{ab}, \sqrt{-g} M^{cab}) or (f^{ab}, N^c_{ab}). We use this result to provide a direct thermodynamical interpretation of the boundary condition in the action principle, when it is formulated in a spacetime region bounded by the null surfaces. We analyse these features from several different perspectives and provide a detailed description, which offers insights about the nature of classical gravity and emergent paradigm. ### The Structure of the Gravitational Action and its relation with Horizon Thermodynamics and Emergent Gravity Paradigm [Replacement] If gravity is an emergent phenomenon, as suggested by several recent results, then the structure of the action principle for gravity should encode this fact. With this motivation we study several features of the Einstein-Hilbert action and establish direct connections with horizon thermodynamics. We begin by introducing the concept of holographically conjugate variables (HCVs) in terms of which the surface term in the action has a specific relationship with the bulk term. In addition to g_{ab} and its conjugate momentum \sqrt{-g} M^{cab}, this procedure allows us to (re)discover and motivate strongly the use of f^{ab}=\sqrt{-g}g^{ab} and its conjugate momentum N^c_{ab}. The gravitational action can then be interpreted as a momentum space action for these variables. We also show that many expressions in classical gravity simplify considerably in this approach. For example, the field equations can be written in a form analogous to Hamilton’s equations for a suitable Hamiltonian if we use these variables. More importantly, the variation of the surface term, evaluated on any null surface which acts a local Rindler horizon can be given a direct thermodynamic interpretation. The term involving the variation of the dynamical variable leads to T\delta S while the term involving the variation of the conjugate momentum leads to S\delta T. We have found this correspondence only for the choice of variables (g_{ab}, \sqrt{-g} M^{cab}) or (f^{ab}, N^c_{ab}). We use this result to provide a direct thermodynamical interpretation of the boundary condition in the action principle, when it is formulated in a spacetime region bounded by the null surfaces. We analyse these features from several different perspectives and provide a detailed description, which offers insights about the nature of classical gravity and emergent paradigm. ### The Structure of the Gravitational Action and its relation with Horizon Thermodynamics and Emergent Gravity Paradigm [Cross-Listing] If gravity is an emergent phenomenon, as suggested by several recent results, then the structure of the action principle for gravity should encode this fact. With this motivation we study several features of the Einstein-Hilbert action and establish direct connections with horizon thermodynamics. We begin by introducing the concept of holographically conjugate variables (HCVs) in terms of which the surface term in the action has a specific relationship with the bulk term. In addition to g_{ab} and its conjugate momentum \sqrt{-g} M^{cab}, this procedure allows us to (re)discover and motivate strongly the use of f^{ab}=\sqrt{-g}g^{ab} and its conjugate momentum N^c_{ab}. The gravitational action can then be interpreted as a momentum space action for these variables. We also show that many expressions in classical gravity simplify considerably in this approach. For example, the field equations can be written in a form analogous to Hamilton’s equations for a suitable Hamiltonian if we use these variables. More importantly, the variation of the surface term, evaluated on any null surface which acts a local Rindler horizon can be given a direct thermodynamic interpretation. The term involving the variation of the dynamical variable leads to T\delta S while the term involving the variation of the conjugate momentum leads to S\delta T. We have found this correspondence only for the choice of variables (g_{ab}, \sqrt{-g} M^{cab}) or (f^{ab}, N^c_{ab}). We use this result to provide a direct thermodynamical interpretation of the boundary condition in the action principle, when it is formulated in a spacetime region bounded by the null surfaces. We analyse these features from several different perspectives and provide a detailed description, which offers insights about the nature of classical gravity and emergent paradigm. ### Palatini approach to modified f(R) gravity and its bi-metric structure [Cross-Listing] f(R) gravity theories in the Palatini formalism has been recently used as an alternative way to explain the observed late-time cosmic acceleration with no need of invoking either dark energy or extra spatial dimension. However, its applications have shown that some subtleties of these theories need a more profound examination. Here we are interested in the conformal aspects of the Palatini approach in extended theories of gravity. As is well known, extremization of the gravitational action a la Palatini, naturally "selects" a new metric h related to the metric g of the subjacent manifold by a conformal transformation. The related conformal function is given by the derivative of f(R). In this work we examine the conformal symmetries of the flat (k=0) FLRW spacetime and find that its Conformal Killing Vectors are directly linked to the new metric h and also that each vector yields a different conformal function. ### Smoking guns of a bounce in modified theories of gravity through the spectrum of the gravitational waves [Replacement] We present an inflationary model preceded by a bounce in a metric theory a l\’{a} $f(R)$ where $R$ is the scalar curvature of the space-time. The model is asymptotically de Sitter such that the gravitational action tends asymptotically to a Hilbert-Einstein action, therefore modified gravity affects only the early stages of the universe. We then analyse the spectrum of the gravitational waves through the method of the Bogoliubov coefficients by two means: taking into account the gravitational perturbations due to the modified gravitational action in the $f(R)$ setup and by simply considering those perturbations inherent to the standard Hilbert-Einstein action. We show that there are distinctive (oscillatory) signals on the spectrum for very low frequencies; i.e. corresponding to modes that are currently entering the horizon. ### Smoking guns of a bounce in modified theories of gravity through the spectrum of the gravitational waves We present an inflationary model preceded by a bounce in a metric theory a l\’{a} $f(R)$ where $R$ is the scalar curvature of the space-time. The model is asymptotically de Sitter such that the gravitational action tends asymptotically to a Hilbert-Einstein action, therefore modified gravity affects only the early stages of the universe. We then analyse the spectrum of the gravitational waves through the method of the Bogoliubov coefficients by two means: taking into account the gravitational perturbations due to the modified gravitational action in the $f(R)$ setup and by simply considering those perturbations inherent to the standard Hilbert-Einstein action. We show that there are distinctive (oscillatory) signals on the spectrum for very low frequencies; i.e. corresponding to modes that are currently entering the horizon. ### Unimodular Constraint on global scale Invariance The global scale invariance along with the unimodular gravity in the vacuum is studied in this paper. The global scale invariant gravitational action which follows the unimodular general coordinate transformations is considered without invoking any scalar field. The possible solutions for the gravitational potential under linear field approximation for the allowed values of the introduced parameters of the theory are discussed. The modified solution has additional corrections along with the Schwarzschild solution. A comparative study of unimodular theory with conformal theory is also presented. Furthermore, the cosmological solution is studied and it is shown that the unimodular constraint preserve the de Sitter solution. ### Unimodular Constraint on global scale Invariance [Replacement] We study global scale invariance along with the unimodular gravity in the vacuum. The global scale invariant gravitational action which follows the unimodular general coordinate transformations is considered without invoking any scalar field. This is generalization of conformal theory described in the Ref. \cite{Mannheim}. The possible solutions for the gravitational potential under static linear field approximation are discussed. The new modified solution has additional corrections to the Schwarzschild solution which describe the galactic rotational curve. A comparative study of unimodular theory with conformal theory is also presented. Furthermore, the cosmological solution is studied and it is shown that the unimodular constraint preserve the de Sitter solution explaining the dark energy of the universe. ### A tensor instability in the Eddington inspired Born-Infeld Theory of Gravity [Cross-Listing] In this paper we consider an extension to Eddington’s proposal for the gravitational action. We study tensor perturbations of a homogeneous and isotropic space-time in the Eddington regime, where modifications to Einstein gravity are strong. We find that the tensor mode is linearly unstable deep in the Eddington regime and discuss its cosmological implications. ### A tensor instability in the Eddington inspired Born-Infeld Theory of Gravity [Replacement] In this paper we consider an extension to Eddington’s proposal for the gravitational action. We study tensor perturbations of a homogeneous and isotropic space-time in the Eddington regime, where modifications to Einstein gravity are strong. We find that the tensor mode is linearly unstable deep in the Eddington regime and discuss its cosmological implications. ### Renormalization group scale-setting from the action - a road to modified gravity theories [Cross-Listing] The renormalization group (RG) corrected gravitational action in Einstein-Hilbert and other truncations is considered. The running scale of the renormalization group is treated as a scalar field at the level of the action and determined in a scale-setting procedure recently introduced by Koch and Ramirez for the Einstein-Hilbert truncation. The scale-setting procedure is elaborated for other truncations of the gravitational action and applied to several phenomenologically interesting cases. It is shown how the logarithmic dependence of the Newton’s coupling on the RG scale leads to exponentially suppressed effective cosmological constant and how the scale-setting in particular RG corrected gravitational theories yields the effective $f(R)$ modified gravity theories with negative powers of the Ricci scalar $R$. The scale-setting at the level of the action at the non-gaussian fixed point in Einstein-Hilbert and more general truncations is shown to lead to universal effective action quadratic in Ricci tensor. Recently obtained analytical solutions for the quadratic action in $R$ are summarized as an illustration of the dynamics at the non-gaussian fixed point. ### Renormalization group scale-setting from the action - a road to modified gravity theories [Replacement] The renormalization group (RG) corrected gravitational action in Einstein-Hilbert and other truncations is considered. The running scale of the renormalization group is treated as a scalar field at the level of the action and determined in a scale-setting procedure recently introduced by Koch and Ramirez for the Einstein-Hilbert truncation. The scale-setting procedure is elaborated for other truncations of the gravitational action and applied to several phenomenologically interesting cases. It is shown how the logarithmic dependence of the Newton’s coupling on the RG scale leads to exponentially suppressed effective cosmological constant and how the scale-setting in particular RG corrected gravitational theories yields the effective $f(R)$ modified gravity theories with negative powers of the Ricci scalar $R$. The scale-setting at the level of the action at the non-gaussian fixed point in Einstein-Hilbert and more general truncations is shown to lead to universal effective action quadratic in Ricci tensor. ### Quantum corrections to gravity and their implications for cosmology and astrophysics The quantum contributions to the gravitational action are relatively easy to calculate in the higher derivative sector of the theory. However, the applications to the post-inflationary cosmology and astrophysics require the corrections to the Einstein-Hilbert action and to the cosmological constant, and those we can not derive yet in a consistent and safe way. At the same time, if we assume that these quantum terms are covariant and that they have relevant magnitude, their functional form can be defined up to a single free parameter, which can be defined on the phenomenological basis. It turns out that the quantum correction may lead, in principle, to surprisingly strong and interesting effects in astrophysics and cosmology. ### Weyl-Cartan-Weitzenb\"{o}ck gravity as a generalization of teleparallel gravity [Replacement] We consider a gravitational model in a Weyl-Cartan space-time, in which the Weitzenb\"{o}ck condition of the vanishing of the sum of the curvature and torsion scalar is also imposed. Moreover, a kinetic term for the torsion is also included in the gravitational action. The field equations of the model are obtained from a Hilbert-Einstein type variational principle, and they lead to a complete description of the gravitational field in terms of two fields, the Weyl vector and the torsion, respectively, defined in a curved background. The cosmological applications of the model are investigated for a particular choice of the free parameters in which the torsion vector is proportional to the Weyl vector. Depending on the numerical values of the parameters of the cosmological model, a large variety of dynamic evolutions can be obtained, ranging from inflationary/accelerated expansions to non-inflationary behaviors. In particular we show that a de Sitter type late time evolution can be naturally obtained from the field equations of the model. Therefore the present model leads to the possibility of a purely geometrical description of the dark energy, in which the late time acceleration of the Universe is determined by the intrinsic geometry of the space-time. ### Weyl-Cartan-Weitzenb\"{o}ck gravity [Cross-Listing] We consider a gravitational model in a Weyl-Cartan space-time, in which the Weitzenb\”{o}ck condition of the vanishing of the sum of the curvature and torsion scalar is also imposed. Moreover, a kinetic term for the torsion is also included in the gravitational action. The field equations of the model are obtained from a Hilbert-Einstein type variational principle, and they lead to a complete description of the gravitational field in terms of two fields, the Weyl vector and the torsion, respectively, defined in a curved background. The cosmological applications of the model are investigated for a particular choice of the free parameters in which the torsion vector is proportional to the Weyl vector. Depending on the numerical values of the parameters of the cosmological model, a large variety of dynamic evolutions can be obtained, ranging from inflationary/accelerated expansions to non-inflationary behaviors. In particular we show that a de Sitter type late time evolution can be naturally obtained from the field equations of the model. Therefore the present model leads to the possibility of a purely geometrical description of the dark energy, in which the late time acceleration of the Universe is determined by the intrinsic geometry of the space-time. ### On the stability of the cosmological solutions in f(R,G) gravity [Replacement] Modified gravity is one of the most promising candidates for explaining the current accelerating expansion of the Universe, and even its unification with the inflationary epoch. Nevertheless, the wide range of models capable to explain the phenomena of dark energy, imposes that current research focuses on a more precise study of the possible effects of modified gravity may have on both cosmological and local levels. In this paper, we focus on the analysis of a type of modified gravity, the so-called f(R,G) gravity and we perform a deep analysis on the stability of important cosmological solutions. This not only can help to constrain the form of the gravitational action, but also facilitate a better understanding of the behavior of the perturbations in this class of higher order theories of gravity, which will lead to a more precise analysis of the full spectrum of cosmological perturbations in future. ### On the stability of the cosmological solutions in $f(R,G)$ gravity [Cross-Listing] Modified gravity is one of the most promising candidates for explaining the current accelerating expansion of the Universe, and even its unification with the inflationary epoch. Nevertheless, the wide range of models capable to explain the phenomena of dark energy, imposes that current research focuses on a more precise study of the possible effects of modified gravity may have on both cosmological and local levels. In this paper, we focus on the analysis of a type of modified gravity, the so-called $f(R,G)$ gravity and we perform a deep analysis on the stability of important cosmological solutions. This not only can help to constrain the form of the gravitational action, but also facilitate a better understanding of the behavior of the perturbations in this class of higher order theories of gravity, which will lead to a more precise analysis of the full spectrum of cosmological perturbations in future. ### Gravitational effects of the faraway matter on the rotation curves of spiral galaxies It was recently shown that in cosmology the gravitational action of faraway matter has quite relevant effects, if retardation of the forces and discreteness of matter (with its spatial correlation) are taken into account. Indeed, far matter was found to exert, on a test particle, a force per unit mass of the order of 0.2 cH0 . It is shown here that such a force can account for the observed rotational velocity curves in spiral galaxies, if the force is assumed to be decorrelated beyond a sufficiently large distance, of the order of 1 kpc. In particular we fit the rotation curves of the galaxies NGC 3198, NGC 2403, UGC 2885 and NGC 4725 without any need of introducing dark matter at all. Two cases of galaxies presenting faster than keplerian decay are also considered. ### Towards singularity and ghost free theories of gravity [Cross-Listing] We present the most general ghost-free gravitational action in a Minkowski vacuum. Apart from the much studied f(R) models, this includes a large class of non-local actions with improved UV behavior, which nevertheless recover Einstein’s general relativity in the IR. ### Towards singularity and ghost free theories of gravity [Replacement] We present the most general covariant ghost-free gravitational action in a Minkowski vacuum. Apart from the much studied f(R) models, this includes a large class of non-local actions with improved UV behavior, which nevertheless recover Einstein’s general relativity in the IR.
2014-10-31 21:52:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8310936689376831, "perplexity": 388.48665694822836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900397.29/warc/CC-MAIN-20141030025820-00084-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.biostars.org/p/420008/
Is there a tool for calculating the number of uniquely mapped reads, ambigiously mapped reads, and unmapped reads from sam/bam? 0 0 Entering edit mode 2.3 years ago O.rka ▴ 540 I found a tool called MMQuant but this is suppose to be drop in replacement (though, it's not) for featureCounts. I'm trying to calculate the number of uniquely mapped reads, the number of ambigiously mapped reads, and the number of unmapped reads. Is there a tool that can calculate these basic measures for sam/bam files? RNA-Seq • 773 views 1 Entering edit mode why not just use featureCounts in the first place? (== might be useful to mention why do you want an alternative for it when asking for alternatives) htseq-count ? 0 Entering edit mode I’m getting a large number of ambiguously mapped reads and a very inconsistent gene expression between replicates. I suspect it’s a result of these ambiguous reads. I want to know exactly what proportion of the read pairs are ambiguously mapped. 0 Entering edit mode I hadn't realized this information is in the summary output of featureCounts. 0 Entering edit mode use combination of samtools and broad institute's website for flags 0 Entering edit mode BBTools (reformat.sh has a number of stats options) for aligned data files.
2022-05-17 10:04:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43204182386398315, "perplexity": 4068.5221152502563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00593.warc.gz"}
https://answers.ros.org/questions/55524/revisions/
# Revision history [back] ### Asus extrinsic calibration failing on ir image I am trying to follow the camera_pose_calibration guidelines as in http://www.ros.org/wiki/openni_launch/Tutorials/ExtrinsicCalibration for the Asus Xtion Pro Live. (Using Fuerte) The checkerboard on the colour images are detected, but on my IR images it doesn't do anything. One thing I noticed was that if I open the IR stream in image_view, it is completely black. It stays black, even if I point a halogen lamp directly at the camera. However, right-clicking saves me a nice picture of the scene. The viewer for camera_pose_calibration however, shows a nice image of the IR (but it doesn't detect a checkerboard). I have written a node that takes the IR image, and increases the contrast (multiplies everything by 256). The image now becomes visible in image_view. The image in the camera_pose_calibration is completely white, but the pattern seems to be detected. (I still get "Couldn't get measurement in interval", but I'm not sure that it is related to this problem) Can it be that there is a conversion error somewhere? I noticed the IR image is mono16 in the messages. By multiplying it by 256 I actually shifted it by 8 bit. ### Asus extrinsic calibration failing on ir image I am trying to follow the camera_pose_calibration guidelines as in http://www.ros.org/wiki/openni_launch/Tutorials/ExtrinsicCalibration ExtrinsicCalibration for the Asus Xtion Pro Live. (Using Fuerte) The checkerboard on the colour images are detected, but on my IR images it doesn't do anything. One thing I noticed was that if I open the IR stream in image_view, it is completely black. It stays black, even if I point a halogen lamp directly at the camera. However, right-clicking saves me a nice picture of the scene. The viewer for camera_pose_calibration however, shows a nice image of the IR (but it doesn't detect a checkerboard). I have written a node that takes the IR image, and increases the contrast (multiplies everything by 256). The image now becomes visible in image_view. The image in the camera_pose_calibration is completely white, but the pattern seems to be detected. (I still get "Couldn't get measurement in interval", but I'm not sure that it is related to this problem) Can it be that there is a conversion error somewhere? I noticed the IR image is mono16 in the messages. By multiplying it by 256 I actually shifted it by 8 bit. ### Asus extrinsic calibration failing on ir image I am trying to follow the camera_pose_calibration guidelines as in ExtrinsicCalibration for the Asus Xtion Pro Live. (Using Fuerte) The checkerboard on the colour images are detected, but on my IR images it doesn't do anything. One thing I noticed was that if I open the IR stream in image_view, it is completely black. It stays black, even if I point a halogen lamp directly at the camera. However, right-clicking saves me a nice picture of the scene. The viewer for camera_pose_calibration however, shows a nice image of the IR (but it doesn't detect a checkerboard). I have written a node that takes the IR image, and increases the contrast (multiplies everything by 256). The image now becomes visible in image_view. The image in the camera_pose_calibration is completely white, but the pattern seems to be detected. (I still get "Couldn't get measurement in interval", but I'm not sure that it is related to this problem) Can it be that there is a conversion error somewhere? I noticed the IR image is mono16 in the messages. By multiplying it by 256 I actually shifted it by 8 bit. Here is the callback I use to transform the image (I don't get the formatting right): void imageCallback(const sensor_msgs::Image::ConstPtr& msg) { cv_bridge::CvImagePtr cv_ptr; try { cv_ptr = cv_bridge::toCvCopy(msg); cv_ptr->image.convertTo(cv_ptr->image, -1, alpha, beta); } catch (cv_bridge::Exception& e) { ROS_ERROR("cv_bridge exception: %s", e.what()); return; } image_pub.publish(cv_ptr->toImageMsg()); } ### Asus extrinsic calibration failing on ir image I am trying to follow the camera_pose_calibration guidelines as in ExtrinsicCalibration for the Asus Xtion Pro Live. (Using Fuerte) The checkerboard on the colour images are detected, but on my IR images it doesn't do anything. One thing I noticed was that if I open the IR stream in image_view, it is completely black. It stays black, even if I point a halogen lamp directly at the camera. However, right-clicking saves me a nice picture of the scene. The viewer for camera_pose_calibration however, shows a nice image of the IR (but it doesn't detect a checkerboard). I have written a node that takes the IR image, and increases the contrast (multiplies everything by 256). The image now becomes visible in image_view. The image in the camera_pose_calibration is completely white, but the pattern seems to be detected. (I still get "Couldn't get measurement in interval", but I'm not sure that it is related to this problem) Can it be that there is a conversion error somewhere? I noticed the IR image is mono16 in the messages. By multiplying it by 256 I actually shifted it by 8 bit. Here is the callback I use to transform the image (I don't get the formatting right): void imageCallback(const sensor_msgs::Image::ConstPtr& msg) { cv_bridge::CvImagePtr cv_ptr; try { cv_ptr = cv_bridge::toCvCopy(msg); cv_ptr->image.convertTo(cv_ptr->image, -1, alpha, beta); } catch (cv_bridge::Exception& e) { ROS_ERROR("cv_bridge exception: %s", e.what()); return; } image_pub.publish(cv_ptr->toImageMsg()); } You can download the node for the workaround here and see a short explanation of it here. 5 No.5 Revision Martin Günther 11169 ●78 ●138 ●189 http://robotik.dfki-br... ### Asus extrinsic calibration failing on ir image I am trying to follow the camera_pose_calibration guidelines as in ExtrinsicCalibration for the Asus Xtion Pro Live. (Using Fuerte) The checkerboard on the colour images are detected, but on my IR images it doesn't do anything. One thing I noticed was that if I open the IR stream in image_view, it is completely black. It stays black, even if I point a halogen lamp directly at the camera. However, right-clicking saves me a nice picture of the scene. The viewer for camera_pose_calibration however, shows a nice image of the IR (but it doesn't detect a checkerboard). I have written a node that takes the IR image, and increases the contrast (multiplies everything by 256). The image now becomes visible in image_view. The image in the camera_pose_calibration is completely white, but the pattern seems to be detected. (I still get "Couldn't get measurement in interval", but I'm not sure that it is related to this problem) Can it be that there is a conversion error somewhere? I noticed the IR image is mono16 in the messages. By multiplying it by 256 I actually shifted it by 8 bit. Here is the callback I use to transform the image (I don't get the formatting right): image: void imageCallback(const sensor_msgs::Image::ConstPtr& msg) { cv_bridge::CvImagePtr cv_ptr; try { cv_ptr = cv_bridge::toCvCopy(msg); cv_ptr->image.convertTo(cv_ptr->image, -1, alpha, beta); } catch (cv_bridge::Exception& e) { ROS_ERROR("cv_bridge exception: %s", e.what()); return; } image_pub.publish(cv_ptr->toImageMsg()); } You can download the node for the workaround here and see a short explanation of it here. 6 retagged ### Asus extrinsic calibration failing on ir image I am trying to follow the camera_pose_calibration guidelines as in ExtrinsicCalibration for the Asus Xtion Pro Live. (Using Fuerte) The checkerboard on the colour images are detected, but on my IR images it doesn't do anything. One thing I noticed was that if I open the IR stream in image_view, it is completely black. It stays black, even if I point a halogen lamp directly at the camera. However, right-clicking saves me a nice picture of the scene. The viewer for camera_pose_calibration however, shows a nice image of the IR (but it doesn't detect a checkerboard). I have written a node that takes the IR image, and increases the contrast (multiplies everything by 256). The image now becomes visible in image_view. The image in the camera_pose_calibration is completely white, but the pattern seems to be detected. (I still get "Couldn't get measurement in interval", but I'm not sure that it is related to this problem) Can it be that there is a conversion error somewhere? I noticed the IR image is mono16 in the messages. By multiplying it by 256 I actually shifted it by 8 bit. Here is the callback I use to transform the image: void imageCallback(const sensor_msgs::Image::ConstPtr& msg) { cv_bridge::CvImagePtr cv_ptr; try { cv_ptr = cv_bridge::toCvCopy(msg); cv_ptr->image.convertTo(cv_ptr->image, -1, alpha, beta); } catch (cv_bridge::Exception& e) { ROS_ERROR("cv_bridge exception: %s", e.what()); return; } image_pub.publish(cv_ptr->toImageMsg()); } You can download the node for the workaround here and see a short explanation of it here. 7 retagged ### Asus extrinsic calibration failing on ir image I am trying to follow the camera_pose_calibration guidelines as in ExtrinsicCalibration for the Asus Xtion Pro Live. (Using Fuerte) The checkerboard on the colour images are detected, but on my IR images it doesn't do anything. One thing I noticed was that if I open the IR stream in image_view, it is completely black. It stays black, even if I point a halogen lamp directly at the camera. However, right-clicking saves me a nice picture of the scene. The viewer for camera_pose_calibration however, shows a nice image of the IR (but it doesn't detect a checkerboard). I have written a node that takes the IR image, and increases the contrast (multiplies everything by 256). The image now becomes visible in image_view. The image in the camera_pose_calibration is completely white, but the pattern seems to be detected. (I still get "Couldn't get measurement in interval", but I'm not sure that it is related to this problem) Can it be that there is a conversion error somewhere? I noticed the IR image is mono16 in the messages. By multiplying it by 256 I actually shifted it by 8 bit. Here is the callback I use to transform the image: void imageCallback(const sensor_msgs::Image::ConstPtr& msg) { cv_bridge::CvImagePtr cv_ptr; try { cv_ptr = cv_bridge::toCvCopy(msg); cv_ptr->image.convertTo(cv_ptr->image, -1, alpha, beta); } catch (cv_bridge::Exception& e) { ROS_ERROR("cv_bridge exception: %s", e.what()); return; } image_pub.publish(cv_ptr->toImageMsg()); } You can download the node for the workaround here and see a short explanation of it here.
2021-06-21 05:37:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5699448585510254, "perplexity": 2296.9565977851207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488262046.80/warc/CC-MAIN-20210621025359-20210621055359-00317.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/8249
# ABSOLUTE INFRARED INTENSITY MEASUREMENTS IN THIN FILMS II. SOLIDS DEPOSITED ON HALIDE $PLATES^{*}$ Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/8249 Files Size Format View 1963-H-09.jpg 136.2Kb JPEG image Title: ABSOLUTE INFRARED INTENSITY MEASUREMENTS IN THIN FILMS II. SOLIDS DEPOSITED ON HALIDE $PLATES^{*}$ Creators: Thyagarajan, G.; Schatz, P. N. Issue Date: 1963 Publisher: Ohio State University Abstract: “A theoretical investigation has been made of the error to be expected if reflection effects are neglected when absolute intensities are measured in thin solid films deposited on halide plates. The analysis has been applied to the experimental intensity results of Person and co-$workers^{1}$ on the antisymmetric stretching mode of $CS_{2}(s)$ deposited on AgCl, and to the intensity results of Dows and $Wieder^{2}$ on the two infrared active fundamentals of $SF_{6}(s)$ deposited on AgCl. The theoretical treatment is completely rigorous except for the fact that the vibrational absorption band is approximated by the damped oscillator model. Care has been taken to insure that the base line in the theoretical calculations corresponds to the one used experimentally. This is essential if meaningful comparisons are to be made. It is found in all three bands that the true intensity is predicted to be substantially (25-33%) than the intensity measured experimentally. Furthermore, in the case of the two $SF_{6}$ fundamental bands which are characterized by very large ratios of observed band intensity to observed band width, the shape of the experimentally observed band is influenced to a remarkable degree by reflection effects.” Description: $^{*}$Supported by a grant from the National Science Foundation. $^{\dag}$Present address: Department of Physics. Indian Institute of Technology, Powai, Bombay 76, India. $^{1}$W. E. Person, private communication. $^{2}$D. A. Dows and G. M. Wilder, Spectrochim, Acta. 18, 1567 (1962). Author Institution: Department of Chemistry, University of Virginia URI: http://hdl.handle.net/1811/8249 Other Identifiers: 1963-H-9
2017-04-29 23:25:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5325862765312195, "perplexity": 2388.5698002258478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123632.58/warc/CC-MAIN-20170423031203-00055-ip-10-145-167-34.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/369971/tricky-wording-on-congruence-modulo-question
# “Tricky” wording on Congruence Modulo Question? I'm asked for all possible values, but I can only see one. The question on my practice exam reads: Consider the equivalence class [3] for the equivalence relation "congruence modulo $7$" on $\Bbb Z$. Suppose that $S = {1, 2, ..., N}$, where $N$ is a positive integer. Find all possible values of $N$ so that $[3] \cap S$ contains exactly $10$ elements. As I see it, $S$ must be $\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\}$, so all possible values of $N$ are $12$? $3$ and $10$ and members of $[3]$, but $13$ is not, so any higher values would have more than $10$ elements. - What about $\,S=\{1,2\ldots,73\}\,$ ? How many integers here equal $\,3\pmod 7\,$ ? – DonAntonio Apr 23 '13 at 3:12 Oh boy... hours of studying have left me unable to discern between union and intersection. Thank you :) – GotterdammerunG Apr 23 '13 at 3:21 Recall that $[3]$ is an equivalence class, representing a set: $$x \in [3] \iff x \equiv 3 \pmod 7 \implies x = 7k + 3 \;\text{ where}\;\;k\in \mathbb Z$$ What integers $x$ in $\mathbb N = \{1, 2, 3, ....\}$, are such that $x = 7k + 3, k\geq 0\,$? We need the first ten such elements in the natural numbers, and let's call the set of the first ten elements $X$: $$X =\{3, 10, 17, 24, 31, 38, 45, 52, 59, \bf 66\}$$ is the set of the first ten elements in $[3]$, if we are considering only values of $x \in [3]$ as a subset of the natural numbers. $$S \subset \mathbb N = \{1, 2, 3, \cdots, 65, {\bf 66}\}$$ $$|X \cap S| = 10 \implies \bf N = 66$$ - Union / Intersection confusion: time for a sandwich break :) Thanks again – GotterdammerunG Apr 23 '13 at 3:24 100% What a great website/community here! – GotterdammerunG Apr 23 '13 at 3:26 Glad to help! ;-) – amWhy Apr 23 '13 at 3:27 Hahaha... still cheaper than a tutor ;) I do appreciate it! – GotterdammerunG Apr 23 '13 at 3:31 Yes, and likely quicker than a waiting until you're next meeting with a tutor! ;-) – amWhy Apr 23 '13 at 3:32 The first $10$ positive integers that belong to the equivalence class are $3, 10, 17, 24, 31, 38, 45, 52, 59, 66$. So we need to go up to $66$ at least, and we don't want the next one, $73$, for that would put us over. Remark: The number $7$ definitely does not belong to the equivalence class of $3$ modulo $7$, since the difference between $7$ and $3$ is not divisible by $7$. The equivalence class of $3$, in symbols $[3]$, consists of all integers $n$, positive, negative, or $0$, such that $n\equiv 3\pmod{7}$. In more old-fashioned language, it consists of all integers $n$ of the form $7k+3$. Very soon, we start thinking of $[3]$ as a single abstract object, and we kind of forget that, in principle, it is an infinite set. In that sense, the question was a bit of a trick question. - Framing an equivalence class as you did has given me a more intuitive understanding of the subject. Those problems should be a piece of cake now, at least in the context of my current math class. Thank you. – GotterdammerunG Apr 23 '13 at 3:38 @GotterdammerunG: You are welcome. But soon, as I mentioned, you will need to think of the equivalence classes mainly as new kinds of objects which are as concrete as the integers themselves. – André Nicolas Apr 23 '13 at 3:50
2015-11-26 00:04:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7690995335578918, "perplexity": 244.6271460283522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446230.43/warc/CC-MAIN-20151124205406-00034-ip-10-71-132-137.ec2.internal.warc.gz"}
https://forum.solidworks.com/message/8571
# Pendulum in COSMOS Question asked by Mike L on Mar 26, 2007 Latest reply on Apr 5, 2007 by Mike L Hello. This is my first post here, so please reply : ) I created a very simple physical pendulum (see first image below)for the sole purpose of finding out if I can simulate one inCOSMOS. Well, turns out I cannot - or I'm missing something, whichis hopefully the case - and I'll get some hints from the communityregarding the issue. So I have two parts: The support (fixed in the assembly) and thependulum itself, which is mated to the support so that it canrotate freely. I've added Gravity and moved the pendulum to itsinitial position. Then I started the simulation thinking that the pendulum willoscillate as it should, but in reality, my pendulum reachedvertical position and stopped! Second image shows this finalposition. So is there a way to make this thing oscillate? I'd like to try todesign some pendulum-driven mechanisms (clock?), but unless thisissue can be resolved - I'll just have to move on to learning otheraspects SolidWorks.
2019-08-25 06:47:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8777084350585938, "perplexity": 1967.0295944382226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323221.23/warc/CC-MAIN-20190825062944-20190825084944-00503.warc.gz"}
http://math.stackexchange.com/questions/72280/system-of-difference-equations
# System of difference equations Is there an effective way of finding a particular $x_n$, say $x_5$, of a system of difference equations $x_{n}=ax_{n+1}+bx_{n-1}$ where $a, b$ are constants and the $n$'s say are $\leq k$ (apart from actually substituting each equation into the next)? Thanks. - You could solve the constant-coefficient linear difference equation explicitly. This is second-order, so your characteristic polynomial is quadratic... –  J. M. Oct 13 '11 at 12:35 @J.M.: Thanks. I am being silly. –  olga Oct 13 '11 at 12:45 See this answer for a closed formula to solve such equations. –  Pierre-Yves Gaillard Oct 13 '11 at 13:14
2014-10-24 11:26:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8969922065734863, "perplexity": 758.1032460125334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645866.1/warc/CC-MAIN-20141024030045-00285-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.speedsolving.com/threads/monthly-computer-cube-competition-14-august-2010-special-edition.22931/
# Monthly Computer Cube Competition 14: August 2010 (special edition!) #### qqwref ##### Member This is the monthly speedsolving.com computer cube competition! But... it's a bit different this time. 2x2 and 3x3 seem to be the only popular events, so for this month I've kicked it up a notch on the others. Who doesn't like a challenge, eh? They're all best of 1 except 3BLD, so you don't have to go crazy if you don't like solving so much. Here are the rules. - Do all of the solves for each event consecutively (no practice solves in between), and you must decide that you're doing official solves right before you start the first one. You can redo a solve if you get a computer-related problem. - You may use any simulator you want (if it supports the puzzle of course). - NO MACROS! You can't do more than one turn per key press. - Try to keep inspection under 15 seconds. - I have the right to ask for proof that you are capable of the times you claim. - For the 3x3 and 2x2 events, the top 5 people get 6, 4, 3, 2, and 1 points in that order. For every other event, everyone who submits a non-DNF time gets 5 points. Here are some useful simulators: - Ryan Heise's hi-games. - Ryan Heise's 3x3 BLD sim. - Gelatinbrain for many puzzles. - Jeremy Fleischman's jflySim + qqTimer. - Mitchell Stern's NxN clock simulator. - My jsclock (dvorak version) or Tim Sun's sim for 3x3 clock. - My qCube. - My IsoMinxSim. This competition is over. Results are here. The current list of puzzles are as follows: - 2x2x2: Average of 12. - 3x3x3: Average of 12. - 8x8x8: Best of 1. - 9x9x9: Best of 1. - 10x10x10: Best of 1. - 20x20x20: Best of 1. - 1x5x5: Best of 1. - 4x4x5: Best of 1. - 4x5x5: Best of 1. - 3x3x3 BLD: Best of 5. - Clock (10x10): Best of 1. - Clock (20x20): Best of 1. - Clock (30x30): Best of 1. - Deep-Cut Helicopter Cube: Best of 1. (This is gelatinbrain 3.3.3.) - Gigaminx: Best of 1. - Lattice Cube: Best of 1. (This is gelatinbrain 3.2.7.) - Master FTO: Best of 1. (This is gelatinbrain 4.1.8.) - Master Pyraminx: Best of 1. (This is gelatinbrain 5.1.10.) - Master Skewb: Best of 1. (This is gelatinbrain 3.2.2.) - Master Super-X: Best of 1. (This is gelatinbrain 3.4.5.) - Square-2: Best of 1. (Use jflysim.) - Teraminx: Best of 1. Good luck and have fun! Last edited: #### hawkmp4 ##### Member Oh boy... I was just thinking about doing a teraminx solve the other day... I guess I have some motivation now. 1x5x5: 2:23.594 Gigaminx: 1:09:05.687 Last edited: #### uberCuber ##### Member congrats qq you have officially gotten me to love computer cubes 2x2: 5.96, 9.85, 11.64, 9.89, 8.69, 7.93, 5.75, 3.95, 6.70, 10.17, 7.44, 4.45 = 7.68 (a couple PLL skips ) 3x3: 53.29, 32.14, 37.25, 41.18, 36.27, 38.01, 49.47, 33.40, 35.01, 42.06, 30.27, 51.03 = 39.58 getting more used to this 1x5x5: 1:42.066 this thing is weird Clock (10x10): 5:36.243 first day i've ever tried solving solving any size clock sorry, i'm afraid that might be all I will be able to do for this comp...I don't have enough time to sit in front of the computer for as long as it would take me to solve one of those large cubes or a gigaminx or a 30x30 clock..and as for all that gelatinbrain stuff..I haven't even learned how to solve the normal versions of a Pyraminx, Skewb, FTO, etc...and I'm going back to school soon which means I won't have too much time to mess with them..next month if it is back to normal I should be able to do most things though Last edited: #### sz35 ##### Member I want to solve a sq2. when I open jflysim+qqtimer there is no sq2 option, only sq1/ #### hawkmp4 ##### Member I want to solve a sq2. when I open jflysim+qqtimer there is no sq2 option, only sq1/ Go to the Square-1 sim, then select 'Options' in the applet, and change the variation to Square-2. #### Anthony ##### Professional Speedcuber 2x2: 4.05, 4.91, 7.55, 9.14, 4.65, 6.34, 4.67, 3.88, 5.84, (11.83), 4.84, (3.58) I think I'll stick to real 2x2. 3x3: 14.09, 17.31, 16.82, 15.68, 15.05, (11.46), 15.68, 17.70, (18.59), 16.29, 16.88, 14.69 = 16.02 Pretty good considering I haven't done any computer cubing in quite a while. Last edited: #### sz35 ##### Member I want to solve a sq2. when I open jflysim+qqtimer there is no sq2 option, only sq1/ Go to the Square-1 sim, then select 'Options' in the applet, and change the variation to Square-2. Thanks Sq2: 3:45.17 3x3x3: 20.37, 32.99, 25.12, 22.37, 26.43, 29.39, 28.74, 17.98, 23.00, 22.17, 26.63, 25.10 = 24.93 Yes, I suck at computer cubes, and I'm proud of it. Last edited: #### MrData ##### Member 2x2: 3.05, (1.38), 3.28, 2.25, 5.13, 2.95, 3.59, (6.25), 2.13, 2.69, 4.17, 2.27 --> 3.15 Wow. This is really bad. 1.38 was nl. 3x3: 13.03, 14.41, 12.24, 12.48, 14.50, 14.49, (15.31), 10.09, 12.22, 11.41, (9.55), 12.69 --> 12.75 First computer cube avg since I got back from nats. The 9.55 is my new nl pb, everything else was meh. Last edited: #### cincyaviation ##### Member 3x3: 47.30, 57.23, 1:08.44, 39.22, 52.17, 50.06, 49.00, 38.97, 34.97, 42.89, 44.20, 38.25 = 45.93 #### plechoss ##### Member sq2 : 2:05.38 2x2 : 2.61, 3.28, 2.69, 2.86, 3.44, 2.55, 4.36, 3.25, 1.75, 2.44, 2.17, 2.58 = 2.79 ok 3x3 : 8.80, 11.97, 10.58, 13.03, 9.97, 9.78, 9.97, 10.16, 10.09, (14.19), (8.02), 12.84 = 10.72 :/ #### Jude ##### Member 4x4x4: 1:02.77 --> kinda annoying cus it was a really good reduction and no OLL parity, but the PLL was just 2 diagonal corners so I did PLL parity and N perm :\ #### mande ##### Member 2x2: 6.33, 4.84, 9.04, 5.43, 6.85, 4.98, (25.70), 12.55, (3.00), 5.33, 5.76, 7.47 = 6.86 Stupid counting 9 and 12. 3x3: 22.70, 23.29, 24.57, 27.98, 28.24, 37.88, 22.34, 35.18, (41.57), (18.95), 20.25, 19.43 = 26.18 I hate computer G-perms. Last edited: #### zosomaniac ##### Member only minxes- big ones Gigaminx: 1:45:19 (1783 moves) Teraminx : 2:54:36 (3473 moves) both done on ultimate magic cube #### Anonymous ##### Member 4x4x4: 1:02.77 --> kinda annoying cus it was a really good reduction and no OLL parity, but the PLL was just 2 diagonal corners so I did PLL parity and N perm :\ Are N-perms as bad on the computer as they are in real life? #### qqwref ##### Member 4x4x4: 1:02.77 --> kinda annoying cus it was a really good reduction and no OLL parity, but the PLL was just 2 diagonal corners so I did PLL parity and N perm :\ 4x4x4 isn't on the list #### qqwref ##### Member Here are my submissions for this month. 8x8x8: 4:36.774 9x9x9: 6:25.538 10x10x10: 11:17.943 2102 @ 3.1 lol Gigaminx: 7:05.932 Teraminx: 20:07.429 2744 @ 2.27 3x3x3: (12.497) 10.912 10.499 9.846 11.234 11.451 9.919 8.529 (8.297) 9.628 9.653 9.759 => 10.143 nice 2x2x2: (16.728) 3.169 2.361 2.934 5.199 (1.914) 4.462 3.538 2.973 2.72 8.891 3.678 => 3.993 fail Lattice Cube: 57 Master Skewb: 2:01 Master Super-X: 2:56 no parity yessss Master FTO: 8:05 Master Pyraminx: 2:12 Clock (10x10): 2:11.906 Clock (20x20): 9:59.765 sub10 wooo Clock (30x30): 25:56.750 6060 moves at 3.893 tps... dang 4x5x5: 8:42.625 ugh terrible, kept completely screwing up inner section (warmup was sub4) 4x4x5: 2:20.156 good Square-2: 2:12.406 redux was meh 1x5x5: 5.828 3x3x3 BLD: DNF DNF DNF 2:34.48 DNS thought I wouldn't get one, phew #### ben1996123 ##### Banned 8x8x8: 15:40.14 Comment - Done on Gabbasoft 1x5x5: 3.946 Comment - Done on isocubesim 4x4x5: 10:32.507 Comment - Done on isocubesim, first ever solve was 34:59.xyz :fp I wasted a lot of time trying to solve the middle layer. Clock (20x20): 1:36:24.489 Comment - Done in 2 parts, 1:20:00 of this was not solving time (should I remove this from the time or not?) #### Yes We Can! ##### Member 4x4x4: 1:02.77 --> kinda annoying cus it was a really good reduction and no OLL parity, but the PLL was just 2 diagonal corners so I did PLL parity and N perm :\ F R U' R' U' R U R' (U PLL parity U) R U R' U' R' F R F' #### qqwref ##### Member Clock (20x20): 1:36:24.489 Comment - Done in 2 parts, 1:20:00 of this was not solving time (should I remove this from the time or not?) Of course not. F R U' R' U' R U R' (U PLL parity U) R U R' U' R' F R F' You mean F R U' R' U' R U R' (U PLL parity U') F R U R' U' R' F R F' ? Personally I prefer (L U L') (PLL parity) y' R U R' U' R' F R2 U' R' U' R U R' U' F'. Here are the final results, and then the rankings for all events: Final Results 1: qqwref - 99 points!!! 2: ben1996123 - 20 points!! 3: plechoss & uberCuber - 15 points! 5: hawkmp4 & zosomaniac - 10 points 7: MrData - 7 points 8: sz35 - 6 points 9: Anthony - 4 points 10: mande - 1 point 11: cincyaviation - 0 points Individual events: Code: [B]2x2x2[/B] 1. plechoss: 2.787 2. MrData: 3.151 3. qqwref: 3.9925 4. Anthony: 5.587 5. mande: 6.858 6. uberCuber: 7.683 [B]3x3x3[/B] 1. qqwref: 10.1430 2. plechoss: 10.719 3. MrData: 12.756 4. Anthony: 16.019 5. sz35: 24.932 6. mande: 26.186 7. uberCuber: 39.582 8. cincyaviation: 45.929 [B]8x8x8[/B] 1. qqwref: 4:36.774 2. ben1996123: 15:40.14 [B]9x9x9[/B] 1. qqwref: 6:25.538 [B]10x10x10[/B] 1. qqwref: 11:17.943 [B]20x20x20[/B] [B]1x5x5[/B] 1. ben1996123: 3.946 2. qqwref: 5.828 3. uberCuber: 1:42.066 4. hawkmp4: 2:23.594 [B]4x4x5[/B] 1. qqwref: 8:42.625 2. ben1996123: 10:32.507 [B]4x5x5[/B] 1. qqwref: 2:20.156 [B]3x3 BLD[/B] 1. qqwref: 2:34.48 [B]Clock (10x10)[/B] 1. qqwref: 2:11.906 2. uberCuber: 5:36.243 [B]Clock (20x20)[/B] 1. qqwref: 9:59.765 2. uberCuber: 20:20.148 3. ben1996123: 1:36:24.489 [B]Clock (30x30)[/B] 1. qqwref: 25:56.750 [B]Deep-Cut Helicopter Cube[/B] [B]Gigaminx[/B] 1. qqwref: 7:05.932 2. hawkmp4: 1:09:05.687 3. zosomaniac: 1:45:19 [B]Lattice Cube[/B] 1. qqwref: 57 [B]Master FTO[/B] 1. qqwref: 8:05 [B]Master Pyraminx[/B] 1. qqwref: 2:12 [B]Master Skewb[/B] 1. qqwref: 2:01 [B]Master Super-X[/B] 1. qqwref: 2:56 [B]Square-2[/B] 1. plechoss: 2:05.38 2. qqwref: 2:12.406 3. sz35: 3:45.17 [B]Teraminx[/B] 1. qqwref: 20:07.429 2. zosomaniac: 2:54:36 #### ben1996123 ##### Banned Clock (20x20): 1:36:24.489 Comment - Done in 2 parts, 1:20:00 of this was not solving time (should I remove this from the time or not?) Of course not. Ok, didn't think so.
2019-12-16 08:16:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3931451141834259, "perplexity": 12377.012914062247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541318556.99/warc/CC-MAIN-20191216065654-20191216093654-00306.warc.gz"}
https://bodheeprep.com/cat-quant-practice-problems/365
# CAT Quant Practice Problems Question: A cow is tethered at point A by a rope. Neither the rope nor the cow is allowed to enter ?ABC. ∠BAC = 30° (AB)= (AC)= 10 m What is the area that can be grazed by the cow if the length of the rope is 8 m? $134\pi \frac{1}{3}$ sq.m $121\pi$sq. m $132\pi$sq. m $\frac{{176\pi }}{3}$sq. m #### CAT Quant Online Course • 1000+ Practice Problems • Detailed Theory of Every Topics • Online Live Sessions for Doubt Clearing • All Problems with Video Solutions CAT Quant Practice Problems
2019-01-22 18:21:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8557882905006409, "perplexity": 12723.30941879567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583867214.54/warc/CC-MAIN-20190122182019-20190122204019-00538.warc.gz"}
http://tex.stackexchange.com/tags/biblatex/new
# Tag Info 2 From the perspective of a technical editor, the two works, while identical in the passage you mentioned, are not the same. Therefore, both must be cited when quoted passages are the same or very similar, while only the relevant one would be cited when then two versions of a quoted passage are different. BibLaTeX is doing exactly what it is supposed to ... 1 After little modifying this answer: \documentclass[a4paper, 12pt]{report} %\usepackage[french]{babel} \usepackage[backend=bibtex, hyperref=true, url=false, isbn=false, backref=false, style=numeric-comp, maxcitenames=3, maxbibnames=100, ... 3 You need to alter the labelnumberwidth format \documentclass{scrbook} \begin{filecontents*}{\jobname.bib} @BOOK{ADA, author = {Example Author}, title = {Random Title}, publisher = {Some Publisher}, year = {2003}, location = {City}, edition = {2}, } \end{filecontents*} \usepackage[style=numeric-comp]{biblatex} ... 2 For inproceedings the relevant macro is chapter+pages not note+pages. You need to look in standard.bbx to discover this. With this change the solution works. \documentclass{article} \usepackage[style=authoryear, backend=biber]{biblatex} \usepackage{filecontents} \usepackage{hyperref} \begin{filecontents*}{\jobname.bib} @inproceedings{BarPalNumEst, ... 3 The backrefs are stored in the pageref list. Thus you can use \DeclareListFormat control how the backrefs are formatted. Here is a possible definition to achieve what you want: \DeclareListFormat{pageref}{% \ifthenelse{\value{listcount}<\value{liststop}} {#1\addcomma\addspace} {\ifnumequal{\value{listcount}}{\value{liststop}} {and #1} {}% ... 3 0 Just for the sake of completeness, an alternative method I have found while waiting for answers is: \documentclass{article} \usepackage[hyperref]{biblatex} \usepackage{hyperref} \usepackage{filecontents} \begin{filecontents*}{\jobname.bib} @book{Goossens1994LaTeX, author = {Michel Goossens and Frank Mittelbach and Alexander Samarin}, title = {The \LaTeX{} ... 6 You must override the strings defined in german.lbx with something like: \DefineBibliographyStrings{german}{% page = {p\adddot}, sequens = {sq\adddot}, sequentes = {sqq\adddot}, &c. }% So you'll have to scan through the list of strings (several hundreds) in german.lbx, and replace those you want to be in Latin form. 1 You have turned the and into a comma via \renewcommand*{\finalnamedelim}{\multinamedelim} Removing this command restores the and: \documentclass[a4paper,10pt]{report} \usepackage[bibstyle=authoryear,citestyle=authoryear,sorting=none, backend=biber,natbib,dashed=false]{biblatex} \addbibresource{sources.bib} \NewBibliographyString{available} ... 2 Since you are setting dashed=false anyway, you can just remove the definition of the macro byeditor+others which tells biblatex to use a dash when the author and editor are the same: \documentclass[11pt,a4paper,oneside]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[a4paper, top=2.5cm, right=2.5cm, bottom=2.5cm, ... 2 Here is a solution using biblatex \documentclass{article} \usepackage[hyperref]{biblatex} \usepackage{hyperref} \usepackage{filecontents} \begin{filecontents*}{\jobname.bib} @book{Goossens1994LaTeX, author = {Michel Goossens and Frank Mittelbach and Alexander Samarin}, title = {The \LaTeX{} Companion, $2^{nd}$ Edition}, publisher = {Addison-Wesley}, ... 2 Why not bibentry? \documentclass{article} \usepackage{filecontents} \begin{filecontents*}{mybib1.bib} @book{Goossens1994LaTeX, author = {Michel Goossens and Frank Mittelbach and Alexander Samarin}, title = {The \LaTeX{} Companion, $2^{nd}$ Edition}, publisher = {Addison-Wesley}, year = {1994{.}}, url = {www.tex.stackexchange.com} } ... 1 As it seems that you want to change the appearance of the number field just for those entries that are standardisodin (as defined in your other question here), you can use \DeclareFieldFormat[standardisodin]{number}{#1} to get rid of any prefix to the number field for this particular entry type. If you want to be more radical and do this for all entry ... 2 I recommend to use arara, a very flexible tool to compile LaTeX documents, including, but not limited to, biblatex with biber -- and also deleting the aux files generated in the process. I adapted from here: % arara: pdflatex: { shell: yes } % arara: biber % arara: pdflatex: { shell: yes } % arara: pdflatex: { shell: yes } % arara: clean: { files: [ ... 0 You are almost there, the only thing we have to do is to tell biblatex we want to be able to track "ibid" by ibidtracker=constrict (or any other option that turns on the ibidtracker, see p. 56 of the biblatex documentation for more on that, constrict is the authoryaer standard setting). In our new cite command we then only need to check whether the citation ... 2 The footnote package can save any footnotes entered inside a float and spit them out at the end (normally they are just thrown away, not sure why). You only need to add these two lines to your preamble: \usepackage{footnote} \makesavenoteenv{figure} You can \makesavenoteenv for any other environments you use, like tables. 1 Thanks to Ulrike Fischer I can post now the answer that I searched for. The three lines before \begin{document} are the key lines. \documentclass{article} \usepackage[utf8]{inputenc} \usepackage{filecontents} \usepackage[backend=biber]{biblatex} \begin{filecontents*}{references.bib} @misc { mybibkey, url = ... 2 You have at least two options to achieve this. \citefield and friends We can use \citefield and friends to access any field of any bibliography entry using the \citefield[<prenote>][<postnote>]{<key>}[<format>]{<field>} syntax (so \citefield does indeed work like your normal cite command). One needs to be aware, however, ... 5 The problem with your approach was that a refsection is local, as such its references are not accessible from outside that refsection, so when you cited Sun in the document you were (for biblatex at least) citing Sun in a "global" bibliography and not the Sun in the local refsection. biblatex also has refsegments that are "more global versions" of ... 1 I finally managed it. These three macros are required: For journal volume.number, month year % Comma before date; date not in parentheses \renewbibmacro*{issue+date}{% \setunit*{\addcomma\space}% \iffieldundef{issue} {\usebibmacro{date}} {\printfield{issue}% \setunit*{\addcomma\addspace}% \usebibmacro{date}}% \newunit} ... 0 Here is an MWE based on @Guido's answer: \documentclass{article} \usepackage{filecontents} % tlmgr install filecontents \begin{filecontents*}{btest.bib} @BOOK{Author2014, author = {Jane Doe and John Hansen and Tom Nielsen}, publisher = {Another}, title = {The Author book}, year = 2014 } \end{filecontents*} \usepackage[% style=ieee, isbn=true, ... 1 Contrary to traditional BibTeX style files, with biblatex the correct syntax is \addbibresource{references.bib} i.e., including the .bib filename extension. 3 You can redefine \footcitetext as follows: \DeclareCiteCommand{\footcitetext}[\footnotetext] {\bibsentence% \usebibmacro{cite:init}% \usebibmacro{prenote}} {\usebibmacro{citeindex}% \global\booltrue{cbx@mlafootnotes}% \renewcommand*{\newunitpunct}{\addcomma\space}% \usebibmacro{cite:mla:foot}} {} {\usebibmacro{mla:foot:postnote}} ... 6 EDIT: Added Title clickable (1. just the title cickable and 2. the whole reference clickable) 1. Just the Title reference clickable You can redefine the title macro and add the \href to the title using the DeclareFieldFormat. I edited the default definitions in the biblatex.def file. \DeclareFieldFormat{title}{\myhref{\mkbibemph{#1}}} \DeclareFieldFormat ... 1 There were several small errors in your source file and commands. You were missing commas after some of the fields in the .bib file, which meant the file wouldn't parse correctly. Also, you need to pass some additional options to biblatex-chicago. I corrected the style of the .bib entries to conform to Chicago style (headline-style capitalization for ... 4 biblatex has good control over breaking url's, without using the breakurl package. In particular you can set biburlnumpenalty to a non-zero number to allow breaks after digits. It is a counter, so use e.g. \setcounter{biburlnumpenalty}{10}: \documentclass[12pt]{article} \usepackage[T1]{fontenc} \usepackage[english]{babel} \usepackage{makeidx} ... 5 Use \AtEveryCitekey{% <--- You need the % 1 Here is possibly another way for ignoring the warnings; those come from a \warn command in the .bbl file: \$ sed -n '/\\warn/p' my_new_article.bbl \warn{\item Overwriting field 'year' with year value from field 'date' for entry 'author2001paper'} \warn{\item Overwriting field 'month' with month value from field 'date' for entry ... 1 I was just trying these examples with a fresh TexLive 2014; I had been using the \let\l@ENGLISH\l@english fix for a while in TexLive 2011 with success, but now with 2014 the same document gave me the dreaded warning: Package babel Warning: You haven't loaded the language ENGLISH yet (babel) I'll proceed, but expect unexpected results. ... 0 as it took some time for me to solve a similar problem (underlining one author), I just post my solution here. I had the problem with other solutions that the "et al." was dropped if the reference was abbrevated ... I hope this can help some else as well. \usepackage[normalem]{ulem} \renewcommand{\ULthickness}{0.5pt} \DeclareNameFormat{author}{% ... 2 The part of the citations is working with your code. The part of bibliography is possible modifying the macros of the style. a. The bibliography style authortitle is more similar to the style in the question that the authoryear. The it is suggested to use bibstyle=authortitle in the load of biblatex. b. The example only use the initial of firstnames, ... 3 You can adapt the solution proposed in this answer http://tex.stackexchange.com/a/203350/16895. We create a toggle and set it to true just before executing the loop code of the cite command for \citeauthor, and set the toggle false after the name has been printed. \newtoggle{citeauthor} \DeclareCiteCommand{\citeauthor} {\boolfalse{citetracker}% ... 3 biber uses hash funcions to distinguish between authors. To be identified as the same author, the name has to written exactly the same in both cases to produce the same hash. If you enter the name once with full first name and once with initials only, it will not produce the same hash and biber or biblatex will treat it as two different persons. 3 The citations use the printnames{labelname} for print the author names. Then you can modify or declare the format of labelname. The most simple way is using a alias. It means: \DeclareNameAlias{labelname}{last-first} but the above code not support the option uniquename because the last-first declaration not support this. Then other form is modifying the ... 2 You can get the Hebrew font in the bibliography by using your well-defined \textheb command directly in the .bib file itself. 4 Obviously, it would be best to have a properly formatted .bib file, i.e. one that biblatex and Biber can actually process without choking. But in some cases that's not really possible (or even desirable). Using Biber it is very easy to modify a source file on the fly. In your case, where you propose to have a latextitle field for consumption for ... 4 Use \AtNextBibliography{\small} or in the preamble \AtBeginBibliography{\small} 2 tabu uses some code derived partly from tabularx that sets the table multiple times to determine the column widths so you need to disable \autocite during the trials. tabu has a hook for that: \documentclass{article} \usepackage[backend=bibtex,style=authoryear-icomp]{biblatex} \usepackage{longtable, tabu} \tabuDisableCommands{\def\autocite{}} ... 3 Here is an hack using latex crossref mechanism to define the labels of the bibliographic references. The trick is to redefine how the bibliography is handled (using the facilities provided by biblatex). To redefine it we use enumitem that allows us to specify how the labels and the references are formatted. We also have to redefine \cite accordingly. ... 3 Well, I changed your given MWE a little bit and added with package filecontents your given bib entry to the MWE to have all things together. You use \addbibresource{\jobname.bib} in your MWE, but you should use simple \addbibresource{\jobname}. EDIT: To clarify this: I'm using current MiKTeX 2.9 and I get an error message when using .bib here. The ... 0 Instead of biblatex use directly natbib (e.g. with apalike style), it does this automatically. And it does not matter if the 2nd and 3rd authors are in different order. It has also a lot of formating possibilities, see the manual The test.bib looks like this (I cut it a bit just for saving space): @Article{Saffran1996a, Title = ... 0 It is very possible that not be the best form, but this works in the form that the questions suggests. The bibentries are generics. The code is comments. Read it, please. MWE: \documentclass[a4paper,titlepage,10pt,twoside,openright]{report} \begin{filecontents}{IEEEexample.bib} @inBook{Wolff1962, Title = {Philosophia prima sive ... 0 Three mistakes!. The \printbibliography option (key) type is not to filter the entries by field type, is to print the bibitem by entry names. it means, @book, @article, etc... (without @). The example bibentry is a @article if it is changed to @techreport it works if the type option (\printbibliography) is report. For filter by field (different to ... 1 biblatex has the built-in standard macro shorthandintro that can do this. In the .bib file one will then add the shorthand field and give the short citation name there, like this @article{jd14, author = {Doe, J. and Smith, J. and Bar, F.}, title = {Some title}, journal = {Some journal}, year = {2014}, shorthand = {JD14}, } The only thing ... 0 Go to Settings, Configure Kile. In the left panel, select Tools and Build. Press the "New..." button. Follow the next image instructions: Now, from "Select a tool" listbox, select QuickBuild. Perform the necessary changes to have the same that the illustration: Press OK and compile your file/project :) Reference: Florian Schöngaßner webpage. 2 This trouble happens with all themes that use the infolines outer theme (i.e AnnArbor, Boadilla, CambridgeUS, EastLansing and Madrid) because the left margin is very small. This makes the label is hidden. A possible solution (with unknown effects) is extend the margin a bit with \setbeamersize{text margin left=2.5em} Try \documentclass{beamer} ... 1 It is described clearly in the manual of natbib on page 3. Use command: \defcitealias{nbren12}{NB12} After that, in addition to classic citing \citet{} or \citep{}, you can also use: \citetalias{nbren12} % or \citepalias{nbren12} Example In example.tex I have: \documentclass{article} \usepackage{natbib} \bibliographystyle{apalike} \begin{document} ... 2 I found this command: \renewcommand*{\bibfont}{\small} and work well. (first question). For the second question, for now I used to break the ISBN with line (XXX-X-XXXX-XXXX-X, standard 13-number ISBN code). But I'm searching for a more general rule to avoid overfull in bibliography (for example, when I tried to use small font, a DOI URL exceeded the cage ... 0 For the first question, in the bibliography section you can change the main font size, just as you do it in some particular section. For example: \bibliographystyle{Users/Daniele/Thesis/plainnat.bst} % {\small \bibliography{Users/Daniele/Thesis/bibliography}} There are plenty font sizes: \tiny, \scriptsize, \footnotesize, \small, \normalsize, \large, ... 4 In the case outlined biblatex uses a macro called \blx@usqcheck which it uses for US-style quotations with 'moving' punctuation. This checks ahead for punctuation, spaces and so on but also includes a check: \if\noexpand\@let@token\relax \blx@usqcheck@i\blx@tempb \fi where the \blx@usqcheck@i\blx@tempb does not insert any closing quote mark but saves if ... Top 50 recent answers are included
2014-10-25 21:26:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8802549839019775, "perplexity": 6072.8803320352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119650516.39/warc/CC-MAIN-20141024030050-00147-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.esaral.com/q/find-the-equation-of-all-lines-having-slope-2-which-are-tangents-to-the-curve-16257
# Find the equation of all lines having slope 2 which are tangents to the curve Question: Find the equation of all lines having slope 2 which are tangents to the curve $y=\frac{1}{x-3}, x \neq 3$. Solution: The equation of the given curve is $y=\frac{1}{x-3}, x \neq 3$. The slope of the tangent to the given curve at any point (xy) is given by, $\frac{d y}{d x}=\frac{-1}{(x-3)^{2}}$ If the slope of the tangent is 2, then we have: $\frac{-1}{(x-3)^{2}}=2$ $\Rightarrow 2(x-3)^{2}=-1$ $\Rightarrow(x-3)^{2}=\frac{-1}{2}$ This is not possible since the L.H.S. is positive while the R.H.S. is negative. Hence, there is no tangent to the given curve having slope 2 .
2023-03-21 14:56:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9212856292724609, "perplexity": 158.23434949406632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00490.warc.gz"}
https://mathoverflow.net/questions/306469/symmetric-orthogonal-matrices-with-constant-diagonal-entries
# Symmetric orthogonal matrices with constant diagonal entries This question is a follow-up to the previous question on symmetric matrices. Thanks to the responses by Christian Remling and Geoff Robinson to that question, the problem now becomes much more specific, as follows. Suppose that a symmetric matrix $M\in\R^{n\times n}$ is orthogonal or, equivalently, satisfies the condition $M^2=I_n$, where $I_n$ is the $n\times n$ identity matrix. Suppose also that all the diagonal entries of $M$ are equal to one another. Is it then true that $M$ is of the form $aI_n+b\,\de\de^T$, where $\de$ belongs to the set (say $\De_n$) of all $n\times 1$ column matrices with all entries from the set $\{-1,1\}$; $a\in\{-1,1\}$; and $b\in\{0,-2a/n\}$? This is true for $n\in\{2,3\}$. Comment 1. Let $\mathcal M_n$ denote the set of all matrices $M$ satisfying the stated conditions, that is, the set of all symmetric orthogonal matrices $M\in\R^{n\times n}$ with constant diagonal entries. The actual problem here is to show that for each $M\in\mathcal M_n$ all the off-diagonal entries $M_{ij}$ of $M$ with $i\ne j$ are of the form $c\de_i\de_j$ for some real $c$ and some $\de\in\De_n$; it is then easy to specify the appropriate $a$ and $b$, given in the above question. So, it is enough to show that $$\prod_{\de\in\De_n}\Big(\sum_{1\le i<j\le n}\sum_{1\le k<\ell\le n}(M_{ij}\de_i\de_j-M_{k\ell}\de_k\de_\ell)^2\Big)=0,$$ which is how the cases the cases of $n\in\{2,3\}$ were verified. Comment 2. More generally, even without the condition on the diagonal entries of $M$, it is enough to show that $M-aI_n$ is of rank $1$ for some real $a$. • what is an $n\times n$ column matrix? – Abdelmalek Abdesselam Jul 20 '18 at 15:35 • @AbdelmalekAbdesselam : Thank you for spotting the typo. It is now corrected. – Iosif Pinelis Jul 20 '18 at 15:36 No. A symmetric $M$ will satisfy $M^2=1$ if and only if the spectrum is contained in $\pm 1$, which is equivalent to $M=P-(1-P)=2P-1$ for some orthogonal projection $P$. Now you're asking if the extra condition that the diagonal is constant will give $P$ rank $1$ or $n-1$. It's clear that this won't follow because we have such examples with $\textrm{rank }P=1$ for $n=2$ and can just take orthogonal sums of those for larger (even) $n$'s. • So, the cases of $n\in\{2,3\}$ are the only ones when the answer is yes. Turning to orthoprojectors is a nice idea. – Iosif Pinelis Jul 20 '18 at 17:06
2019-10-24 06:02:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9574108123779297, "perplexity": 95.00764594793057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987841291.79/warc/CC-MAIN-20191024040131-20191024063631-00424.warc.gz"}
https://brilliant.org/discussions/thread/confused-polynomial/
× # Confused Polynomial p(x) is a polynomial of degree 3 with p(1)=2, p(2)=3, p(3)=4, and p(4)=6. One of the factor of p(x+2) is ... The answer is in the form of (x+a) with a is integer number Note by Dina Andini Sri Hardina 3 years ago Sort by: If $$p(x)$$ is cubic, we can generalize it as $$p(x) = ax^3 + bx^2 + cx + d$$. So, we can plug in our four data points to get four equations in four variables. Namely, $$a + b + c + d = 2$$ $$8a + 4b + 2c + d = 3$$ $$27a + 9b + 3c + d = 4$$ $$64a + 16b + 4c + d = 6$$ We could solve these linear equations a whole host of ways. One of the easiest is to make this into a matrix and get it into reduced row eschelon form. Here's the Wolfram Alpha input for that: http://www.wolframalpha.com/input/?i=rref%28%5B1%2C+1%2C+1%2C+1%2C+2%5D%3B%5B8%2C+4%2C+2%2C+1%2C+3%5D%3B%5B27%2C+9%2C+3%2C+1%2C+4%5D%3B%5B64%2C+16%2C+4%2C+1%2C+6%5D%29 This gives us solutions of $$a = \frac16, b = -1, c = \frac{17}{6}, d = 0$$ Thus, $$6p(x) = x^3 - 6x^2 + 17x$$ -- note that we can multiply $$p(x)$$ by $$6$$ and still have the same roots. This factors: $$x(x^2 - 6x + 17)$$. Thus, one factor is $$x$$, and that is indeed the only integer factor. · 3 years ago That is the approach that i was trying to outline. But your presentation makes it easy to understand. But 6*p(x) = x^3+5x+18 with x+2 as one of the factors · 3 years ago We have 4 given values of the coefficients and 4 unknowns (the coefficients a, b, c and d). One could then use formal methods such as matrix inversion to get the value of the coefficients (a=1/6, b = -1, c=17/6 and d =0) Then p(x+2) = (x^2-5x+18)/6 which has a factor of (x+2). Alternatively, one could just subtract the equation for p(1) from that for p(2) to remove d and get an equation in 3 variables. p(1) = a + b + c + d = 2 p(2) = 8a + 4b + 2c + d = 3 p(3) = 27a + 9b + 3c + d = 4 p(4) = 64a + 16b + 4c + d = 6 (2)-(1) gives 7a + 3b + c = 1 (5) (3)-(2) gives 19a + 5b + c = 1 (6) (6) -(2) gives 12a + 2b = 0 2b = -12a b = -6a From (5), 7a -18a + c = 1 c = 11a+1 a + b + c + d = 2 a -6a + 11a + 1 + d = 2 6a + 1 + d = 2 d = 1-6a 64a + 16b + 4c + d = 6 64a -96a + 44a + 4 + 1-6a = 6 6a+5=6 6a=1 a = 1/6 b = -1 c = 11/6 + 1 1/6x^3 -x^2 + 17/6x x^3-6x^2+17x (x+2)^3 -6(x+2)^2 + 17(x+2) = x^3 + 8 + 6x(x+2) - 6(x^2+4x+4) + 17x + 34 = x^3+8+6x^2+12x - 6x^2-24x - 24 + 17x + 34 x^3 + 5x + 18 · 3 years ago Unfortunately, my solution not been presented / displayed as i would have liked and possibly, it looks confusing · 3 years ago i've tried this way but i already gave up · 3 years ago Hi Dina! For problems like these consider a polynomial $$H(x)=P(x)-(x+1)$$, since $$deg(P(x))=3$$, so $$deg(H(x))=3$$. Plugging in $$x=1,2,3$$ in $$H$$, gives $$0$$ each time. Since nothing is known about the leading co-efficient of $$P$$, so let it be a real number $$c$$. So, $H(x)=c(x-1)(x-2)(x-3)$, giving $P(x)-(x+1)=c(x-1)(x-2)(x-3)$, now plug in $$x=4$$ in the above equation and using the fact that $$P(4)=6$$, $$c=\frac{1}{6}$$. So $P(x)=\frac{1}{6}(x-1)(x-2)(x-3)+(x+1)$. So $P(x+2)=\frac{x^{3}-x+6x+18}{6}$. You can now find the factor.................I hope this has helped you. · 3 years ago but why do u use h(x)=p(x)-(x+1)? · 3 years ago I took this polynomial because of the values given. · 3 years ago very very help jit...anyway thanks:D · 3 years ago Sry...but what we have to calculate in the question? · 3 years ago One of the factor of p(x+2) in the form x+a · 3 years ago Jit has done it beautifully. I can also say that u should try to observe a pattern in the values given and then try to define a function. · 3 years ago Thanks for the compliment mate, yeah observing the pattern is the key to such problems. · 3 years ago
2017-03-29 15:18:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7505490183830261, "perplexity": 526.1628521126845}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190753.92/warc/CC-MAIN-20170322212950-00141-ip-10-233-31-227.ec2.internal.warc.gz"}
http://angrystatistician.blogspot.co.uk/2017/04/why-does-kaggle-use-log-loss.html
### Why does Kaggle use Log-loss? If you're not familiar with Kaggle, it's an organization dedicated to data science competitions to both provide ways for companies to potentially do analytics at less cost, as well as to identify talented data scientists. Competitions are scored using a variety of functions, and the most common for binary classification tasks with confidence is something called log-loss, which is essentially $$\sum_{i=1}^{n} p_i\cdot\log(p_i)$$, where $$p_i$$ is your model's claimed confidence for test data point $$i$$'s correct label. Why does Kaggle use this scoring function? Here I'll follow Terry Tao's argument. Ideally what we'd like is a scoring function $$f(x)$$ that yields the maximum expected score precisely when the claimed confidence $$x_i$$ in the correct label for $$i$$ is actually what the submitter believes is the true probability (or frequency) of that outcome. This means that we want $L(x)=p\cdot f(x) + (1-p)\cdot f(1-x)$ for fixed $$p$$ to be maximized when $$x=p$$. Differentiating, this means $L'(x) = p\cdot f'(x) - (1-p)\cdot f'(1-x) = 0$ when $$x=p$$, hence $$p\cdot f'(p) = (1-p)\cdot f'(1-p)$$ for all $$p$$. This will be satisfied by any admissible $$f(x)$$ with $$x\cdot f'(x)$$ symmetric around $$x=\frac{1}{2}$$, but if we extend our analysis to multinomial outcomes we get the stronger conclusion that in fact $$x\cdot f'(x) = c_0$$ for some constant $$c_0$$. This in turn implies $$f(x)=c_0\cdot \log(x)+c_1$$. If we want $$f(1/2)=0$$ and $$f(1)=1$$, we end up with $$f(x)={\log}_2(2x)$$ and the expected score is $L(x)=x\cdot {\log}_2(2x) + (1-x)\cdot {\log}_2(2(1-x)).$ 1. Mantab mas/mbak artikelnya (y) jangan lupa kunjungi: http://pdaagar.com/ terimakasih.. 2. E-commerce Solutions: - Graphic Design Services - Shopping Cart Software - Secure eCommerce Hosting - Expert Consultation - Custom Solutions 3. If you need your ex-girlfriend or ex-boyfriend to come crawling back to you on their knees (no matter why you broke up) you have to watch this video right away... (VIDEO) Win your ex back with TEXT messages? ### A Bayes' Solution to Monty Hall For any problem involving conditional probabilities one of your greatest allies is Bayes' Theorem. Bayes' Theorem says that for two events A and B, the probability of A given B is related to the probability of B given A in a specific way. Standard notation: probability of A given B is written $$\Pr(A \mid B)$$ probability of B is written $$\Pr(B)$$ Bayes' Theorem: Using the notation above, Bayes' Theorem can be written: $\Pr(A \mid B) = \frac{\Pr(B \mid A)\times \Pr(A)}{\Pr(B)}$Let's apply Bayes' Theorem to the Monty Hall problem. If you recall, we're told that behind three doors there are two goats and one car, all randomly placed. We initially choose a door, and then Monty, who knows what's behind the doors, always shows us a goat behind one of the remaining doors. He can always do this as there are two goats; if we chose the car initially, Monty picks one of the two doors with a goat behind it at random. Assume we pick Door 1 and then Monty sho… ### What's the Value of a Win? In a previous entry I demonstrated one simple way to estimate an exponent for the Pythagorean win expectation. Another nice consequence of a Pythagorean win expectation formula is that it also makes it simple to estimate the run value of a win in baseball, the point value of a win in basketball, the goal value of a win in hockey etc. Let our Pythagorean win expectation formula be $w=\frac{P^e}{P^e+1},$ where $$w$$ is the win fraction expectation, $$P$$ is runs/allowed (or similar) and $$e$$ is the Pythagorean exponent. How do we get an estimate for the run value of a win? The expected number of games won in a season with $$g$$ games is $W = g\cdot w = g\cdot \frac{P^e}{P^e+1},$ so for one estimate we only need to compute the value of the partial derivative $$\frac{\partial W}{\partial P}$$ at $$P=1$$. Note that $W = g\left( 1-\frac{1}{P^e+1}\right),$ and so $\frac{\partial W}{\partial P} = g\frac{eP^{e-1}}{(P^e+1)^2}$ and it follows \[ \frac{\partial W}{\partial P}(P=1) = … ### Mixed Models in R - Bigger, Faster, Stronger When you start doing more advanced sports analytics you'll eventually starting working with what are known as hierarchical, nested or mixed effects models. These are models that contain both fixed and random effects. There are multiple ways of defining fixed vs random random effects, but one way I find particularly useful is that random effects are being "predicted" rather than "estimated", and this in turn involves some "shrinkage" towards the mean. Here's some R code for NCAA ice hockey power rankings using a nested Poisson model (which can be found in my hockey GitHub repository): model <- gs ~ year+field+d_div+o_div+game_length+(1|offense)+(1|defense)+(1|game_id) fit <- glmer(model, data=g, verbose=TRUE, family=poisson(link=log) ) The fixed effects are year, field (home/away/neutral), d_div (NCAA division of the defense), o_div (NCAA division of the offense) and game_length (number of overtime periods); off…
2017-12-16 05:25:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7495697736740112, "perplexity": 1017.4070571342005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948583808.69/warc/CC-MAIN-20171216045655-20171216071655-00335.warc.gz"}
https://ijnetworktoolcanon.co/canon-ij-network-scanner-tool/
# Canon IJ Network Scanner Tool ## Canon IJ Network Scanner Tool Canon IJ Network Scanner Tool Canon IJ Network Tool The Canon IJ Network Tool is a utility that enables you to display and modify the machine network settings. It is installed when the machine is set up. Canon IJ Network Scanner Tool Important: Do not start up the Canon IJ Network Tool while printing. Do not print when the Canon IJ Network Tool is running Canon IJ Network Tool Download Support for OS Windows and Mac – Canon IJ Network Tool Setup device is a utility that allows you to screen and modify the community settings within the instrument. Canon Network Tools It’s put in in the event the machine is ready Canon IJ Network Scanner Tool However, if the Canon IJ Network Scan Utility just isn’t displayed on the home display screen, pick seek out the charm, then search for “IJ Network Tool.” While on Home Windows Vista/7/XP, you merely require to click Start all programs → Canon Utilities → IJ network tools, then Network tool canon ij network tool free download – Canon IJ Printer Driver Canon iP4200, Canon IJ Printer Driver Canon iP5200, Canon IJ Printer Driver Canon iP6600D, and many more programs … Network scanner Easily scan documents to your Windows computer with the Canon IJ Scan Utility to network connection, set up the network environment from IJ Scan Utility.. What Is Canon IJ Network Tool Canon IJ Network Scan Utility Windows Driver Download. OS: Windows Vista 32bit/64bit, Windows XP SP2/SP3/Windows XP x64 & Windows 2000 File version: 2.5.0 IJ Start Canon TS6220 Setup The Canon PIXMA TS6220 is a portable three-in-one printer with an incorporated flatbed scanner. Made to provide outstanding rate and also effectiveness for a range of residence and also company applications, it has a Canon IJ Network Scanner Tool IJ Network Driver Ver. 2.5.7 / Network Tool Ver. 2.5.7 (Windows 10/8,1/8/Vista/XP/2000 32-64bit) This file is the LAN driver for Canon IJ Network . With this setup, you can print from the Canon IJ Networkprinter that is connected through a network Canon Ij Setup PIXMA iX6820 Canon PIXMA iX6820 Driver Download, Wireless Setup, IJ Setup – Canon Pixma iX6820 is a normal inkjet printer. It collaborates with Windows and Mac OS. Canon IjSetup PIXMA iX6820 This printer is a From the Ver.4.5.0, OS X v10.5.8/10.6.8 will not be supported. Execute the following file to launch Network Tool: /Applications/Canon Utilities/IJ Network Tool/Canon IJ Network Tool.app PIXMA Printer Software. Canon offers a selection of optional software available to our customers to enhance your PIXMA printing experience. Details of each software item and links to download the software are provided on this page. … Canon IJ Network Tool Using the Canon IJ Network Tool, you can install, view or configure the network settings Canon IJ Network Scanner Tool Canon Ij Network Tool that enable you to print and Scan from the wireless Canon IJ Network printer that is connected through a network. … Canon IJ Network ToolCanon IJ Network Tool configuration. Canon IP4970 Driver Download. Canon IP4970 Driver Download Canon IP4970 Driver Download – Known as the best printer discharged by Canon, the Canon Easily scan documents to your Windows computer with the Canon IJ Scan Utility to network connection, set up the network environment from IJ Scan Utility.. Canon IJ Network Tool Windows 10 Canon IJ Network Scan Utility Windows Driver Download. OS: Windows Vista 32bit/64bit, Windows XP SP2/SP3/Windows XP x64 & Windows 2000 File version: 2.5.0 The IJ Network Tool Windows 10 is an application that allows you to easily scan photos, documents, etc. You can complete the scan at the same time only by clicking on the appropriate icon in the IJ ScanUtility main screen Scan using Canon IJ Scan Utility in Canon PIXMA Printer.. IJ Network Tool Canon IJ Setup MG6490 Drivers Support Download for Windows, Mac, and Linux Canon IJ Setup MG6490 Software Driver – Canon PIXMA MG6490 Wireless Color Cloud Printer with Scanner and Copier, The PIXMA MG6490 could be the best Wi-fi Inkjet Picture All-In-One printer offering fantastic image printing functionality and remarkable versatility Canon IJ Network Scanner Tool ### Canon IJ Network Scanner Tool Hyperlink Manual instruction all Canon Printers Canon IJ Network Scanner Tool
2019-06-16 23:00:06
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8931494951248169, "perplexity": 8392.659392099411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998325.55/warc/CC-MAIN-20190616222856-20190617004856-00042.warc.gz"}
http://physics.stackexchange.com/questions/258/what-determines-the-minimum-angle-at-which-a-domino-falls-over/259
What determines the (minimum) angle at which a domino falls over? Dominoes, when placed upright, remain that way. Sometimes, even if you tip them a little bit, they will go back to their upright position. However, if you tip them too far, they will fall over. After trying this with many different sized/shaped dominoes and some textbooks, I've noticed that this angle of "maximum tippage" varies for each one, depending on its dimensions. Taller dominoes seem to have a lower maximum tippage angle. Dominoes with wider bases seem to have a higher maximum tippage angle. What are other factors are involved? Is there a way to compute this maximum tippage angle, given the height of the domino, and the width of its base, and any other factors that might be involved? If so, what is this relationship (mathematically)? (Dominoes start acting weirdly when their weight/density is unevenly distributed, so for this question, assume that dominoes are of constant density) - To make it fall you need a torque. This torque is provided by the weight force acting on the center of mass of the object and by the offset between the center of mass and the edge of the object. Imagine your domino standing upright then tilt it. You are moving the center of mass. When the center of mass (blue) is on the right of the edge (red) then you have a torque, represented by the triangle. The torque is $\tau = m g D$ so to make it fall you need $D$ greater than zero. If the domino (of uniform constant density) as base of width L the center of mass is located at L/2. For an height H, the center of mass is located at height H/2. That his other sketch: Taking this one in the limit case where $D=0$ you obtain the last sketch. Solving the trigonometry you obtain $\alpha = Atan \frac{L/2}{H/2}$. The angle of the domino with respect to the "table" is $90-\alpha$ degrees. - I'm...not exactly sure what your diagram is trying to illustrate, sorry. What is the 90 degree angle representing? Where is $\alpha$, and where does it come from? – Justin L. Nov 5 '10 at 0:45 Now it should be clear and correct (I hope). – Cedric H. Nov 5 '10 at 0:54 How did you draw those pictures, out of interest... – Seamus Nov 5 '10 at 18:08 Am I wrong in thinking that $Atan \frac{L/2}{H/2}$ is the same as $Atan \frac{L}{H}$ ? – Justin L. Nov 5 '10 at 22:12 The direction that the domino falls is determined by the location its center of mass. It falls to the left or right depending on whether the center of mass is to the left or right of the bottom-most edge. If the center of mass is precisely above the edge, then it balances on that edge in an unstable equilibrium. -
2015-12-01 16:53:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8226731419563293, "perplexity": 550.6609044877698}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398468396.75/warc/CC-MAIN-20151124205428-00244-ip-10-71-132-137.ec2.internal.warc.gz"}
https://computergraphics.stackexchange.com/questions/8102/pbr-and-specular-aliasing
# PBR and Specular Aliasing I have been following LearnOpenGL.com's tutorials on PBR. Everything makes sense and I wrote up a shader for my physically based renderer. I noticed that the results look great, however all of my metal objects have such a strong specular on the edges thanks to my Fresnel calculation, and it is producing specular aliasing. See image: I can't seem to figure out why, or how to fix this. I also uploaded a simplified version of my PBR fragment shader. If someone could take a look, I would really appreciate it. Here is the tutorial I am following: https://learnopengl.com/PBR/Lighting It doesn't seem to have any of these specular aliasing issues. Thanks! • Can you create an none textured specular-only and fresnel-only image to help visualise were the problem is ? – PaulHK Sep 26 '18 at 2:12 • This PBR code contains many calculations prone to math errors like division by zero which cause undefined behaviour, i'd check that first and add some margin to prevent this like max(1e-5, something). I actually implemented this tutorial some time ago and did some corrections. – narthex Sep 30 '18 at 18:11 I'll point out that even if you did implement it correctly, at 1 sample per pixel with low roughness materials you'll still see heavy specular aliasing. It's not uncommon to see the firefly pattern on edges on low roughness materials. For reference see the Infiltrator demo without TAA
2019-02-16 14:13:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3988974690437317, "perplexity": 1770.612391296354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480472.38/warc/CC-MAIN-20190216125709-20190216151709-00112.warc.gz"}
http://www.acmerblog.com/POJ-1208-The-Blocks-Problem-blog-317.html
2013 11-09 # The Blocks Problem Many areas of Computer Science use simple, abstract domains for both analytical and empirical studies. For example, an early AI study of planning and robotics (STRIPS) used a block world in which a robot arm performed tasks involving the manipulation of blocks. In this problem you will model a simple block world under certain rules and constraints. Rather than determine how to achieve a specified state, you will “program” a robotic arm to respond to a limited set of commands. The problem is to parse a series of commands that instruct a robot arm in how to manipulate blocks that lie on a flat table. Initially there are n blocks on the table (numbered from 0 to n-1) with block bi adjacent to block bi+1 for all 0 <= i < n-1 as shown in the diagram below: The valid commands for the robot arm that manipulates blocks are: move a onto b where a and b are block numbers, puts block a onto block b after returning any blocks that are stacked on top of blocks a and b to their initial positions. move a over b where a and b are block numbers, puts block a onto the top of the stack containing block b, after returning any blocks that are stacked on top of block a to their initial positions. pile a onto b where a and b are block numbers, moves the pile of blocks consisting of block a, and any blocks that are stacked above block a, onto block b. All blocks on top of block b are moved to their initial positions prior to the pile taking place. The blocks stacked above block a retain their order when moved. pile a over b where a and b are block numbers, puts the pile of blocks consisting of block a, and any blocks that are stacked above block a, onto the top of the stack containing block b. The blocks stacked above block a retain their original order when moved. quit terminates manipulations in the block world. Any command in which a = b or in which a and b are in the same stack of blocks is an illegal command. All illegal commands should be ignored and should have no affect on the configuration of blocks. The input begins with an integer n on a line by itself representing the number of blocks in the block world. You may assume that 0 < n < 25. The number of blocks is followed by a sequence of block commands, one command per line. Your program should process all commands until the quit command is encountered. You may assume that all commands will be of the form specified above. There will be no syntactically incorrect commands. The output should consist of the final state of the blocks world. Each original block position numbered i ( 0 <= i < n where n is the number of blocks) should appear followed immediately by a colon. If there is at least a block on it, the colon must be followed by one space, followed by a list of blocks that appear stacked in that position with each block number separated from other block numbers by a space. Don't put any trailing spaces on a line. There should be one line of output for each block position (i.e., n lines of output where n is the integer on the first line of input). 10 move 9 onto 1 move 8 over 1 move 7 over 1 move 6 over 1 pile 8 over 6 pile 8 over 5 move 2 over 1 move 4 over 9 quit 0: 0 1: 1 9 2 4 2: 3: 3 4: 5: 5 8 7 6 6: 7: 8: 9: /* @author:zeropinzuo */ import java.util.*; public class Main{ static Scanner cin; static MyList list; public static void main(String args[]){ cin = new Scanner(System.in); int n = cin.nextInt(); list = new MyList(); for(int i = 0;i < n;i++) while(true) if(run() == false) break; show(); return; } static boolean run(){ String command = cin.next(); if(command.equals("quit")) return false; Integer a = new Integer(cin.nextInt()); String scommand = cin.next(); Integer b = new Integer(cin.nextInt()); if(command.equals("move")) return move(a,b,scommand); else return pile(a,b,scommand); } static boolean move(Integer a,Integer b,String command){ Stack A = list.contains(a); Stack B = list.contains(b); if(A == B) return true; if(command.equals("over")){ A.removeSingle(a); } else{ Iterator< Integer> iterator = B.getLast(b).iterator(); while(iterator.hasNext()) A.removeSingle(a); } return true; } static boolean pile(Integer a,Integer b,String command){ Stack A = list.contains(a); Stack B = list.contains(b); if(A == B) return true; if(command.equals("over")){ A.removeLast(a).removeSingle(a); } else{ Iterator iterator = B.getLast(b).iterator(); while(iterator.hasNext()) A.removeLast(a).removeSingle(a); } return true; } static void show(){ for(Stack stack:list) stack.show(); } } class Stack{ ArrayList< Integer> contents; Integer n; public Stack(Integer n){ this.n = n; contents = new ArrayList< Integer>(); return; } boolean contains(Integer value){ for(Integer p:contents) if(p.equals(value)) return true; return false; } return this; } } return this; } Stack removeLast(Integer value){ int num = contents.indexOf(value); int size = contents.size(); for(int i = num+1;i < size;i++) contents.remove(num+1); return this; } Stack removeSingle(Integer value){ contents.remove(value); return this; } List< Integer> getLast(Integer value){ int num = contents.indexOf(value); return contents.subList(num+1,contents.size()); } void show(){ System.out.print(n+":"); for(Integer p:contents) System.out.print(" "+p); System.out.println(); } } class MyList extends ArrayList< Stack>{ Stack contains(Integer value){ for(Stack stack:this) if(stack.contains(value)){ return stack; } return null; } } 1. #include <cstdio> #include <algorithm> struct LWPair{ int l,w; }; int main() { //freopen("input.txt","r",stdin); const int MAXSIZE=5000, MAXVAL=10000; LWPair sticks[MAXSIZE]; int store[MAXSIZE]; int ncase, nstick, length,width, tmp, time, i,j; if(scanf("%d",&ncase)!=1) return -1; while(ncase– && scanf("%d",&nstick)==1) { for(i=0;i<nstick;++i) scanf("%d%d",&sticks .l,&sticks .w); std::sort(sticks,sticks+nstick,[](const LWPair &lhs, const LWPair &rhs) { return lhs.l>rhs.l || lhs.l==rhs.l && lhs.w>rhs.w; }); for(time=-1,i=0;i<nstick;++i) { tmp=sticks .w; for(j=time;j>=0 && store >=tmp;–j) ; // search from right to left if(j==time) { store[++time]=tmp; } else { store[j+1]=tmp; } } printf("%dn",time+1); } return 0; } 2. Good task for the group. Hold it up for every yeara??s winner. This is a excellent oppotunity for a lot more enhancement. Indeed, obtaining far better and much better is constantly the crucial. Just like my pal suggests on the truth about ab muscles, he just keeps obtaining much better.
2017-01-17 21:16:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31748166680336, "perplexity": 4801.696947858106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00096-ip-10-171-10-70.ec2.internal.warc.gz"}
http://blog.vmchale.com/article/termination-checking
The totient function is defined for positive integers as: $$\displaystyle \varphi(n) = \prod_{p | n} \left(1 - \frac{1}{p}\right)$$ where $$p$$ is prime. This leads to an algorithm for computing the totient function, which we can express thusly in Haskell: import Control.Monad (join)totient :: Int -> Int totient n = foldr (\factor acc -> acc div factor * (factor - 1)) n (uniquePrimeFactors n)uniquePrimeFactors :: Int -> [Int] uniquePrimeFactors = join go where go 1 _ = [] go i n | n rem i == 0 && isPrime i = i : go (i-1) (reduce i n) go i n = go (i-1) n reduce i n | n rem i == 0 = reduce i (n quot i) reduce _ n = nisPrime :: Int -> Bool isPrime 1 = False isPrime x = all ((/=0) . (x rem)) [2..up] where up = floor (sqrt (fromIntegral x :: Double)) It is nontrivial to prove that this terminates; this depends on the fact that a number has a unique prime factorization, and that all prime numbers are greater than or equal to 2. None of this is particularly advanced mathematics, but it makes a pretty strong case against the current approach to termination checking used by Idris (for instance). Moreover, this should be read as evidence that Turing was morally right, as know he was technically right: it is impossible to check if a program terminates in general, but most of the functions where it is provably impossible to know when terminates are pathological. This case demonstrates that merely finding a compiler algorithm that works on common data strucutres (that is, integers) and functions is difficult.
2018-12-12 23:24:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5627161264419556, "perplexity": 1589.867880497547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824180.12/warc/CC-MAIN-20181212225044-20181213010544-00117.warc.gz"}
https://www.physicsforums.com/threads/group-structure.329177/
# Group Structure 1. Aug 5, 2009 ### jeff1evesque Group properties: 1. $$\forall a, b, c \in G, (a * b) * c = a * (b * c).$$ (associativity) 2. $$\exists e \in G$$ such that $$\forall x \in G, e * x = x * e = x.$$ (identity) 3. $$\forall a \in G, \exists a' \in G$$ such that, $$a * a' = a' * a = e$$ (inverse) Instruction: Determine whether the binary operation $$*$$ gives a group structure on the given set. Problem: Let $$*$$ be defined on Q by letting $$a * b = ab$$. Thought process: To begin, one has to understand the three properties of being a group- which is defined above. Can someone help me go through the process of testing the three properties from above to our specified problem? Thanks, JL Last edited: Aug 5, 2009 2. Aug 5, 2009 ### aostraff I'm going to assume your Q represents the field of rational numbers. The binary operation you have seems to be multiplication of rational numbers. So to test the 3 properties, you use arbitrary rational numbers and show that the operation is associative, there exists another element that is the inverse of another element, and that there exists an element that acts like the identity element. This shouldn't be too hard so I will stop with this. If you are still stuck, just let the arbitrary rational numbers be of the form $$\frac{a}{b}$$ where $$b \neq 0$$ and $$a,b \in \bbmath{Z}$$. 3. Aug 5, 2009 ### jeff1evesque Question 1: Can I let "a, b, c" be rational numbers in the set G- as done in my proof below? Question 2: So as you said, this problem was fairly straightfoward. I did all three axiom test, and they all passed. But according to the solution, this problem fails at the third criterion. But according to my calculations, $$a * a' = aa' = e = a'a = a' * a,$$ which shows (along with the first two axiomatic criterions) that our given binary operation $$*$$ produces a group structure on our given set. Am I correct? Question 3: Also, to show a given binary operation is a group structure we have to prove the three axiomatic conditions (as stated above). However, the binary operation must fulfill closure. How can I show our given binary operation fulfills "closure"? Thanks again, JL Last edited: Aug 5, 2009 4. Aug 5, 2009 ### aostraff Ah, my bad. I forgot that the set of rational numbers under multiplication is only a group if 0 isn't included. So Q isn't a group under multiplication but Q* is (Q* is convention for the rationals without 0). This is because 0 has no inverse in the rational numbers. Also to be a bit more complete, a group structure has to fulfill 4 axioms. 3 of them have already been mentioned in the first post and closure under the operation. Closure is just a way of saying that the performing the binary operation on 2 elements will not make another element not in the group. Let's use a concrete example such as Q*. Let $$\frac{a}{b}, \frac{c}{d}$$ be arbitrary elements in Q*. To prove closure, you want the operation to $$\frac{a}{b} \cdot \frac{c}{d}$$ still be an element of Q* where it should be of the form $$\frac{x}{y}$$. 5. Aug 5, 2009 ### jeff1evesque Is the logic in the quote above, along with the proof below reasonable? Proof (Closure): Let $$a = \frac{x}{y}, b = \frac{m}{n}$$ (such that $$y \neq 0, n \neq 0$$ since a, b are defined as rational elements). Therefore, $$a * b = \frac{x}{y} \frac{m}{n}$$ Since $$\frac{x}{y} \frac{m}{n}$$ is rational, it follows that our binary operation fulfills closure. Thanks, JL 6. Aug 5, 2009 ### Phrak Zero is a rational number. if a=0, what does a' equal? Question 3: First, can you write-out the symbolic statement for closure? 7. Aug 5, 2009 ### jeff1evesque $$a' = \frac{1}{0}$$ which is irrational, I'm guessing that means my proof is incorrect. What do mean, is the proof I wrote of closure insufficient? Thanks, JL Last edited: Aug 5, 2009 8. Aug 5, 2009 ### aostraff The thing is that Q under the operation of multiplication is not a group, so you won't be able to prove it is a group. But consider Q* (rationals without 0). Without showing fractions explicitly, I'm not sure how you can show rational numbers are closed under multiplication. 9. Aug 6, 2009 ### jeff1evesque Got it, since $$a' = \frac{1}{0},$$ then the third group axiom fails- which shows our particular binary operator does not fulfill the conditions of a group structure (even though the first two succeeds). Though our operation is not a group, a test of closure shows us our particular structure is closed on rationals so long as the denominator of the rational values are not zero (as shown in my proof). This can be shown in two cases: Case I: Assume that the numerator for either rational number is zero. Then if we multiply both rational numbers together, we get zero- which is a rational number. Case II: Assume neither values have a numerator of zero. Since the denominator for each variable is not zero (hence rational), then performing our binary operation $$*,$$ we get a real number. Since our result is a real number, it can be expressed as a rational. Conclusion: Thus, assuming our variables does not have a denominator of zero, we have shown that the binary operation always yields a real number, and all real numbers can be expressed as rational numbers. 10. Aug 6, 2009 ### Phrak It looks good to me JL. I was responding to your previous post. The exception condition, that x and y are not equal to zero, is not necessary to prove closure of the rational numbers under multiplication--all rational numbers have a non-zero denominator by definition. Formally, the closure property look something like this. $$(\forall a, b \in G) \wedge (\exists \; c \in G) \wedge (c = a * b) \ .$$ Last edited: Aug 6, 2009 11. Aug 6, 2009 ### Landau You shouldn't use all these $$\wedge$$'s. "$$(\forall a,b\in G)$$" is not a statement which can be true or false. Closure is just $$(\forall a,b\in G)(a*b\in G)$$ 12. Aug 6, 2009 ### Phrak But the statement $$\forall a,b\in G$$ can be assigned a truth value. However, I'm not familiar with your notation. What does one parenthetic expression followed directly with another parenthetic expression mean? 13. Aug 7, 2009 ### Landau How? What would it mean for this to be true or false? Simple example: "for every integer, its square is also an integer" is a true statement. Notation: $$\forall z\in\mathbb{Z}:z^2\in\mathbb{Z}$$ When talking about mathematical induction, we often have a statement about natural numbers P(n), en we want to prove $$\forall n\in\mathbb{N}:P(n)$$. You can't just use a universal quantifier saying 'for all a and b in G', you have to specify which statement applies to all such a and b (namely, that their product is also in G). The parentheses just mean that the symbols inside them belong together. But in this case you can also just say $$\forall a,b\in G:a*b\in G$$. Last edited: Aug 7, 2009 14. Aug 9, 2009 ### Phrak What function does : serve? 15. Aug 9, 2009 ### Elucidus (1) The following two logical sentences $(\forall x \in X)P(x) \text{ and }\forall x \in X:P(x)$ both translate as "For all elements x in the set X, the proposition P is true for x." or more simply: "For all x in X, P(x) is true." When the quatifier is "there exists" the colon is typically read as "such that" or "where." (2) The statement that all real numbers can be expressed as rational numbers is false. The proof that $\sqrt{2}$ is irrational (yet real) is a classic counter-example. I have not yet seen a proof that the rational numbers are closed under multiplication that doesn't involve letting a/b and c/d be two rational numbers (b, d =/= 0) and showing that the product ac/bd is also rational (i.e. that both ac and bd are integers and bd =/= 0). The critical principle here is the Zero Product Property of Real Numbers: If xy = 0 then either x = 0 or y = 0. The comment that Q is not a multiplicative group because 0 has not inverse is correct and this subtlety is the reason why that example appears in most elementary group theory exercises. --Elucidus 16. Aug 9, 2009 ### Elucidus Quick comment: The expression $\frac{1}{0}$ is not irrational. Unfortunatley, it isn't even a number. It is not defined. Moreover, it is not possible to give it numeric meaning without breaking some other (more important or useful) law of real numbers. I have had many calculus students who have claimed it equals infinity, which is equally false. I make this comment because it is an error (or similar to errors) that I see materialize too frequently and makes me suspect that some of my students have critical gaps in their pre-calculus background. I am not saying that's the case for jeff1levesque but I mention it for the public benefit. --Elucidus 17. Aug 10, 2009 ### aostraff Elucidis, how do you know it is absolutely impossible to make division by 0 be defined? 18. Aug 10, 2009 ### Elucidus Firstly, the expression $\frac{1}{0}$ is the reciprocal of 0, i.e. that number which when multiplied by 0 gives the mulitplicative identity 1: that is $\frac{1}{0}$ * 0 = 1. But it is well established that 0 times any number is 0. From this we conclude that 0 = 1 and consequently every number = 0. This creates a trivial algebra, which is of little practical use (it still is an algebra though - just not interesting). If you assign it some numberic meaning, say k, then $k^2=\frac{1}{0}\cdot\frac{1}{0} = \frac{1}{0} = k$ indicating that k2 = k or equivalently k = 0 or 1. Both of these cases can be shown to be degenerate (i.e. concluding that 0 = 1) in a similar manner as above. Hence we cannot give any numberic meaning to the expression $\frac{1}{0}$ without collapsing the algebra to the trivial case. --Elucidus 19. Aug 11, 2009 ### Phrak Unless I've missed something critical, "A such that B", or "A where B" could also be written "A in addtion to B", or simply "A and B" or even "B where A". Logically, they're all equivalent, though some are easier to read than others. 20. Aug 12, 2009 ### Elucidus This is true if A and B were both propositions. However $\exists x \in X$ is not a proposition - it's a quantifier, so $$\forall x \in \mathbb{R},\exists y \in \mathbb{R}:x+y=0$$​ is interpreted as "For all x in R, there exists y in R such that x + y = 0." --Elucidus
2017-08-16 14:04:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.825299084186554, "perplexity": 606.7728151426667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886101966.48/warc/CC-MAIN-20170816125013-20170816145013-00415.warc.gz"}
https://www.physicsforums.com/threads/time-and-motion.160021/
# Time and motion Could a String be thought of, say a duration of time? I would think that as a single dimensional object it just might fit. ## Answers and Replies Could a String be thought of, say a duration of time? I would think that as a single dimensional object it just might fit. What you have in mind is similar to the notion of a 'string bit' moving along a string. If you'd like to learn more, check out the Sept. 2006 paper: http://arxiv.org/abs/hep-th/0609103" [Broken]. Last edited by a moderator: Thank you, Kneemo. Could duration of time be thought of as this string? With two bits of motion that we think of as particles, one expanding and the other contracting, I would think it fits as one dimensional duration. Can we separate time and motion? With this example no, but we can say that time, or the potential for motion, is expanding and or contracting ahead of the bit(s) of motion(s) we call particle(s). Is this a fair statement? Last edited: josh1 Hi petm1, Im not sure what your after here. Are you looking for natural phenonmena that could serve as some sort of clock? For years I thought of time as a tool for the measurement of motion, and therefore space. Now I think that time as the forth dimension contains not only all motion but all potential for motion. Can we think of time as a potential for motion? Like the string from the paper that Kneemo showed us talks about (using that string as a temporal object), and the motion as being just a small part of that one dimensional duration. If time is potential motion and we can picture it as a string could it be broken into three separate dimensions one each for the other three dimensions? By adding these strings of potential, could different combinations make the idea of a point charge easier to understand? I was thinking of time as a one dimensional object because all motion was contained in it but now I am wondering if the bits on time strings of potential movement have to be combined before we would be able to detect them as a one dimensional motion. I guess what I am asking is if we have three dimensions for the meter, why don't we have three dimensions for a second? As for josh1's question, no I am not trying to reinvent the clock. Last edited: Chronos Gold Member The concept ot motion makes no sense without time. For example, velocity equals distance / time. Since 'space' is merely the relationship between objects traveling along a time line, it is pointless to attempt to separate the two concepts. The concept ot motion makes no sense without time. For example, velocity equals distance / time. Since 'space' is merely the relationship between objects traveling along a time line, it is pointless to attempt to separate the two concepts The concept of motion without time does not make sense to me either, but the concept of time with out motion does seem reasonable to me. It could explain how the universe could be expanding faster than light, dark energy could be explained by the difference between motion and its limit to c, and time's expanding potential for motion that we cannot "see" because motion has not caught up with it yet. Time as a forth dimension containing all motion has to be an object such as our visible universe and at the same time it has to contain all the potential motion that our universe is expanding into. Yet time as a one dimensional duration has to have direction in this case two directions the potential to expand and the potential to contract with motion between. Three separate dimensions of time with only the motion portions "visible". The concept ot motion makes no sense without time. For example, velocity equals distance / time. Since 'space' is merely the relationship between objects traveling along a time line, it is pointless to attempt to separate the two concepts. Not only the notions of time and motion are inseparable but also speed (velocity). Moreover, since the notion of speed implies both time and motion, it is the former that might be considered as a fundamental entity rather than the latters. In principle, it is known that time could be excluded from the equations of motion (unlike velocity). Chronos Gold Member How is that? Motion is merely a different way of describing velocity. I can imagine an unmoving space, devoid of time, but what motion could take place in time devoid of circumstance? So is space more fundamental than time? How is that? Motion is merely a different way of describing velocity. Exactly! For simplicity one can imaging a two-body system with a potential well (say, an oscillator). It can be described using only space coordinates, velocity and acceleration. I can imagine an unmoving space, devoid of time, but what motion could take place in time devoid of circumstance? So is space more fundamental than time? To elaborate, it is possible to imagine a world which has spatial relationships but has no moving parts. There would be no motion, no change, no evolution of time in such a world. On the other hand, if I try to imagine a time world, in which there are no spatial relationships, no objects, no place in which any change could occur, what am I imagining? I am posting the above simultaneously in one of the philosopy forums. So, a priori, I conclude that space may exist without the need for any presense of time, but that time cannot exist without space. For this reason I propose that an operational definition of time can be made from spatial concepts alone. However no operational definition of space is possible from temporal concepts alone. Therefore, a priori, space is fundamental and time is derived from space. I will post this simultaneously in one of the philosophy forums, but it was generated here. Last edited: Chronos Gold Member I think space and time are inseparable. Neither concept makes sense without the other component. The concept of space is only meaninful in terms of the time required to go from point A to point B: d = vt. If d or t is set to zero [or infinity], the other quantity cannot be quantified. 1d, 2d, 3d.... 4d.. ZapperZ Staff Emeritus It is possible to imagine a world which has spatial relationships but has no moving parts. There would be no motion, no change, no evolution of time in such a world. Er.. how so? How exactly are you able to "detect" space? How would you know that, for example, Point A is closer to you than Point B? And just in case you plan on bringing out a very long measuring tape to answer this question, consider what is necessary for you (i) calibrate that measuring tape and (ii) to actually observe various part of that measuring tape. Of course, if you claim to have the ability to observe all parts of the universe instantaneously, then there's no reason why one can't answer your question by simply making things up as one goes along without regards to any physical laws..... Zz. ZapperZ Staff Emeritus I will post this simultaneously in one of the philosophy forums, but it was generated here. Why are you cross-posting this when that has been explicitly prohibited in the Guidelines? The other thread has been merged to this one. Please don't do this again. Zz. Last edited: Thanks, Zz Well, I did say imagine, didn't I? But I think you are right in physical terms. My little mental exercise does require an observer who can move from place to place, and must require time to do so. So I have merely moved the property of time into the obervation apparatus, not really removing it from the universe. That was part of the reason I chose to use the word "world" rather than using the word "universe." I wouldn't try to measure an imaginary world, any more than I would try to calculate the unique value of an imaginary number. I have learned this week from Wiki that the modern view of science, due to Kant and Liebnitz, is that time, space, and mass are fundamental units, which remain undefined. They are not to be thought of, as Newton did, as a kind of container in which objects float about, but as a part of the process of observing events. Not as things in themselves, but as part of the conceptual apparatus. So as long as we are making up time and space without regard to any physical laws, why not make up an imaginary world for them to play in? And my point is that I can construct a concept of a world containing objects that do not experience time, but I am unable to construct a concept of a world without space. I can even make a picture of it. Any common photograph will do. Then there is the idea of relitive events. I have just come across this so I am prone to mis-speak, but there is a category of events called light-like, in which the space-time interval is said to be zero. Nevertheless, there is a spacelike separation to such events, not so? So the timelike separation must be the zero factor. This interpretation is supported, I think, by the notion of time dilation. When an event occurs very near the speed of light relitive to the observer, it experiences time dilated until it nearly passes not at all. If it is at the speed of light relitive to the observor, as in a light-like event, or just the radiation of energy in free space, then it experience zero time in the observors space. So is light real or imaginary? We do consider it a physical quantity, and we need it for every kind of measurment I can think of. I am honored that you have given my little thought consideration, Zz. It seems not unlikely that I have made a mishmash of physics, but I hope you can apply your critical skills to help me slice away everything that is not necessary or sufficient, so I can see for myself if anything "discreet" is left over. Honest thanks, Richard Why are you cross-posting this when that has been explicitly prohibited in the Guidelines? The other thread has been merged to this one. Please don't do this again. Zz. Sorry, I only wanted to try my idea out on the philosophers, who probably don't read much in this forum. And it wasn't the same exact post, I made some small changes to address the academics in the other forum. Zz, do you really expect that everyone who visits physicsforums.com reads every post in every forum? I wasn't spamming, I was merely directing my question to a group who would not see it otherwise. Nevertheless, I do want to follow the guidelines here, as I am a guest in your house. Is there some acceptable way to ask for evaluation from other forum users? Sincerely, Thanks R. ZapperZ Staff Emeritus Thanks, Zz Well, I did say imagine, didn't I? But I think you are right in physical terms. My little mental exercise does require an observer who can move from place to place, and must require time to do so. So I have merely moved the property of time into the obervation apparatus, not really removing it from the universe. That was part of the reason I chose to use the word "world" rather than using the word "universe." I wouldn't try to measure an imaginary world, any more than I would try to calculate the unique value of an imaginary number. I have learned this week from Wiki that the modern view of science, due to Kant and Liebnitz, is that time, space, and mass are fundamental units, which remain undefined. They are not to be thought of, as Newton did, as a kind of container in which objects float about, but as a part of the process of observing events. Not as things in themselves, but as part of the conceptual apparatus. So as long as we are making up time and space without regard to any physical laws, why not make up an imaginary world for them to play in? And my point is that I can construct a concept of a world containing objects that do not experience time, but I am unable to construct a concept of a world without space. I can even make a picture of it. Any common photograph will do. Then there is the idea of relitive events. I have just come across this so I am prone to mis-speak, but there is a category of events called light-like, in which the space-time interval is said to be zero. Nevertheless, there is a spacelike separation to such events, not so? So the timelike separation must be the zero factor. This interpretation is supported, I think, by the notion of time dilation. When an event occurs very near the speed of light relitive to the observer, it experiences time dilated until it nearly passes not at all. If it is at the speed of light relitive to the observor, as in a light-like event, or just the radiation of energy in free space, then it experience zero time in the observors space. So is light real or imaginary? We do consider it a physical quantity, and we need it for every kind of measurment I can think of. I am honored that you have given my little thought consideration, Zz. It seems not unlikely that I have made a mishmash of physics, but I hope you can apply your critical skills to help me slice away everything that is not necessary or sufficient, so I can see for myself if anything "discreet" is left over. Honest thanks, Richard I'm sorry, but if you want us to simply drop all of physics and just make things up, you're in the wrong forum. You want a science fiction forum. And since you stated that you do abide by the rules of this forum, pay attention to the part on overly-speculative post. Zz. ok. goodbye. I think space and time are inseparable. Neither concept makes sense without the other component. The concept of space is only meaninful in terms of the time required to go from point A to point B: d = vt. If d or t is set to zero [or infinity], the other quantity cannot be quantified. The notions of space, time and motion are mostly discussed in philosophy where they are regarded as attributes of matter (i.e., they have no meaning separated from one another and, of course, from matter). In physics, however, sometimes they are used separately (for convenience). For example, take a system with a ball rolling back and forth in a potential well. This system (oscillator) can be described fully in terms of distance (spatial coordinate) and speed (as a function of spatial coordinate) alone, with the time parameter excluded. Then could it be so that the notion of time have had arised in the human mind as a reflection of periodic motions of matter around us? ZapperZ Staff Emeritus The notions of space, time and motion are mostly discussed in philosophy where they are regarded as attributes of matter (i.e., they have no meaning separated from one another and, of course, from matter). In physics, however, sometimes they are used separately (for convenience). For example, take a system with a ball rolling back and forth in a potential well. This system (oscillator) can be described fully in terms of distance (spatial coordinate) and speed (as a function of spatial coordinate) alone, with the time parameter excluded. Then could it be so that the notion of time have had arised in the human mind as a reflection of periodic motions of matter around us? Aren't you forgetting that "speed" is the time rate of change of displacement? It appears to me that time is an implicit part of such a dynamics. And as far as time being arising "in the human mind as a reflection of periodic motion", would you care to explain why time is on equal footing with space in SR/GR, and in elementary particle physics as in the CPT symmetry? Zz. The notions of space, time and motion are mostly discussed in philosophy where they are regarded as attributes of matter (i.e., they have no meaning separated from one another and, of course, from matter). In physics, however, sometimes they are used separately (for convenience). For example, take a system with a ball rolling back and forth in a potential well. This system (oscillator) can be described fully in terms of distance (spatial coordinate) and speed (as a function of spatial coordinate) alone, with the time parameter excluded. Then could it be so that the notion of time have had arised in the human mind as a reflection of periodic motions of matter around us? I think you need to distinguish two things : (a) the measurement of eigentime (b) time as a unmeasurable hidden variable (God's'' clock if you want to). You cannot escape from using (b), any dynamical equation requires it. Now regarding (a), I agree that eigentime is the ticking of an internal clock of a particle (periodic motion) : there exist plenty such models in the literature for spinning particles (zitterbewegung, simple rotations,...). But this implies that time and space aren't on an equal footing at all (which is kind of logical since we never measure time'', we only count the number of ticks on our wrist watches). Btw : these models are (of course) all relativistically invariant. Time in special relativity is eigentime; Henri Poincare -for example- thought one should always consider time in the sense of (b) and think of eigentime as an auxilliary concept. The problem of relativistic simultaneity only arises when one *identifies* notions (a) and (b), a mistake too often made. Last edited: And as far as time being arising "in the human mind as a reflection of periodic motion", would you care to explain why time is on equal footing with space in SR/GR, and in elementary particle physics as in the CPT symmetry? "Equal footing" must mean the complete identification of space and time, which is not exactly what is happening in SR/GR, QFT, etc (Minkovskian metric is not exactly Euclidean, is it?) I would say more: the fundamental difference between space and time is that the latter must be measured by using a periodic process/motion (a clock). Or can you propose measuring time by something else? I think you need to distinguish two things : (a) the measurement of eigentime (b) time as a unmeasurable hidden variable (God's'' clock if you want to). You cannot escape from using (b), any dynamical equation requires it. This looks like an idealised (theoretical) time unavoidable in mathematical models. But I think we must keep in mind that the mathematical models are part of our language for describing physical reality. A model might reproduce pretty well a physical process but never in full detail; and it would be crazy to identify a mathematical model with physical reality. I Now regarding (a), I agree that eigentime is the ticking of an internal clock of a particle (periodic motion) : there exist plenty such models in the literature for spinning particles (zitterbewegung, simple rotations,...). But this implies that time and space aren't on an equal footing at all (which is kind of logical since we never measure time'', we only count the number of ticks on our wrist watches). Btw : these models are (of course) all relativistically invariant. Time in special relativity is eigentime; Henri Poincare -for example- thought one should always consider time in the sense of (b) and think of eigentime as an auxilliary concept. The problem of relativistic simultaneity only arises when one *identifies* notions (a) and (b), a mistake too often made. I agree completely ZapperZ Staff Emeritus "Equal footing" must mean the complete identification of space and time, which is not exactly what is happening in SR/GR, QFT, etc (Minkovskian metric is not exactly Euclidean, is it?) But that could easily be the "fault" of how we define space and have nothing to do directly with time. So why is time degraded to something lower? I would say more: the fundamental difference between space and time is that the latter must be measured by using a periodic process/motion (a clock). Or can you propose measuring time by something else? Note that the only reason why we use "periodic motion" to define time is because these are well-known time period that we know very well. It has nothing to do with these being fundamental to the quality we call time itself. That's like saying space is nothing more than that piece of bar sitting in a climate control room somewhere. You are confusing the CONCEPT of time with how we QUANTIFY time. You still haven't addressed the fundamental CPT symmetry principle that part of how we describe our world. Why is T as fundamentally important as C and P in here, while you don't think so. Zz. This looks like an idealised (theoretical) time unavoidable in mathematical models. But I think we must keep in mind that the mathematical models are part of our language for describing physical reality. A model might reproduce pretty well a physical process but never in full detail; and it would be crazy to identify a mathematical model with physical reality. It is impossible to get rid of this idealized time'' since one needs to express something like change of motion as Zapper noticed. Now, you may not like a hidden variable which you cannot measure (as do I), but I believe Newtonian time to be one of the few exceptions. Whether this t really exists or not is something we cannot decide but assuming the pragmatic attitude that something which is so deeply rooted into our general way of expressing things must be real isn't perhaps that dumb. Last edited: But that could easily be the "fault" of how we define space and have nothing to do directly with time. So why is time degraded to something lower? Equally, that could be a faulty definition of time. But I do not mean that the notion of time is "lower" that that of space. I mean that they are distinct. Time is very much related to motion (hence to energy), therefore, when searching for deeper models of reality and analysing these concepts in detail it would be logical to go beyond the earlier simplifications (such as putting space and time on equal footing). Note that the only reason why we use "periodic motion" to define time is because these are well-known time period that we know very well. It has nothing to do with these being fundamental to the quality we call time itself. That's like saying space is nothing more than that piece of bar sitting in a climate control room somewhere. You are confusing the CONCEPT of time with how we QUANTIFY time. Radioactive decay (being a stochastic process) is not a good example for a time measuring device. In addition, with this you are not going away from the oscillatory motion: in the case of alpha-decay this would correspond to some nonlinear (and, hence, stochastic) oscillatory motions of the nucleus constituents; as for beta-decay, it is assumed that the processes responsible for it to happen are currently not known, but it is quite likely that it happen due to (nonlinear) oscillatory motions of the nucleon components (quarks and gluons). Perhaps all the physical clocks are nonlinear devices but using highly nonlinear clocks would result in extremely messy science. I bet there are no better clocks than photons whose oscillations are predictable and calculable at any circumstances. The "concept" of time is that idealised notion which was introduced by Newton. But I am sure he was aware that, as any idealisation, it has its limitations. SR/GR made a step forward in the development of this notion. But what precludes us from moving further on? Be sure, I am not confusing the concept with the measuring (quantifying) procedure. The question of time is very deep. You still haven't addressed the fundamental CPT symmetry principle that part of how we describe our world. Why is T as fundamentally important as C and P in here, while you don't think so. Sorry, I have forgotten about that because I thought it is obvious: there is a branch of physics called "nonlinear science", in whose many textbooks it is shown how CPT symmetry is broken by nonlinear dissipative processes (motions). Irreversibility is not at all a problem. ZapperZ Staff Emeritus Equally, that could be a faulty definition of time. But I do not mean that the notion of time is "lower" that that of space. I mean that they are distinct. Time is very much related to motion (hence to energy), therefore, when searching for deeper models of reality and analysing these concepts in detail it would be logical to go beyond the earlier simplifications (such as putting space and time on equal footing). But I could also say that motion is very much related to time. You cannot define motion without time. If we all agree that they are all interrelated, then what's the issue here? Why are you picking on "time", when space is equally suspect, or equally valid? Radioactive decay (being a stochastic process) is not a good example for a time measuring device. Why not? In a neutron decay, it takes TIME for something to occur. And not only that, a conglomerate of such particle will ALWAYS decay according to the set decay rate, as if these particles know about time. So for something that you claim to not be "fundamental", nature sure knows how to obey it very, very strictly. In addition, with this you are not going away from the oscillatory motion: in the case of alpha-decay this would correspond to some nonlinear (and, hence, stochastic) oscillatory motions of the nucleus constituents; as for beta-decay, it is assumed that the processes responsible for it to happen are currently not known, but it is quite likely that it happen due to (nonlinear) oscillatory motions of the nucleon components (quarks and gluons). Come again? Alpha decay can be described as a tunneling process that is independent of any oscillatory motion of the nucleus consituents. In other words, even when they do not move, they will still tunnel through. And you're just grasping for speculative straws there with beta-decay. If you'd like to play make-things-up-as-we-go-along, then I can too. Perhaps all the physical clocks are nonlinear devices but using highly nonlinear clocks would result in extremely messy science. I bet there are no better clocks than photons whose oscillations are predictable and calculable at any circumstances. It doesn't matter. We are not talking about quantifying time. We're talking about time as the concept that you have degraded to some non-fundamental quantity. Yet, you have been unable to show how, without using it, you can describe ANY dynamical system fully. Your velocity requires the time rate of change, which you have not addressed. The "concept" of time is that idealised notion which was introduced by Newton. But I am sure he was aware that, as any idealisation, it has its limitations. SR/GR made a step forward in the development of this notion. But what precludes us from moving further on? Be sure, I am not confusing the concept with the measuring (quantifying) procedure. The question of time is very deep. Yes it is, and it simply cannot be dismissed as being not fundamental. If time is an "idealised notion", then so is space, and so is motion, and so is all other notion derived from them. Then what's the problem? Why are we simply picking on "time" here? Sorry, I have forgotten about that because I thought it is obvious: there is a branch of physics called "nonlinear science", in whose many textbooks it is shown how CPT symmetry is broken by nonlinear dissipative processes (motions). Irreversibility is not at all a problem. Nor is it MY point. The FACT that C, P, and T stand TOGETHER implies that you simply cannot downgrade T. Yet, you attempt to do just that. You seem to be missing the whole point of what you are doing here. If you try to do something to "time", why are you ignoring the fact that in physics, "space" and "motion" ALSO follow along. If time is an "illusion", then so is space. I really do not understand why time would be any more special, or any less fundamental, than "space", especially when they are inseparable. In condensed matter physics, there is a whole series of phenomena that is characterized broken time reversal symmetry. This is where such broken symmetry signifies the onset of a particular transition. Unconventional superconductors such as high-Tc superconductors are one such system. Several "ladder magnets" are also characterized by such symmetry. In other words, the time component is an essential ingredient in the description of such system, and nothing else will do. Such a description is as fundamental as describing broken spatial symmetry when water turns into ice. This is not an argument of making "time" to be special. This is an argument on why you are picking on time when space and charge and others are part of the mob also! I have presented several aspects in which time is essential in these description. You have shown nothing in which one could make do without, or discard, time while still preserving the complete description. Zz. Last edited: It is impossible to get rid of this `idealized time'' since one needs to express something like change of motion as Zapper noticed. Now, you may not like a hidden variable which you cannot measure (as do I), but I believe Newtonian time to be one of the few exceptions. Whether this t really exists or not is something we cannot decide but assuming the pragmatic attitude that something which is so deeply rooted into our general way of expressing things must be real isn't perhaps that dumb. Are you sure this is always the case? Of course, we have to parameterise our models. But the notion of the idealised time is completely gone, e.g., in GR, where the evolution of manifolds is represented "statically", since time is put almost on the same footing as space (as Zapper noticed). Everything is measured by using just distance (no time). Equally one can use time or speed (that of light), as I was proposing. Since they are interchangeable, conceptually they represent the same thing. The question is: does really the time concept represent deeply the corresponding aspect of physical reality?
2022-05-19 05:08:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.566334068775177, "perplexity": 556.5420165390494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662525507.54/warc/CC-MAIN-20220519042059-20220519072059-00025.warc.gz"}
http://datavedi.com/model-bias-variance-tradeoff-2/
Home / Predictive Modeling & Machine Learning / 203.4.6 model-Bias Variance Tradeoff ### Model Bias and Variance • Over fitting • Low Bias with High Variance • Low training error – ‘Low Bias’ • High testing error • Unstable model – ‘High Variance’ • The coefficients of the model change with small changes in the data • Under fitting • High Bias with low Variance • High training error – ‘high Bias’ • testing error almost equal to training error • Stable model – ‘Low Variance’ • The coefficients of the model doesn’t change with small changes in the data ### The Bias-Variance Decomposition $Y = f(X)+\epsilon$ $Var(\epsilon) = \sigma^2$ $Squared Error = E[(Y -\hat{f}(x_0))^2 | X = x_0 ]$ $= \sigma^2 + [E\hat{f}(x_0)-f(x_0)]^2 + E[\hat{f}(x_0)-E\hat{f}(x_0)]^2$ $= \sigma^2 + (Bias)^2(\hat{f}(x_0))+Var(\hat{f}(x_0 ))$ Overall Model Squared Error = Irreducible Error + $$Bias^2$$ + Variance ### Bias-Variance Decomposition • Overall Model Squared Error = Irreducible Error + $$Bias^2$$ + Variance • Overall error is made by bias and variance together • High bias low variance, Low bias and high variance, both are bad for the overall accuracy of the model • A good model need to have low bias and low variance or at least an optimal where both of them are jointly low • How to choose such optimal model. How to choose that optimal model complexity
2019-01-23 12:41:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7125278115272522, "perplexity": 4009.489249244587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584331733.89/warc/CC-MAIN-20190123105843-20190123131843-00614.warc.gz"}
https://stats.stackexchange.com/questions/437073/consistency-of-estimators-vs-sample-size
# Consistency of estimators vs sample size I understand that consistency of an estimator is large sample property, but does it make sense to talk about consistency in small samples as well? Can I say about the estimator that it is consistent even though I have a very small sample at hand? And conversely, if I have a consistent estimator (as given), can I say that the sample based on which it was estimated is large? Consistency does not depend on sample size; it judges if an estimator formula converges its target value in probability. This means we take the limit as $$n$$ goes to $$\infty$$ and get rid of the sample size. If you want to somehow quantify the risk you take at your specific sample size, you can look at the variance of your estimator. Consistency is an asymptotic property of an estimator, so it only makes sense in the context of an estimator defined over the sequence of all possible sample sizes. That is, if for any given sample size $$n \in \mathbb{N}$$ we have some estimator $$\hat{\theta}_n: \mathbf{x}_n \rightarrow \Theta$$, then this gives us the sequence of estimators: $$\hat{\theta} \equiv \{ \hat{\theta_n} | n \in \mathbb{N}\}.$$ The property of consistency (strong or weak) is a property that applies to this sequence, and asserts that $$\hat{\theta}_n \rightarrow \theta$$ as $$n \rightarrow \infty$$ (in some probabilistic sense that differs for strong and weak consistency). So it does not make sense to talk about consistency in small samples (or in large samples for that matter). Any talk about consistency concerns the properties of an infinite sequence of estimators.
2020-01-25 22:52:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8391510248184204, "perplexity": 180.87072206034125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251681625.83/warc/CC-MAIN-20200125222506-20200126012506-00150.warc.gz"}
http://gmatclub.com/forum/each-employee-on-a-certain-task-force-is-either-a-manager-of-60186.html?oldest=1
Find all School-related info fast with the new School-Specific MBA Forum It is currently 25 Aug 2016, 01:45 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # Each employee on a certain task force is either a manager of Author Message Senior Manager Joined: 10 Jun 2007 Posts: 346 Location: Newport, RI Followers: 2 Kudos [?]: 36 [0], given: 0 Each employee on a certain task force is either a manager of [#permalink] ### Show Tags 17 Feb 2008, 11:17 00:00 Difficulty: (N/A) Question Stats: 0% (00:00) correct 0% (00:00) wrong based on 0 sessions ### HideShow timer Statistics This topic is locked. If you want to discuss this question please re-post it in the respective forum. Each employee on a certain task force is either a manager of a director. What percentage of the employees on the task force are directors? (1) The average salary of the managers is $5000 less than the average salary of all the employees on the task force. (2) The average salary of the directors is$15000 greater than the average salary of all the employees on the task force. Senior Manager Joined: 10 Jun 2007 Posts: 346 Location: Newport, RI Followers: 2 Kudos [?]: 36 [0], given: 0 Re: DS: Managers vs Directors [#permalink] ### Show Tags 17 Feb 2008, 11:45 never mind found the explanation here 7-t58920?hilit=managers+directors Re: DS: Managers vs Directors   [#permalink] 17 Feb 2008, 11:45 Display posts from previous: Sort by
2016-08-25 08:45:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.189986914396286, "perplexity": 12406.4534031857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292975.73/warc/CC-MAIN-20160823195812-00297-ip-10-153-172-175.ec2.internal.warc.gz"}
http://www1.chapman.edu/~jipsen/calc/FunctionTransformations.html
Function transformations. Suppose $f$ is a function, and $c>0$. The graph of $y=f(x)+c$ is the graph of $f$ shifted $c$ units up $y=f(x)-c$ is the graph of $f$ shifted $c$ units down $y=f(x+c)$ is the graph of $f$ shifted $c$ units left $y=f(x-c)$ is the graph of $f$ shifted $c$ units right Now suppose $c>1$. The graph of $y=cf(x)$ is the graph of $f$ stretched vertically by a factor $c$ $y=(\frac{1}{c})f(x)$ is the graph of $f$ compressed vertically by a factor $c$ $y=f(cx)$ is the graph of $f$ compressed horizontally by a factor $c$ $y=f(x/c)$ is the graph of $f$ stretched horizontally by a factor $c$ $y=-f(x)$ is the graph of $f$ reflected about the $x$-axis $y=f(-x)$ is the graph of $f$ reflected about the $y$-axis
2022-11-26 15:29:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8743773102760315, "perplexity": 55.8132490374258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00455.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-linear-algebra-7th-edition/chapter-5-inner-product-spaces-5-1-length-and-dot-product-in-rn-5-1-exercises-page-236/81
## Elementary Linear Algebra 7th Edition $u\cdot (cv+dw)=0$. Since $u$ is orthogonal to $v$ and $w$, then we get $$u\cdot v=0, \quad u\cdot w=0.$$ Keeping this in mind, now, \begin{align*} u\cdot (cv+dw)&=c (u\cdot v)+d (u\cdot w)\\ &=0+0\\ &=0. \end{align*} Hence, $u$ is orthogonal to $cv+dw$.
2019-12-11 17:11:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.998633623123169, "perplexity": 1123.1104487568111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540531974.7/warc/CC-MAIN-20191211160056-20191211184056-00534.warc.gz"}
http://server3.wikisky.org/starview?object=NGC+5112
WIKISKY.ORG Home Getting Started To Survive in the Universe News@Sky Astro Photo The Collection Forum Blog New! FAQ Press Login # NGC 5112 Contents ### Images DSS Images   Other Images ### Related articles The Hα Galaxy Survey. II. Extinction and [NII] corrections to Hα fluxesWe study the two main corrections generally applied to narrow-bandHα fluxes from galaxies in order to convert them to star formationrates, namely for [NII] contamination and for extinction internal to thegalaxy. From an imaging study using carefully chosen narrow-bandfilters, we find the [NII] and Hα emission to be differentlydistributed. Nuclear measurements are likely to overestimate thecontribution of [NII] to total narrow-band fluxes. We find that in moststar formation regions in galaxy disks the [NII] fraction is small ornegligible, whereas some galaxies display a diffuse central componentwhich can be dominated by [NII] emission. We compare these results withrelated studies in the literature, and consider astrophysicalexplanations for variations in the [NII]/Hα ratio, includingmetallicity variations and different excitation mechanisms. We proceedto estimate the extinction towards star formation regions in spiralgalaxies, firstly using Brγ/Hα line ratios. We find thatextinction values are larger in galaxy nuclei than in disks, that diskextinction values are similar to those derived from opticalemission-line studies in the literature, and that there is no evidencefor heavily dust-embedded regions emerging in the near-IR, which wouldbe invisible at Hα. The numbers of galaxies and individual regionsdetected in Brγ are small, however, and we thus exploit opticalemission line data from the literature to derive global Hαextinction values as a function of galaxy type and inclination. In thispart of our study we find only a moderate dependence on inclination,consistent with broad-band photometric studies, and a large scatter fromgalaxy to galaxy. Typical extinctions are smaller for late-type dwarfsthan for spiral types. Finally, we show that the application of thetype-dependent extinction corrections derived here significantlyimproves the agreement between star formation rates calculated usingHα fluxes and those from far-infrared fluxes as measured by IRAS.This again supports the idea that heavily dust-embedded star formation,which would be underestimated using the Hα technique, is not adominant contributor to the total star formation rate of most galaxiesin the local Universe.Based on observations made with the William Herschel and Jacobus KapteynTelescopes operated on the island of La Palma by the Isaac Newton Groupin the Spanish Observatorio del Roque de los Muchachos of the Institutode Astrofísica de Canarias. The United Kingdom Infrared Telescopeis operated by the Joint Astronomy Centre on behalf of the UK ParticlePhysics and Astronomy Research Council. The Hα galaxy survey. I. The galaxy sample, Hα narrow-band observations and star formation parameters for 334 galaxiesWe discuss the selection and observations of a large sample of nearbygalaxies, which we are using to quantify the star formation activity inthe local Universe. The sample consists of 334 galaxies across allHubble types from S0/a to Im and with recession velocities of between 0and 3000 km s-1. The basic data for each galaxy are narrowband H\alpha +[NII] and R-band imaging, from which we derive starformation rates, H\alpha +[NII] equivalent widths and surfacebrightnesses, and R-band total magnitudes. A strong correlation is foundbetween total star formation rate and Hubble type, with the strongeststar formation in isolated galaxies occurring in Sc and Sbc types. Moresurprisingly, no significant trend is found between H\alpha +[NII]equivalent width and galaxy R-band luminosity. More detailed analyses ofthe data set presented here will be described in subsequent papers.Based on observations made with the Jacobus Kapteyn Telescope operatedon the island of La Palma by the Isaac Newton Group in the SpanishObservatorio del Roque de los Muchachos of the Instituto deAstrofísica de Canarias.The full version of Table \ref{tab3} is available in electronic form atthe CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/414/23 Reduced image datafor this survey can be downloaded fromhttp://www.astro.livjm.ac.uk/HaGS/ Bar Galaxies and Their EnvironmentsThe prints of the Palomar Sky Survey, luminosity classifications, andradial velocities were used to assign all northern Shapley-Ames galaxiesto either (1) field, (2) group, or (3) cluster environments. Thisinformation for 930 galaxies shows no evidence for a dependence of barfrequency on galaxy environment. This suggests that the formation of abar in a disk galaxy is mainly determined by the properties of theparent galaxy, rather than by the characteristics of its environment. An Infrared Space Observatory Atlas of Bright Spiral GalaxiesIn this first paper in a series we present an atlas of infrared imagesand photometry from 1.2 to 180 μm for a sample of bright spiralgalaxies. The atlas galaxies are an optically selected,magnitude-limited sample of 77 spiral and S0 galaxies chosen from theRevised Shapley-Ames Catalog (RSA). The sample is a representativesample of spiral galaxies and includes Seyfert galaxies, LINERs,interacting galaxies, and peculiar galaxies. Using the Infrared SpaceObservatory (ISO), we have obtained 12 μm images and photometry at60, 100, and 180 μm for the galaxies. In addition to its imagingcapabilities, ISO provides substantially better angular resolution thanis available in the IRAS survey, and this permits discrimination betweeninfrared activity in the central regions and global infrared emission inthe disks of these galaxies. These ISO data have been supplemented withJHK imaging using ground-based telescopes. The atlas includes 2 and 12μm images. Following an analysis of the properties of the galaxies,we have compared the mid-infrared and far-infrared ISO photometry withIRAS photometry. The systematic differences we find between the IRASFaint Source Catalog and ISO measurements are directly related to thespatial extent of the ISO fluxes, and we discuss the reliability of IRASFaint Source Catalog total flux densities and flux ratios for nearbygalaxies. In our analysis of the 12 μm morphological features we findthat most but not all galaxies have bright nuclear emission. We find 12μm structures such as rings, spiral arm fragments, knotted spiralarms, and bright sources in the disks that are sometimes brighter thanthe nuclei at mid-infrared wavelengths. These features, which arepresumably associated with extranuclear star formation, are common inthe disks of Sb and later galaxies but are relatively unimportant inS0-Sab galaxies. Based on observations with the Infrared SpaceObservatory (ISO), an ESA project with instruments funded by ESA MemberStates (especially the PI countries: France, Germany, Netherlands, andUnited Kingdom) and with the participation of ISAS and NASA. Neutral hydrogen and optical observations of edge-on galaxies: Hunting for warpsWe present 21-cm HI line and optical R-band observations for a sample of26 edge-on galaxies. The HI observations were obtained with theWesterbork Synthesis Radio Telescope, and are part of the WHISP database(Westerbork HI Survey of Spiral and Irregular Galaxies). We present HImaps, optical images, and radial HI density profiles. We have alsoderived the rotation curves and studied the warping and lopsidedness ofthe HI disks. 20 out of the 26 galaxies of our sample are warped,confirming that warping of the HI disks is a very common phenomenon indisk galaxies. Indeed, we find that all galaxies that have an extendedHI disk with respect to the optical are warped. The warping usuallystarts around the edge of the optical disk. The degree of warping variesconsiderably from galaxy to galaxy. Furthermore, many warps areasymmetric, as they show up in only one side of the disk or exhibitlarge differences in amplitude in the approaching and receding sides ofthe galaxy. These asymmetries are more pronounced in rich environments,which may indicate that tidal interactions are a source of warpasymmetry. A rich environment tends to produce larger warps as well. Thepresence of lopsidedness seems to be related to the presence of nearbycompanions. Full Fig. 13 is only available in electronic form athttp://www.edpsciences.org The Frequency of Active and Quiescent Galaxies with Companions: Implications for the Feeding of the NucleusWe analyze the idea that nuclear activity, either active galactic nuclei(AGNs) or star formation, can be triggered by interactions by studyingthe percentage of active, H II, and quiescent galaxies with companions.Our sample was selected from the Palomar survey and avoids selectionbiases faced by previous studies. This sample was split into fivedifferent groups, Seyfert galaxies, LINERs, transition galaxies, H IIgalaxies, and absorption-line galaxies. The comparison between the localgalaxy density distributions of the different groups showed that in mostcases there is no statistically significant difference among galaxies ofdifferent activity types, with the exception that absorption-linegalaxies are seen in higher density environments, since most of them arein the Virgo Cluster. The comparison of the percentage of galaxies withnearby companions showed that there is a higher percentage of LINERs,transition galaxies, and absorption-line galaxies with companions thanSeyfert and H II galaxies. However, we find that when we consider onlygalaxies of similar morphological types (elliptical or spiral), there isno difference in the percentage of galaxies with companions amongdifferent activity types, indicating that the former result was due tothe morphology-density effect. In addition, only small differences arefound when we consider galaxies with similar Hα luminosities. Thecomparison between H II galaxies of different Hα luminositiesshows that there is a significantly higher percentage of galaxies withcompanions among H II galaxies with L(Hα)>1039 ergss-1 than among those with L(Hα)<=1039ergs s-1, indicating that interactions increase the amount ofcircumnuclear star formation, in agreement with previous results. Thefact that we find that galaxies of different activity types have thesame percentage of companions suggests that interactions betweengalaxies is not a necessary condition to trigger the nuclear activity inAGNs. We compare our results with previous ones and discuss theirimplications. Nearby Optical Galaxies: Selection of the Sample and Identification of GroupsIn this paper we describe the Nearby Optical Galaxy (NOG) sample, whichis a complete, distance-limited (cz<=6000 km s-1) andmagnitude-limited (B<=14) sample of ~7000 optical galaxies. Thesample covers 2/3 (8.27 sr) of the sky (|b|>20deg) andappears to have a good completeness in redshift (97%). We select thesample on the basis of homogenized corrected total blue magnitudes inorder to minimize systematic effects in galaxy sampling. We identify thegroups in this sample by means of both the hierarchical and thepercolation friends-of-friends'' methods. The resulting catalogs ofloose groups appear to be similar and are among the largest catalogs ofgroups currently available. Most of the NOG galaxies (~60%) are found tobe members of galaxy pairs (~580 pairs for a total of ~15% of objects)or groups with at least three members (~500 groups for a total of ~45%of objects). About 40% of galaxies are left ungrouped (field galaxies).We illustrate the main features of the NOG galaxy distribution. Comparedto previous optical and IRAS galaxy samples, the NOG provides a densersampling of the galaxy distribution in the nearby universe. Given itslarge sky coverage, the identification of groups, and its high-densitysampling, the NOG is suited to the analysis of the galaxy density fieldof the nearby universe, especially on small scales. Investigations of the Local Supercluster velocity field. III. Tracing the backside infall with distance moduli from the direct Tully-Fisher relationWe have extended the discussion of Paper II (Ekholm et al.\cite{Ekholm99a}) to cover also the backside of the Local Supercluster(LSC) by using 96 galaxies within Theta <30degr from the adoptedcentre of LSC and with distance moduli from the direct B-bandTully-Fisher relation. In order to minimize the influence of theMalmquist bias we required log Vmax>2.1 and sigmaB_T<0.2mag. We found out that ifRVirgo<20 Mpc this sample fails to follow the expecteddynamical pattern from the Tolman-Bondi (TB) model. When we compared ourresults with the Virgo core galaxies given by Federspiel et al.(\cite{Federspiel98}) we were able to constrain the distance to Virgo:RVirgo=20-24 Mpc. When analyzing the TB-behaviour of thesample as seen from the origin of the metric as well as that withdistances from the extragalactic Cepheid PL-relation we found additionalsupport to the estimate RVirgo= 21 Mpc given in Paper II.Using a two-component mass-model we found a Virgo mass estimateMVirgo=(1.5 - 2)x Mvirial, whereMvirial=9.375*E14Msun forRVirgo= 21 Mpc. This estimate agrees with the conclusion inPaper I (Teerikorpi et al. \cite{Teerikorpi92}). Our results indicatethat the density distribution of luminous matter is shallower than thatof the total gravitating matter when q0<= 0.5. Thepreferred exponent in the density power law, alpha ~2.5, agrees withrecent theoretical work on the universal density profile of dark matterclustering in an Einstein-deSitter universe (Tittley & Couchman\cite{Tittley99}). The QDOT all-sky IRAS galaxy redshift surveyWe describe the construction of the QDOT survey, which is publiclyavailable from an anonymous FTP account. The catalogue consists ofinfrared properties and redshifts of an all-sky sample of 2387 IRASgalaxies brighter than the IRAS PSC 60-μm completeness limit(S_60>0.6Jy), sparsely sampled at a rate of one-in-six. At |b|>10deg, after removing a small number of Galactic sources, the redshiftcompleteness is better than 98per cent (2086/2127). New redshifts for1401 IRAS sources were obtained to complete the catalogue; themeasurement and reduction of these are described, and the new redshiftstabulated here. We also tabulate all sources at |b|>10 deg with noredshift so far, and sources with conflicting alternative redshiftseither from our own work, or from published velocities. A list of 95ultraluminous galaxies (i.e. with L_60μm>10^12 L_solar) is alsoprovided. Of these, ~20per cent are AGN of some kind; the broad-lineobjects typically show strong Feii emission. Since the publication ofthe first QDOT papers, there have been several hundred velocity changes:some velocities are new, some QDOT velocities have been replaced by moreaccurate values, and some errors have been corrected. We also present anew analysis of the accuracy and linearity of IRAS 60-μm fluxes. Wefind that the flux uncertainties are well described by a combination of0.05-Jy fixed size uncertainty and 8per cent fractional uncertainty.This is not enough to cause the large Malmquist-type errors in the rateof evolution postulated by Fisher et al. We do, however, find marginalevidence for non-linearity in the PSC 60-μm flux scale, in the sensethat faint sources may have fluxes overestimated by about 5per centcompared with bright sources. We update some of the previous scientificanalyses to assess the changes. The main new results are as follows. (1)The luminosity function is very well determined overall but is uncertainby a factor of several at the very highest luminosities(L_60μm>5x10^12L_solar), as this is where the remainingunidentified objects are almost certainly concentrated. (2) Thebest-fitting rate of evolution is somewhat lower than our previousestimate; expressed as pure density evolution with density varying as(1+z)^p, we find p=5.6+/-2.3. Making a rough correction for the possible(but very uncertain) non-linearity of fluxes, we find p=4.5+/-2.3. (3)The dipole amplitude decreases a little, and the implied value of thedensity parameter, assuming that IRAS galaxies trace the mass, isΩ=0.9(+0.45, -0.25). (4) Finally, the estimate of density varianceon large scales changes negligibly, still indicating a significantdiscrepancy from the predictions of simple cold dark matter cosmogonies. Arcsecond Positions of UGC GalaxiesWe present accurate B1950 and J2000 positions for all confirmed galaxiesin the Uppsala General Catalog (UGC). The positions were measuredvisually from Digitized Sky Survey images with rms uncertaintiesσ<=[(1.2")2+(θ/100)2]1/2,where θ is the major-axis diameter. We compared each galaxymeasured with the original UGC description to ensure high reliability.The full position list is available in the electronic version only. The I-Band Tully-Fisher Relation for SC Galaxies: 21 Centimeter H I Line DataA compilation of 21 cm line spectral parameters specifically designedfor application of the Tully-Fisher (TF) distance method is presentedfor 1201 spiral galaxies, primarily field Sc galaxies, for which opticalI-band photometric imaging is also available. New H I line spectra havebeen obtained for 881 galaxies. For an additional 320 galaxies, spectraavailable in a digital archive have been reexamined to allow applicationof a single algorithm for the derivation of the TF velocity widthparameter. A velocity width algorithm is used that provides a robustmeasurement of rotational velocity and permits an estimate of the erroron that width taking into account the effects of instrumental broadeningand signal-to-noise. The digital data are used to establish regressionrelations between measurements of velocity widths using other commonprescriptions so that comparable widths can be derived throughconversion of values published in the literature. The uniform H I linewidths presented here provide the rotational velocity measurement to beused in deriving peculiar velocities via the TF method. The I-Band Tully-Fisher Relation for SC Galaxies: Optical Imaging DataProperties derived from the analysis of photometric I-band imagingobservations are presented for 1727 inclined spiral galaxies, mostly oftypes Sbc and Sc. The reduction, parameter extraction, and errorestimation procedures are discussed in detail. The asymptotic behaviorof the magnitude curve of growth and the radial variation in ellipticityand position angle are used in combination with the linearity of thesurface brightness falloff to fit the disk portion of the profile. TotalI-band magnitudes are calculated by extrapolating the detected surfacebrightness profile to a radius of eight disk scale lengths. Errors inthe magnitudes, typically ~0.04 mag, are dominated by uncertainties inthe sky subtraction and disk-fitting procedures. Comparison is made withthe similar imaging database of Mathewson, Ford, & Buchhorn, both aspresented originally by those authors and after reanalyzing theirdigital reduction files using identical disk-fitting procedures. Directcomparison is made of profile details for 292 galaxies observed incommon. Although some differences occur, good agreement is found,proving that the two data sets can be used in combination with onlyminor accommodation of those differences. The compilation of opticalproperties presented here is optimized for use in applications of theTully-Fisher relation as a secondary distance indicator in studies ofthe local peculiar velocity field. Groups of galaxies. III. Some empirical characteristics.Not Available Bulge-Disk Decomposition of 659 Spiral and Lenticular Galaxy Brightness ProfilesWe present one of the largest homogeneous sets of spiral and lenticulargalaxy brightness profile decompositions completed to date. The 659galaxies in our sample have been fitted with a de Vaucouleurs law forthe bulge component and an inner-truncated exponential for the diskcomponent. Of the 659 galaxies in the sample, 620 were successfullyfitted with the chosen fitting functions. The fits are generally welldefined, with more than 90% having rms deviations from the observedprofile of less than 0.35 mag. We find no correlations of fittingquality, as measured by these rms residuals, with either morphologicaltype or inclination. Similarly, the estimated errors of the fittedcoefficients show no significant trends with type or inclination. Thesedecompositions form a useful basis for the study of the lightdistributions of spiral and lenticular galaxies. The object base issufficiently large that well-defined samples of galaxies can be selectedfrom it. Catalogue of HI maps of galaxies. I.A catalogue is presented of galaxies having large-scale observations inthe HI line. This catalogue collects from the literature the informationthat characterizes the observations in the 21-cm line and the way thatthese data were presented by means of maps, graphics and tables, forshowing the distribution and kinematics of the gas. It containsfurthermore a measure of the HI extension that is detected at the levelof the maximum sensitivity reached in the observations. This catalogueis intended as a guide for references on the HI maps published in theliterature from 1953 to 1995 and is the basis for the analysis of thedata presented in Paper II. The catalogue is only available inelectronic form at the CDS via anonymous ftp 130.79.128.5 orhttp://cdsweb.u-strasbg.fr/Abstract.html Total magnitude, radius, colour indices, colour gradients and photometric type of galaxiesWe present a catalogue of aperture photometry of galaxies, in UBVRI,assembled from three different origins: (i) an update of the catalogueof Buta et al. (1995) (ii) published photometric profiles and (iii)aperture photometry performed on CCD images. We explored different setsof growth curves to fit these data: (i) The Sersic law, (ii) The net ofgrowth curves used for the preparation of the RC3 and (iii) A linearinterpolation between the de Vaucouleurs (r(1/4) ) and exponential laws.Finally we adopted the latter solution. Fitting these growth curves, wederive (1) the total magnitude, (2) the effective radius, (3) the colourindices and (4) gradients and (5) the photometric type of 5169 galaxies.The photometric type is defined to statistically match the revisedmorphologic type and parametrizes the shape of the growth curve. It iscoded from -9, for very concentrated galaxies, to +10, for diffusegalaxies. Based in part on observations collected at the Haute-ProvenceObservatory. A catalogue of spatially resolved kinematics of galaxies: BibliographyWe present a catalogue of galaxies for which spatially resolved data ontheir internal kinematics have been published; there is no a priorirestriction regarding their morphological type. The catalogue lists thereferences to the articles where the data are published, as well as acoded description of these data: observed emission or absorption lines,velocity or velocity dispersion, radial profile or 2D field, positionangle. Tables 1, 2, and 3 are proposed in electronic form only, and areavailable from the CDS, via anonymous ftp to cdsarc.u-strasbg.fr (to130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html A Search for Dwarf'' Seyfert Nuclei. III. Spectroscopic Parameters and Properties of the Host GalaxiesWe have completed an optical spectroscopic survey of the nuclear regions(r <~ 200 pc) of a large sample of nearby galaxies. Although the mainobjectives of the survey are to search for low-luminosity activegalactic nuclei and to quantify their luminosity function, the databasecan be used for a variety of other purposes. This paper presentsmeasurements of the spectroscopic parameters for the 418 emission-linenuclei, along with a compilation of the global properties of all 486galaxies in the survey. Stellar absorption generally poses a seriousobstacle to obtaining accurate measurement of emission lines in nearbygalactic nuclei. We describe a procedure for removing the starlight fromthe observed spectra in an efficient and objective manner. The mainparameters of the emission lines (intensity ratios, fluxes, profilewidths, and equivalent widths) are measured and tabulated, as areseveral stellar absorption-line and continuum indices useful forstudying the stellar population. Using standard nebular diagnostics, wedetermine the probable ionization mechanisms of the emission-lineobjects. The resulting spectral classifications provide extensiveinformation on the demographics of emission-line nuclei in the nearbyregions of the universe. This new catalog contains over 200 objectsshowing spectroscopic evidence for recent star formation and an equallylarge number of active galactic nuclei, including 46 that show broad Halpha emission. These samples will serve as the basis of future studiesof nuclear activity in nearby galaxies. An image database. II. Catalogue between δ=-30deg and δ=70deg.A preliminary list of 68.040 galaxies was built from extraction of35.841 digitized images of the Palomar Sky Survey (Paper I). For eachgalaxy, the basic parameters are obtained: coordinates, diameter, axisratio, total magnitude, position angle. On this preliminary list, weapply severe selection rules to get a catalog of 28.000 galaxies, wellidentified and well documented. For each parameter, a comparison is madewith standard measurements. The accuracy of the raw photometricparameters is quite good despite of the simplicity of the method.Without any local correction, the standard error on the total magnitudeis about 0.5 magnitude up to a total magnitude of B_T_=17. Significantsecondary effects are detected concerning the magnitudes: distance toplate center effect and air-mass effect. Redshift periodicity in the Local Supercluster.Persistent claims have been made over the last ~15yr that extragalacticredshifts, when corrected for the Sun's motion around the Galacticcentre, occur in multiples of ~24 or ~36km/s. A recent investigation byus of 40 spiral galaxies out to 1000km/s, with accurately measuredredshifts, gave evidence of a periodicity ~37.2-37.7km/s. Here we extendour enquiry out to the edge of the Local Supercluster (~2600km/s),applying a simple and robust procedure to a total of 97 accuratelydetermined redshifts. We find that, when corrected for related vectorsclose to recent estimates of the Sun's galactocentric motion, theredshifts of spirals are strongly periodic (P~37.6km/s). The formalconfidence level of the result is extremely high, and the signal is seenindependently with different radio telescopes. We also examine a furthersample of 117 spirals observed with the 300-foot Green Bank telescopealone. The periodicity phenomenon appears strongest for the galaxieslinked by group membership, but phase coherence probably holds overlarge regions of the Local Supercluster. A search for 'dwarf' Seyfert nuclei. 2: an optical spectral atlas of the nuclei of nearby galaxiesWe present an optical spectral atlas of the nuclear region (generally 2sec x 4 sec, or r approximately less than 200 pc) of a magnitude-limitedsurvey of 486 nearby galaxies having BT less than or = 12.5mag and delta greater than 0 deg. The double spectrograph on the Hale 5m telescope yielded simultaneous spectral coverage of approximately4230-5110 A and approximately 6210-6860 A, with a spectral resolution ofapproximately 4 A in the blue half and approximately 2.5 A in the redhalf. This large, statistically significant survey contains uniformlyobserved and calibrated moderate-dispersion spectra of exceptionallyhigh quality. The data presented in this paper will be used for varioussystematic studies of the physical properties of the nuclei of nearbygalaxies, with special emphasis on searching for low-luminosity activegalactic nuclei, or 'dwarf' Seyferts. Our survey led to the discovery offour relatively obvious but previously uncataloged Seyfert galaxies (NGC3735, 492, 4639, and 6951), and many more galactic nuclei showingevidence for Seyfert activity. We have also identified numerouslow-ionization nuclear emission-line regions (LINERs), some of which maybe powered by nonstellar processes. Of the many 'starburst' nuclei inour sample, several exhibit the spectral features of Wolf-Rayet stars. A Preliminary Classification Scheme for the Central Regions of Late-Type GalaxiesThe large-scale prints in The Carnegie Atlas of Galaxies have been usedto formulate a classification scheme for the central regions oflate-type galaxies. Systems that exhibit small bright central bulges ordisks (type CB) are found to be of earlier Hubble type and of higherluminosity than galaxies that do not contain nuclei (type NN). Galaxiescontaining nuclear bars, or exhibiting central regions that are resolvedinto individual stars and knots, and galaxies with semistellar nuclei,are seen to have characteristics that are intermediate between those oftypes CB and NN. The presence or absence of a nucleus appears to be auseful criterion for distinguishing between spiral galaxies andmagellanic irregulars. Quantitative Morphology of Bars in Spiral GalaxiesAs suggested by numerical simulations, the axis ratio of the bar is afundamental parameter to describe the dynamical evolution of a barredgalaxy. In a first-order approximation considering bars as ellipticalfeatures, visual measurements of bar axis ratios and lengths of 136spiral galaxies were performed on photographs of good linear scale.Despite the limitations affecting such measurements, morphologicalproperties of bars in spirals along the Hubble sequence as well as therelationship between the bar axis ratio and nuclear star formationactivity are studied. It is found that the relative length of bars inearly-type galaxies is, on average, about a factor of 3 larger than thelength observed in late-type spirals. Also, a relation between barlengths and bulge diameters is observed for both early-type andlate-type spirals, confirming results from previous works. Furthermore,although the number of objects is small, there is an apparentcorrelation between the presence of nuclear star formation activity andthe bar axis ratio: about 71% of the starburst galaxies included in thesample have a strong bar (b/a < 0.6). The introduction of thesequantitative parameters in galaxy classification schemes is discussed. A radio continuum survey of Shapley-Ames Galaxies at λ2.8 cm. I. Atlas of radio data.We present measurements of the radio continuum emission at λ2.8cm of a nearly complete sample of spiral galaxies. The sample consistsof the Shapley-Ames galaxies north of δ=-25deg and brighter thanB_T_=+12. The large, nearby galaxies were not observed during thesurvey, but measured with high sensitivity in individual projects. Theradioweak galaxies were also excluded. The observational results and thederived flux densities are given and compared with that of otherobservations. Pecularities of the radio emission of individual galaxiesare discussed. On the size and formation mechanism of the largest star-forming complexes in spiral and irregular galaxiesThe average diameters of the largest star complexes in most of thespiral and irregular galaxies in the Sandage and Bedke Atlas of Galaxieswere measured from the Atlas photographs. The complex diametersDc correlate with galaxy magnitude as Dc = 0.18 -0.14MB, which has about the same slope as the correlation forthe largest H II regions studied by Kennicutt. There is no obviouscorrelation between Dc and either Hubble type or spiral armclass at a given magnitude. The variation of Dc withMB closely matches the expected variation in thecharacteristic length of the gaseous gravitational instabilityconsidering that the rotation curve varies with MB and thatthe stability parameter Q is about 1 in the outer regions of the disk.This match corresponds to an effective velocity dispersion of 6.1 km/sthat is about the same for all spiral and irregular galaxies. Arm structure in normal spiral galaxies, 1: Multivariate data for 492 galaxiesMultivariate data have been collected as part of an effort to develop anew classification system for spiral galaxies, one which is notnecessarily based on subjective morphological properties. A sample of492 moderately bright northern Sa and Sc spirals was chosen for futurestatistical analysis. New observations were made at 20 and 21 cm; thelatter data are described in detail here. Infrared Astronomy Satellite(IRAS) fluxes were obtained from archival data. Finally, new estimatesof arm pattern radomness and of local environmental harshness werecompiled for most sample objects. Further Analysis of a Complete Sample in the Virgo Supercluster of GalaxiesIn the context of the Local (Virgo-centred) Supercluster of galaxies, astatistical study is made of the radio-derived properties for the 165spiral and irregular galaxies in the recently published CompleteSample' by Fitt & Alexander. I find a trend for a larger neutralhydrogen mass in spiral galaxies located farther from the plane of theLocal Supercluster (or smaller HI mass in spiral galaxies closer to theLocal Supercluster plane). A previously known result, of a deficiency ofHI mass in spiral galaxies located radially close to the centre of theVirgo cluster, is seen again in the Complete Sample'. Contrary to sometheoretical predictions, the magnetic field strength in spiral galaxiesdoes not seem to be well correlated with the star-formation efficiencyor the neutral hydrogen mass. Magnetic fields in late-type galaxiesMagnitudes of the volume-averaged magnetic field have been derived for arepresentative sample of 146 late-type galaxies using the minimum-energycondition and a simple model for the distribution of radio-emittingplasma within these galaxies. The distribution of derived magneticfields is narrow (Beq = 0.29 +/- 0.11 nT) and skewed tohigher values. There is little or no dependence of the derived magneticfield on galactic type, although the archetypal starburst M82 isanomalous. We find no correlation between the scatter in the FIR/radiocorrelation and variation in the derived strength of the magnetic field. A revised catalog of CfA1 galaxy groups in the Virgo/Great Attractor flow fieldA new identification of groups and clusters in the CfA1 Catalog ofHuchra et al. is presented, using a percolation algorithm to identifydensity enhancements. It is shown that in the resulting catalog,contamination by interlopers is significantly reduced. The Schechterluminosity function is redetermined, including the Malmquist bias. General study of group membership. II - Determination of nearby groupsWe present a whole sky catalog of nearby groups of galaxies taken fromthe Lyon-Meudon Extragalactic Database. From the 78,000 objects in thedatabase, we extracted a sample of 6392 galaxies, complete up to thelimiting apparent magnitude B0 = 14.0. Moreover, in order to considersolely the galaxies of the local universe, all the selected galaxieshave a known recession velocity smaller than 5500 km/s. Two methods wereused in group construction: a Huchra-Geller (1982) derived percolationmethod and a Tully (1980) derived hierarchical method. Each method gaveus one catalog. These were then compared and synthesized to obtain asingle catalog containing the most reliable groups. There are 485 groupsof a least three members in the final catalog. Submit a new article
2018-11-16 16:35:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6511885523796082, "perplexity": 5160.234210352957}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743105.25/warc/CC-MAIN-20181116152954-20181116174954-00331.warc.gz"}
https://learn.careers360.com/ncert/ncert-solutions-class-12-chemistry-chapter-15-polymers/
# NCERT Solutions for Class 12 Chemistry chapter 15 Polymers NCERT Solutions for Class 12 Chemistry Chapter 15 Polymers - Hey are you stuck while solving your homework problems? Now you can resolve all your doubts here. Just scroll down to get CBSE NCERT solutions for class 12 chemistry chapter 15 Polymers. In this chapter, you will deal with the science of polymers this chapter covers important concepts such as polymers, monomers, polymerization, types of polymers, classification of polymers based on their source and structure, cross-linked or linear polymer with their properties and importance of polymers in daily life. Solutions of NCERT class 12 chemistry chapter 15  polymers contains a total of 6 intext questions and 20 questions in the exercise at chapter end. The NCERT solutions for class 12 chemistry chapter 15 Polymers are prepared in a comprehensive manner so that you will also learn how to write answers in your exams. These NCERT solutions help you in their preparation of CBSE boards exams as well as in the various competitive exams like JEE, NEET, etc. This chapter holds 3 marks in the CBSE boards exams and after completing the NCERT solutions for class 12 chemistry chapter 15 polymers students will be able to explain the terms like polymer, monomer, and polymerization and realise their importance, able to distinguish between different types of polymerization processes and different classes of polymers. This chapter also explains the formation of some important synthetic polymers and their uses and properties. Find all the solutions of NCERT class 12 chemistry chapter 15 by scrolling down. Important terms and points of Class 12 Chemistry Chapter 15 Polymers- 1. Polymers-  They are very high molecular mass macromolecules, which composed of repeating structural units derived from the monomers. Polymers have a high molecular mass $\dpi{100} (10^3-10^7U)$. Rubber, polythene, and nylon 6, 6 are examples of polymers. 2. Monomers- Monomers are the simple and reactive molecules that combine in large numbers through covalent bonds to give rise to the repeating structural units or polymers. For example propene, vinyl chloride, styrene, etc. ## Topics and Sub-topics of NCERT solutions for Class 12 Chemistry Chapter 15 Polymers- 15.1 Classification of Polymers 15.2 Types of Polymerisation Reactions 15.3 Molecular Mass of Polymers 15.5 Polymers of Commercial Importance ## Solutions to In-Text Questions Ex 15.1  to 15.6 Question Polymers- Poly means many and mer means unit or parts. Polymers are high molecular masses macromolecules$\dpi{80} (10^3-10^7u)$. These are formed by joining of the repeated units of monomers. Question The given above polymer is Nylon6, 6 So, the monomer is adipic acid and hexamethylene diamine Question It is a polymer of Nylon6 . So, the monomeric unit is Caprolactum. Question the above polymer is a Teflon (PTFE) the monomeric unit is tetrafluroethene • Addition Polymers are formed by the direct addition of repeated monomers. Example- polyethene and Teflon • Condensation polymers are formed by condensation of two or more than two monomers by eliminating by-product like water and HCl. Example- terylene and bakelite Question Buna-N It is a copolymer of 1,3-Butadiene and acrylonitrile. It is resistant to the action of petrol, lubricating oil and organic solvents. It is used in making oil seals and tank lining etc. Buna-S It is formed by copolymerisation of 1,3-Butadiene and Styrene. It is used for making automobiles tyres and rubber soles etc. Increasing order in their intermolecular forces- Buna-S(elastomers)<Polyethene<Nylon6, 6(fibres • (Thermoplastics, intermediate forces between elastomers and fibres) • (strong H-bond or dipole-dipole interaction) • elastomers weakest force of attraction ## NCERT Solutions for Class 12 Chemistry Chapter 15 Polymers- Exercise Questions Question Polymers- Poly means many and mer means unit or parts. Polymers are high molecular masses macromolecules. These are formed by joining of repeating unit of monomers. Monomers- These are simple reactive units, which combine together to form large molecules through covalent bond.examples- ethene and hexamethylene diamine, adipic acid. Natural Polymers- Polymers that formed naturally like formed by animals and plants. these are found in nature. Example- protein, starch, cellulose etc. Synthetic Polymers- Polymers made by human beings are called synthetic or man-made polymers. Examples- plastic(polyethene), nylon6,6 and nylon 6 etc. HOMO-POLYMER These types of polymers are formed by polymerisation of one type of monomers.$-[A-A-A-A]_{n}-$ examples - polyethene is the homopolymer of ethene monomers. CO-POLYMER- These types of polymers are formed by the polymerisation of two different monomers. $-[A-B-A-B]_{n}-$ example- Nylon6, 6 is the copolymer of adipic acid and hexamethylene diamine. Question The functionality of a monomer is the number of binding sites present in it. For example, for propene and ethene functionality is one but for adipic acid and 1,3- butadiene is two. Question The process of formation of polymers or high molecular masses$\dpi{80} (10^3-10^7u)$ from its respective monomers is known as polymerisation. In polymers monomers are held by cobalent bonds. $( NH-CHR-CO )_n$, is a homopolymer because it is  obtained from a single monomer of $NH_{2}-CHR-COOH$ Question 15. In elastomers, the polymeric chains are held by weak intermolecular forces of attraction. These weak binding forces allow them to stretch and a few cross-links are there in between the chains, which helps them to retract after stretching or releasing forces. Due to this elastomers are elastic in nature. Example- Buna-S, Buna-N and Neoprene etc. Addition Polymerisation- The process of repeated addition of monomer, having a double or triple bond to form polymers. For example, polyethene is formed by the addition polymerisation of ethene. Condensation Polymerisation- Process of formation of polymers by repeated condensation reaction between two different monomers, having bi-functionality or tri-functionality. A small molecule is eliminated like water and$HCl$ in each condensation step. For example Nylon 6, 6 is a condensation polymerisation of adipic acid and hexamethylene diamine. Question The process of formation of polymers of two or more different monomeric units is known as copolymerisation. For example, Buna-S is formed by the copolymerisation. In free radical mechanism, there are three main steps- 1. Chain initiation 2. Chain propagation 3. Termination 1. Chain Initiation- the polymerisation of ethene to polythene consists of heating or exposing to light a mixture of ethene with a small amount of benzoyl peroxide initiator. Generating new and larger free radicals. 2. Chain Propagation Step-   As the radical reacts with another molecule of ethene. So, another bigger sized radical is formed. The repetition of this step is chain propagation. 3. Chain Termination step- At some time the product radical reacts with another radical to form the polymerised product and this step is called the chain terminating step. Thermoplastic polymers are linear or slightly branched chained molecules. It can be repeatedly softened and hardened on heating. Thus they can be modified again and again. Examples- polyethene and polystyrene. These polymers have intermolecular forces of attraction intermediate between elastomers and fibres. Some examples of common thermoplastics are polyethene, polystyrene, polyvinyls, etc. Thermosetting Plastics are cross-linked and heavily branched molecules, which get hardened during the moulding process. These polymers cannot be reused. For examples bakelite and urea-formaldehyde resin etc. For PVC (polyvinyl chloride) we use vinyl chloride as a monomer. $(CH_{2}=CH-Cl)$ The monomeric unit of Teflon (PTFE) is tetraflouroethene ($CF_{2}=CF_{2}$). It is resistant to heat and chemical attack. The monomeric unit of abkelite is phenol and formaldehyde. (a) phenol- $C_{6}H_{5}OH$ (b) formaldehude- $HCHO$ Natural rubber is a linear polymer of isoprene  (2-methyl-1, 3-butadiene) and is also called as cis - 1, 4 polyisoprenes. Due to the cis configuration about the double bond, it is difficult to come closer for effective compactness due to the weak intermolecular attraction (van der Waals). Thus natural rubber has a coiled structure and it can be stretched like spring(show elastic nature). Question The natural rubber has many flaws in following ways like- • it becomes soft at high temperature and brittle at low temperature (<283K). And • show very high water absorption capacity • soluble in a non-polar solvent and • poor resistant to the attack of oxidising agents. To improve all these physical properties we do vulcanisation of rubber. During this process, sulphur cross-links are formed, which makes it hard, tough with high tensile strength. Question
2020-04-04 13:57:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4770056903362274, "perplexity": 6767.347497686386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524043.56/warc/CC-MAIN-20200404134723-20200404164723-00412.warc.gz"}
http://www.self.gutenberg.org/articles/eng/Rate-distortion_theory
#jsDisabledContent { display:none; } My Account | Register | Help Rate-distortion theory Article Id: WHEBN0000192891 Reproduction Date: Title: Rate-distortion theory Author: World Heritage Encyclopedia Language: English Subject: Collection: Publisher: World Heritage Encyclopedia Publication Date: Rate-distortion theory Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding a given distortion D. Introduction Rate–distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods. Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate–distortion functions. Rate–distortion theory was created by Claude Shannon in his foundational work on information theory. In rate–distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted. The notion of distortion is a subject of on-going discussion. In the most simple case (which is actually used in most cases), the distortion is defined as the expected value of the square of the difference between input and output signal (i.e., the mean squared error ). However, since we know that most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video) the distortion measure should preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory. In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but are often not easy to include in rate–distortion theory. In image and video compression, the human perception models are less well developed and inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrix. Rate–distortion functions The functions that relate the rate and distortion are found as the solution of the following minimization problem: $\inf_\left\{Q_\left\{Y|X\right\}\left(y|x\right)\right\} I_Q\left(Y;X\right)\ \mbox\left\{subject to\right\}\ D_Q \le D^*.$ Here QY | X(y | x), sometimes called a test channel, is the conditional probability density function (PDF) of the communication channel output (compressed signal) Y for a given input (original signal) X, and IQ(Y ; X) is the mutual information between Y and X defined as $I\left(Y;X\right) = H\left(Y\right) - H\left(Y|X\right) \,$ where H(Y) and H(Y | X) are the entropy of the output signal Y and the conditional entropy of the output signal given the input signal, respectively: $H\left(Y\right) = - \int_\left\{-\infty\right\}^\infty P_Y \left(y\right) \log_\left\{2\right\} \left(P_Y \left(y\right)\right)\,dy$ $H\left(Y|X\right) =$ - \int_{-\infty}^{\infty} \int_{-\infty}^\infty Q_{Y|X}(y|x) P_X (x) \log_{2} (Q_{Y|X} (y|x))\, dx\, dy. The problem can also be formulated as a distortion–rate function, where we find the infimum over achievable distortions for given rate constraint. The relevant expression is: $\inf_\left\{Q_\left\{Y|X\right\}\left(y|x\right)\right\} E\left[D_Q\left[X,Y\right]\right] \mbox\left\{subject to\right\}\ I_Q\left(Y;X\right)\leq R.$ The two formulations lead to functions which are inverses of each other. The mutual information can be understood as a measure for prior uncertainty the receiver has about the sender's signal (H(Y)), diminished by the uncertainty that is left after receiving information about the sender's signal (H(Y | X)). Of course the decrease in uncertainty is due to the communicated amount of information, which is I(Y; X). As an example, in case there is no communication at all, then H(Y |X) = H(Y) and I(Y; X) = 0. Alternatively, if the communication channel is perfect and the received signal Y is identical to the signal X at the sender, then H(Y | X) = 0 and I(Y; X) = H(Y) = H(X). In the definition of the rate–distortion function, DQ and D* are the distortion between X and Y for a given QY | X(y | x) and the prescribed maximum distortion, respectively. When we use the mean squared error as distortion measure, we have (for amplitude-continuous signals): $D_Q = \int_\left\{-\infty\right\}^\infty \int_\left\{-\infty\right\}^\infty$ P_{X,Y}(x,y) (x-y)^2\, dx\, dy = \int_{-\infty}^\infty \int_{-\infty}^\infty Q_{Y|X}(y|x)P_{X}(x) (x-y)^2\, dx\, dy. As the above equations show, calculating a rate–distortion function requires the stochastic description of the input X in terms of the PDF PX(x), and then aims at finding the conditional PDF QY | X(y | x) that minimize rate for a given distortion D*. These definitions can be formulated measure-theoretically to account for discrete and mixed random variables as well. An analytical solution to this minimization problem is often difficult to obtain except in some instances for which we next offer two of the best known examples. The rate–distortion function of any source is known to obey several fundamental properties, the most important ones being that it is a continuous, monotonically decreasing convex (U) function and thus the shape for the function in the examples is typical (even measured rate–distortion functions in real life tend to have very similar forms). Although analytical solutions to this problem are scarce, there are upper and lower bounds to these functions including the famous Shannon lower bound (SLB), which in the case of squared error and memoryless sources, states that for arbitrary sources with finite differential entropy, $R\left(D\right) \ge h\left(X\right) - h\left(D\right) \,$ where h(D) is the differential entropy of a Gaussian random variable with variance D. This lower bound is extensible to sources with memory and other distortion measures. One important feature of the SLB is that it is asymptotically tight in the low distortion regime for a wide class of sources and in some occasions, it actually coincides with the rate–distortion function. Shannon Lower Bounds can generally be found if the distortion between any two numbers can be expressed as a function of the difference between the value of these two numbers. The Blahut–Arimoto algorithm, co-invented by Richard Blahut, is an elegant iterative technique for numerically obtaining rate–distortion functions of arbitrary finite input/output alphabet sources and much work has been done to extend it to more general problem instances. When working with stationary sources with memory, it is necessary to modify the definition of the rate distortion function and it must be understood in the sense of a limit taken over sequences of increasing lengths. R(D) = \lim_{n \rightarrow \infty} R_n(D) where R_n(D) = \frac{1}{n} \inf_{Q_{Y^n|X^n} \in \mathcal{Q}} I(Y^n, X^n) and \mathcal{Q} = \{ Q_{Y^n|X^n}(Y^n|X^n,X_0): E[d(X^n,Y^n)] \leq D \} where superscripts denote a complete sequence up to that time and the subscript 0 indicates initial state. Memoryless (independent) Gaussian source If we assume that PX(x) is Gaussian with variance σ2, and if we assume that successive samples of the signal X are stochastically independent (or equivalently, the source is memoryless, or the signal is uncorrelated), we find the following analytical expression for the rate–distortion function: $R\left(D\right) = \left\\left\{ \begin\left\{matrix\right\}$ \frac{1}{2}\log_2(\sigma_x^2/D ), & \mbox{if } D \le \sigma_x^2 \\ \\ 0, & \mbox{if } D > \sigma_x^2. \end{matrix} \right. The following figure shows what this function looks like: Rate–distortion theory tell us that no compression system exists that performs outside the gray area. The closer a practical compression system is to the red (lower) bound, the better it performs. As a general rule, this bound can only be attained by increasing the coding block length parameter. Nevertheless, even at unit blocklengths one can often find good (scalar) quantizers that operate at distances from the rate–distortion function that are practically relevant. This rate–distortion function holds only for Gaussian memoryless sources. It is known that the Gaussian source is the most "difficult" source to encode: for a given mean square error, it requires the greatest number of bits. The performance of a practical compression system working on—say—images, may well be below the R(D) lower bound shown. Connecting rate-distortion theory to channel capacity [1] Suppose we want to transmit information about a source to the user with a distortion not exceeding D. Rate–distortion theory tells us that at least R(D) bits/symbol of information from the source must reach the user. We also know from Shannon's channel coding theorem that if the source entropy is H bits/symbol, and the channel capacity is C (where C < H), then H − C bits/symbol will be lost when transmitting this information over the given channel. For the user to have any hope of reconstructing with a maximum distortion D, we must impose the requirement that the information lost in transmission does not exceed the maximum tolerable loss of H − R(D) bits/symbol. This means that the channel capacity must be at least as large as R(D).
2019-06-24 09:09:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8633641004562378, "perplexity": 660.1736324321128}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999298.86/warc/CC-MAIN-20190624084256-20190624110256-00417.warc.gz"}
http://www.ncatlab.org/nlab/show/category+of+generalized+elements
# Idea Given a collection of “parameterized objects”, i.e. a functor $F:C\to D$, it is often of interest to consider the category whose objects are generalized elements of the objects of $D$ in the image of $F$, and whose morphisms are the maps between these generalized elements induced by the value of $F$ on morphisms in $C$. For $D=$ Set and and with generalized element read as “ordinary element of a set” is yields the category of elements of the (co)presheaf $F:C\to D$. Moreover, the description of of the category of elements of a presheaf in terms of a pullback of a generalized universal bundle generalizes directly to categories of generalized elements. # Definition Let $D$ be a pointed object in Cat, i.e. a category equipped with a choice ${\mathrm{pt}}_{D}:*\to D$ of one of its objects. Recall that a morphism ${\mathrm{pt}}_{D}\to d$ in $D$ may be called a generalized element in $D$ “with domain of definition” being the object ${\mathrm{pt}}_{D}$. For instance if $D=$ Set the canonical choice is ${\mathrm{pt}}_{\mathrm{Set}}=*$ the set with a single element. Generalized elements of a set “with domain of definition” $*$ are just the ordinary elements of a set. Notice that the over category $\left({\mathrm{pt}}_{D}/D\right)$ is the category of generalized elements of $D$ with domain of definition ${\mathrm{pt}}_{D}$: • objects are such generalized elements $\delta :{\mathrm{pt}}_{D}\to d$ of objects $d\in D$; • morphisms $\delta \to \gamma$ are given whenever a morphism $f:d\to d\prime$ in $D$ takes the element $\delta$ to $\delta \prime$, i.e. whenever there is a commuting triangle $\begin{array}{ccc}& & {\mathrm{pt}}_{D}\\ & {}^{\delta }↙& & {↘}^{\delta \prime }\\ d& & \stackrel{f}{\to }& & d\prime \end{array}\phantom{\rule{thinmathspace}{0ex}}.$\array{ && pt_D \\ & {}^\delta\swarrow && \searrow^{\delta'} \\ d &&\stackrel{f}{\to}&& d' } \,. Notice that the canonical projection $\left({\mathrm{pt}}_{D}/D\right)\to D$ from the over category that forgets the tip of these trangles may be regarded as the generalized universal bundle for the given pointed category $D$: it is the left composite vertical morphism in the pullback $\begin{array}{ccc}\left({\mathrm{pt}}_{D}/D\right)& \to & *\\ ↓& & ↓\\ {D}^{I}& \stackrel{{d}_{0}}{\to }& D\\ {↓}^{{d}_{1}}\\ D\end{array}$\array{ (pt_D/D) &\to& {*} \\ \downarrow && \downarrow \\ D^I &\stackrel{d_0}{\to}& D \\ \downarrow^{d_1} \\ D } (see also comma category for more on this perspective). So in fact such “categories of generalized elements” are precisely the generalized universal bundles in the 1-categorical context. And both are really fundamentally to be thought of as intermediate steps in the computation of weak pullbacks, as described now. The above allows to generalize the notion of category of generalized elements a bit further to that of generalized elements of functors with values in $D$: let $F:C\to D$ be a functor with codomain our category $D$ with point ${\mathrm{pt}}_{D}$. The category of generalized elements of $F$ is the pullback ${\mathrm{El}}_{{\mathrm{pt}}_{D}}\left(F\right):=C{×}_{D}\left({\mathrm{pt}}_{D}/D\right)$ $\begin{array}{ccc}{\mathrm{El}}_{{\mathrm{pt}}_{D}}\left(F\right)& \to & \left({\mathrm{pt}}_{D}/D\right)\\ ↓& & ↓\\ C& \stackrel{F}{\to }& D\end{array}\phantom{\rule{thinmathspace}{0ex}}.$\array{ El_{pt_D}(F) &\to& (pt_D/D) \\ \downarrow && \downarrow \\ C &\stackrel{F}{\to}& D } \,. This means: • the objects of ${\mathrm{El}}_{{\mathrm{pt}}_{D}}\left(F\right)$ are all the generalized elements ${\delta }_{c}:{\mathrm{pt}}_{D}\to F\left(c\right)$ for all $c\in C$; • a morphism ${\delta }_{c}\to {\delta }_{c\prime }$ between two such generalized elements is a commuting triangle $\begin{array}{ccc}& & {\mathrm{pt}}_{D}\\ & {}^{{\delta }_{c}}↙& & {↘}^{{\delta }_{c\prime }}\\ F\left(c\right)& & \stackrel{F\left(f\right)}{\to }& & F\left(c\prime \right)\end{array}\phantom{\rule{thinmathspace}{0ex}}.$\array{ && pt_D \\ & {}^{\delta_c}\swarrow && \searrow^{\delta_{c'}} \\ F(c) && \stackrel{F(f)}{\to} && F(c') } \,. for all morphisms $f:c\to c\prime$ in $C$. # Examples ## ordinary category of elements For $D=$ Set and ${\mathrm{pt}}_{\mathrm{Set}}=*$ the above reproduces the notion of category of elements# of a presheaf. ## Action Groupoid Given a vector space $V$, a group $G$ recall that a representation of $G$ on $V$ $V•⟲G$V\bullet\righttoleftarrow G is canonically identified with a functor $\rho :BG\to \mathrm{Vect}\phantom{\rule{thinmathspace}{0ex}}.$\rho : \mathbf{B} G \to Vect \,. $\rho :\left(*\stackrel{g}{\to }*\right)↦\left(V\stackrel{\rho \left(g\right)}{\to }V\right)\phantom{\rule{thinmathspace}{0ex}}.$\rho : ({*} \stackrel{g}{\to} {*}) \mapsto (V \stackrel{\rho(g)}{\to} V) \,. The category Vect of $k$-vector spaces for some field $k$ has a standard point ${\mathrm{pt}}_{\mathrm{Vect}}\to \mathrm{Vect}$, namely the field $k$ itself, regarded as the canonical 1-dimensional $k$-vector space over itself. The corresponding over category of generalized elements of Vect $\left({\mathrm{pt}}_{\mathrm{Vect}}/\mathrm{Vect}\right)$ has as objects pointed vector spaces and as morphisms linear maps of pointed vector spaces that map the chosen vectors to each other. Now, as described in detail at action groupoid the category of generalized elements of the representation $\rho$ is the action groupoid $V//G$ of $G$ acting on $V$ $\begin{array}{ccc}V//G& \to & {\mathrm{Vect}}_{*}\\ ↓& & ↓\\ BG& \to & \mathrm{Vect}\end{array}\phantom{\rule{thinmathspace}{0ex}}.$\array{ V//G &\to& Vect_* \\ \downarrow && \downarrow \\ \mathbf{B} G &\to& Vect } \,. As described there, $V//G\to BG$ is the groupoid incarnation of the vector bundle that is associated via $\rho$ to the universal $G$-bundle on the classifying space $BG$. ## References Revised on March 27, 2010 23:57:40 by Eric Forgy (119.247.164.191)
2013-05-25 14:54:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 68, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997571110725403, "perplexity": 645.729896141084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705956734/warc/CC-MAIN-20130516120556-00059-ip-10-60-113-184.ec2.internal.warc.gz"}
https://maslensbookkeeping.com.au/0qabfw/d7f727-connected-components-topology
### connected components topology [ [ and 0 X to , then ϵ {\displaystyle U\cup V=S\cup T} Its connected components are singletons, which are not open. X V 6. The set Cxis called the connected component of x. or ) [ , , , y {\displaystyle Y} S S {\displaystyle x} ) . It is an example of a space which is not connected. = Finally, if ( a z : . S Whether the empty space can be considered connected is a moot point.. Portions of this entry contributed by Todd ) {\displaystyle \gamma *rho(1)=z} {\displaystyle \gamma (b)=y} {\displaystyle S\subseteq X} {\displaystyle x_{0}\in S} {\displaystyle y} , are both open with respect to the subspace topology on In the following you may use basic properties of connected sets and continuous functions. x X c , ρ ) . ( = , and and is connected; once this is proven, {\displaystyle z\in X\setminus S} ) U This problem has been solved! ∩ is clopen (ie. x Unlimited random practice problems and answers with built-in Step-by-step solutions. Lemma 25.A. The are called the = {\displaystyle S} S ( V X b or γ {\displaystyle S=X} x η = ] γ V Each path component lies within a component. {\displaystyle B_{\epsilon }(\eta )\subseteq V} is called locally connected if and only if for {\displaystyle W} V , d such that {\displaystyle U} U X V and V (returned as lists of vertex indices) or ConnectedGraphComponents[g] This space is connected because it is the union of a path-connected set and a limit point. ( {\displaystyle \rho (d)=z} γ {\displaystyle y\in S} ( ρ {\displaystyle \Box }. V X S Then ) Example (the closed unit interval is connected): Set ⊆ ϵ y is not connected, a contradiction. {\displaystyle V=W\cap (S\cup T)} Then = {\displaystyle \gamma } : . are open and {\displaystyle x\in X} To get an example where connected components are not open, just take an infinite product with the product topology. ∈ . {\displaystyle O\cap W\cap f(X)} and U of all pathwise-connected to . {\displaystyle \gamma (a)=x} such that would contain a point be a topological space which is locally path-connected. V Hints help you try the next step on your own. = X {\displaystyle S\notin \{\emptyset ,X\}} 0 X Finding connected components for an undirected graph is an easier task. {\displaystyle U\cap V=\emptyset } ( are open in ] z with the topology induced by the Euclidean topology on {\displaystyle O} 1 U 2. x S X , then by local path-connectedness we may pick a path-connected open neighbourhood ( ∩ ) Examples Basic examples. X 1 ∩ − If you consider a set of persons, they are not organized a priori. INPUT: mg (NetworkX graph) - NetworkX Graph or MultiGraph that represents a pandapower network.. bus (integer) - Index of the bus at which the search for connected components originates. of a topological space is called connected if and only if it is connected with respect to the subspace topology. ) ∩ {\displaystyle \rho :[c,d]\to X} ( of A 1) Initialize all … be a point. . which is path-connected. = U is the disjoint union of two nontrivial closed subsets, contradiction. V W γ Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice. X : Previous question Next question = X Then S , U X is connected. {\displaystyle \rho :[c,d]\to X} S is a continuous image of the closed unit interval ∅ and Then {\displaystyle X} A Set {\displaystyle X} {\displaystyle \gamma :[a,b]\to X} X α ∖ x d That is, a space is path-connected if and only if between any two points, there is a path. Let Proof: First note that path-connected spaces are connected. : ∩ 1 R Subspace topology necessarily correspond to the layout of the other topological properties that is, space. Limit point of, where is partitioned by the equivalence class of, where is partitioned by the classes. Unvisited vertex, and S ∉ { ∅, X } is clopen ( ie might. Application: it proves that manifolds are connected available as GraphData [ g ! Term topology '' refers to the fact that path-connectedness implies connectedness ): let a... To noise, the isovalue might be erroneously exceeded for just a few components that path... Two points, there is no way to write with and disjoint open subsets not the same time.... Being in the same component is an easier task tool for creating Demonstrations and anything technical distinguish... In one large connected component a topological space may be decomposed into disjoint maximal subset. Components due by Tuesday, Aug 20, 2019 of a graph are the connected components, or connected... Xsuch that A¯âˆ©B6= âˆ, then each device must be connected if there is a path 08:36. The characteristics of bus topology and star topology ( 4 ) suppose a, B⊂Xare non-empty subsets... Component containing is the set of persons, they are not open, take!: it proves that manifolds are connected properties of connected component or at most a few pixels a few.!, just take an infinite product with the product topology where the are.. A root node and all other nodes are connected to every other device on the network you. Many small disconnected regions arise all open and closed at the same as connected into connected are. Are homeomorphic, connected components for an undirected graph is an example where connected components that A¯âˆ©B6= ∠then. U } path-connected if and then be considered connected is a path Weisstein, Eric W. component! Of path-connectedness means that the link only carries data for the two connected devices only be topological! On the network through a dedicated point-to-point link are each connected component Analysis typical! What I mean by social network { R } } and anything technical Cxof Xand this subset is.. Then γ ∗ ρ { \displaystyle \eta \in V } if necessary that 0 ∈ U \displaystyle. The term is typically used for non-empty topological spaces, pathwise-connected is connected. If necessary that 0 ∈ U { \displaystyle X } connected components topology continuous then each device is connected it! Shape or structure are connected, B⊂Xare non-empty connected subsets of X problem... Because it is path-connected if and then { \emptyset, X\ } } the user is interested in one connected., that’s not what I mean by social network since a function continuous when restricted to two closed subsets X! Open and closed ), and let X { \displaystyle X } be a topological space, which are organized... This shape does not necessarily correspond to the layout of connected sets and continuous functions is, a space is... ), and let X { \displaystyle X } be a topological.! Of a space X is locally path connected a number of graphs are available as [. For an undirected graph is an example of a space which can not be up! Of components and components are connected components topology open problems step-by-step from beginning to end available GraphData. Typically used for non-empty topological spaces decompose into connected components an infinite with. Pathwise-Connected to 's virtual shape or structure the equivalence classes are the connected components by. All strongly connected components are equal provided that X is said to be disconnected if it is,... \Displaystyle S\subseteq X } be a topological space } be a topological space decomposes into a union. An infimum, say η ∈ V { \displaystyle X } be a topological space and ∈! Still have the same number of components and components are equal provided X. Lemma 17.A deform the space in any continuous reversible manner and you still have same... Than full mesh topology through X equal provided that X is said to be disconnected if it the. Xpassing through X not organized a priori homework problems step-by-step from beginning to end Lemma 17.A [,. \Displaystyle U, V { \displaystyle \gamma * \rho } is connected because it is path-connected October... Used for non-empty topological spaces decompose into connected components correspond 1-1 noise, the isovalue be. Is typically used for non-empty topological spaces, pathwise-connected is not exactly the most intuitive connectedness:! Sets and continuous functions of bus topology and star topology ( 4 ) suppose a, non-empty... With and disjoint open subsets have any of the devices on a network 's virtual shape or structure is because... Used for non-empty topological spaces } if necessary that 0 connected components topology U { \displaystyle }... Hence, being in the network the same number of components and are... Manifolds are connected to a full meshed backbone components correspond 1-1 are each connected component ( )! Are structured by their relations, like friendship closed ), and we get all strongly components. A root node and all other nodes are connected if it is an easier.! U, V { \displaystyle S\notin \ { \emptyset, X\ } } to end an equivalence relation of.! C is a moot point a number of pieces '' then γ ∗ ρ { \displaystyle }. Creating Demonstrations and anything technical user is interested in one large connected component of connected components topology! Disconnected regions arise component of Xpassing through X is no way to write with and disjoint open subsets data. For the two connected devices on the network, if and only if it is example! Carries data for the two connected devices on the network through a dedicated point-to-point link and if... On a network 's virtual shape or structure R } } ( )! The pathwise-connected component containing is the set of such that there is a connected space need not\ have of! And you still have the same time root node and all other nodes are connected every x∈Xis... 5.7.4. reference let be a path-connected topological space decomposes into a disjoint union where the are.. { R } connected components topology the connected component or at most a few.! Of the devices on the network then each component of X lie in a component of space... ∅, X } { \displaystyle X } be a topological space let! And so C is a connected component Analysis a typical problem when are. Component containing is the union of two disjoint non-empty connected components topology sets other nodes connected! We get all strongly connected components due by Tuesday, Aug 20, 2019 speaking in... Is locally path connected or structure other nodes are connected if there is no way to write with and open. Product with the product topology note that the path remark 5.7.4. reference be... Layout of the other topological properties we have a partial converse to the fact that path-connectedness implies connectedness: be! Called the connected component. then A∪Bis connected in X then that ⊆..., connected, open and closed at the same component is an equivalence,! Necessarily correspond to the actual physical layout of the other topological properties that is used to distinguish topological spaces where! \ { \emptyset, X\ } } devices in the same time that path-connectedness implies connectedness:. Let X { \displaystyle X } be a topological space X is to. Partial converse to the fact that path-connectedness implies connectedness ): let X \displaystyle... { R } } Theorem 25.1, then each component of X. problem... Term topology '' refers to the layout of the other topological properties that is, a space X also! Root node and all other nodes are connected a topology as a network { \emptyset, X\ } } topology. Few components necessarily correspond to the fact that path-connectedness implies connectedness ): let a... Be erroneously exceeded for just a few pixels to write with and disjoint open.. Closed at the same component is an equivalence relation of path-connectedness components of a topology a! Provided that X is said to be disconnected if it is the set of subgraphs... The pathwise-connected component containing is the set of persons, they are open... Since connected subsets of Xsuch that A¯âˆ©B6= âˆ, then A∪Bis connected X... Where is partitioned by the equivalence classes are the set of all pathwise-connected to contributed by Todd Rowland Todd! Each device is connected to it forming a hierarchy } has an infimum, η... The most intuitive ( path-connectedness implies connectedness ): let be a space! Combines the characteristics of bus topology and star topology has only finitely many connected components # 1 tool for Demonstrations! A few pixels connectedness is not exactly the most intuitive, say η ∈ {... Device is connected because it is the union of a space which can not split... ) every point x∈Xis contained in a component of X ), and let {! Of X suppose by renaming U, V { \displaystyle X } is continuous shape does not correspond! Any of the devices on a network since the components are disjoint by 25.1... The characteristics of bus topology and star topology ( 4 ) suppose a, B⊂Xare non-empty connected of... X be a topological space decomposes into its connected components for an undirected graph is an example connected. Conclude since a function continuous when restricted to two closed subsets of Xsuch that A¯âˆ©B6= âˆ, then A∪Bis in... And so C is closed topology '' refers to the actual physical layout of connected devices only a!
2021-09-23 03:08:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9573032259941101, "perplexity": 1098.410910468231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00339.warc.gz"}
http://mathoverflow.net/questions/28235/variants-of-gronwalls-theorem?sort=newest
Variants of Grönwall's theorem Except the original Grönwall's theorem that $$\limsup_{n \to \infty} \frac{\sigma(n)}{n \log \log n} = e^{\gamma},$$ and the two variants $$\limsup_{\begin{smallmatrix} n\to\infty\cr n\ \text{is square free}\end{smallmatrix}} \frac{\sigma(n)}{n \log \log n} = \frac{6e^{\gamma}}{\pi^2}$$ and $$\limsup_{\begin{smallmatrix} n\to\infty\cr n\ \text{is odd}\end{smallmatrix}} \frac{\sigma(n)}{n \log \log n} = \frac{e^{\gamma}}{2}$$ that have been proven here, are there any other similar statements known? - Do you mean "similar statements" for the sum of divisors function $\sigma(n)=\sum_{d\mid n}d$? Because there are plenty of other multiplicative function for which similar asymptotics are known. –  Wadim Zudilin Jun 15 '10 at 11:29 "that have been proven here," Where? –  Andres Caicedo Jun 15 '10 at 13:47 I fixed the typos. Theorem 9 in the cited preprint contains 5 more similar asymptotics. I wonder what is wanted. –  Wadim Zudilin Jun 15 '10 at 14:50 Maybe a statement with $\limsup_{\begin{smallmatrix}n\to \infty \cr n\in S\end{smallmatrix}}(\cdots)=d_S e^{\gamma}$. Where $d_S$ is the density of $S$. –  Gjergji Zaimi Jun 15 '10 at 15:02 I'm sorry for not being clear enough, it's my first question here, though. I mean similar statments for the $\sigma(n)$ function, not necessarily asymptotics, but anything that involves limit points of the function $\frac{\sigma(n)}{n \log \log n}$. For example, is there an important sequence $a_n$ such that $\frac{\sigma{a_n}}{n \log \log n}$ converges, besides the sequence of primes? A result that establishes the connection between the density and the limit superior? Etc. Nothing particular. –  nikmil Jun 15 '10 at 22:53 show 1 more comment where the limit of the Choie, Lichiardopol, Moree and Sole's $$f_1(a_n) = \frac{\sigma(a_n)}{a_n \log \log a_n}$$ is the same $$e^\gamma .$$ That is, the limit for these numbers is the lim sup for all numbers. These are more natural than people realize. There is a simple recipe that takes some $\epsilon > 0$ and gives an explicit factorization for the best value $n_\epsilon;$ see page 7 in the Briggs pdf "Notes on the Riemann hypothesis and abundant numbers" at the bottom of the Wikipedia entry. The exponent of a prime $p$ in the factorization of $n_\epsilon$ is $$\left\lfloor \log_p \left( \frac{p^{1 + \epsilon} - 1}{p^\epsilon -1} \right) \right\rfloor - 1$$
2013-12-11 11:59:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9115662574768066, "perplexity": 439.75713101970143}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164035500/warc/CC-MAIN-20131204133355-00053-ip-10-33-133-15.ec2.internal.warc.gz"}
http://www.r-bloggers.com/classification-using-neural-net-in-r/
# Classification using neural net in r October 9, 2013 By (This article was first published on Analytics , Education , Campus and beyond, and kindly contributed to R-bloggers) This is mostly for my students and myself for future reference. Classification is a supervised task , where we need preclassified data and then on new data , I can predict. Generally we holdout a % from the data available for testing and we call them training and testing data respectively.  So it's like this , if we know which emails are spam , then only using classification we can predict the emails as spam. I used the dataset http://archive.ics.uci.edu/ml/datasets/seeds# .  The data set has 7 real valued attributes and 1 for predicting .  http://www.jeffheaton.com/2013/06/basic-classification-in-r-neural-networks-and-support-vector-machines/ has influenced many of the writing , probably I am making it more obvious. The library to be used is library(nnet) , below are the list of commands for your reference 2.       Setting training set index ,  210 is the dataset size, 147 is 70 % of that    seedstrain<- sample(1:210,147) 3.       Setting test set index    seedstest <- setdiff(1:210,seedstrain) 4.       Normalize the value to be predicted , use that attribute of the dataset , that you want to predict    ideal <- class.ind(seeds$Class) 5. Train the model, -8 because you want to leave out the class attribute , the dataset had a total of 8 attributes with the last one as the predicted one seedsANN = nnet(irisdata[seedstrain,-8], ideal[seedstrain,], size=10, softmax=TRUE) 6. Predict on testset predict(seedsANN, seeds[seedstrain,-8], type="class") 7. Calculate Classification accuracy table(predict(seedsANN, seeds[seedstest,-8], type="class"),seeds[seedstest,]$Class) Happy Coding !
2014-10-20 22:55:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4531612694263458, "perplexity": 2382.3817717815336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443451.12/warc/CC-MAIN-20141017005723-00256-ip-10-16-133-185.ec2.internal.warc.gz"}
http://vyturelis.com/sign-convention-for-spherical-mirrors-and-lenses.htm
Sign Convention For Spherical Mirrors And Lenses 24 Feb 2011. on 24/2/11. The following table gives the sign convention for lenses: . table showing sign convention of concave and convex mirrors. hope u . 27 Feb 2011 . sign convenctions in spherical mirror and lens (concave & convex). Asked by . Sign Conventions for spherical Mirrors: I. Objects are always . 13 Jan 2013 . how is sign convention for spherical mirror different form sign of sphirical lenses. Asked by . Sign convention for the lens and mirror are same. Table 1: Sign convention for spherical mirrors and lenses. (B) Spherical Lenses. The shape of biconvex and biconcave spherical lenses are shown in Figures 4 . A spherical mirror consists of a small section of the surface of a sphere with one side of ... to all thin lenses, through the use of the following sign conventions: . Being able to describe images for concave and convex mirrors. 3. Being able to .. Before we can do some examples, we have to follow a sign convention. 6 Aug 2010. for optics, What is the sign conventions for concave, convex mirrors and. . image. The following table gives the sign convention for lenses: . Plane Mirrors. • Spherical Mirrors. • Lenses. • The Lens Maker's Equation. • Lens Aberrations . Sign conventions for spherical mirrors are given on the next slide. Sign Convention for Spherical Mirrors and Thin Lenses. Applies to: Mirror and Thin Lens Equation: 1/do + 1/di = 1/f. Magnification Equation: Image height/ Object . Images from Spherical Mirrors . When the parallel rays reach a spherical mirror, those near the central axis are . Summary of Sign Conventions for Lenses . One advantage that mirror optics have over lens optics is that mirrors do not . A convex mirror, fish eye mirror or diverging mirror, is a curved mirror in which the .. The sign convention used here is that the focal length is positive for concave . 1.3.7 Sign convention for mirrors. To describe imaging . (the "object"). A spherical mirror generates a stretched . 1.4 Lenses. 1.4.1 Concave and convex lenses . Raytracing Rules and Sign Conventions for Spherical Mirrors. 1. A ray that . conventions: so > 0 if object is in front of the lens. so < 0 if object is behind the lens. As in the case of lenses, the cartesian sign convention is used here, and that is the origin of the negative sign above. The radius r for a concave mirror is a . draw lens on a paper draw x-axis and y- axis through center of mirror or lens now according to coordinate system make center of mirror or lens your . Confused on mirror/lens sign conventions Introductory Physics . I reffered 5 or six books which says that focal length of convex mirror is . TABLE 41.1 Sign Convention for Spherical Mirrors and. Lenses. Quantity Conditions Sign. Focal lettglhf Cnncavc mirror +. Convex mirror —. Convex lens + . Image formation in a two-lens system. 4 Reflection at spherical mirrors. 4.1. A spherical convex mirror. 4.2. A spherical concave mirror. 4.3. The sign convention . For lenses, light converges to a point for a convex lens. A convex mirror diverges light, as does a concave lens. . The sign convention is just a little different. Summary of Sign conventions: Refractive spherical surfaces: Last time. Thin spherical lenses: Last time. Spherical Mirrors: Spherical Mirrors – Ray tracing . Outline. Reflection. Plane Mirrors. Concave/Convex Mirrors. Refraction. Lenses. Dispersion .. mirror. Sign Convention: the focal length is negative if the . 29 Jun 2012 . To become familiar with sign conventions and why they're used. . Move your pointer over each of the five lenses and mirrors. The info box will . 2 Nov 2012 . Explain in breif the sign conventions for lens n mirrors . is used for measuring various distances in the ray diagrams of spherical mirrors: . 23.3 Convex Mirrors (diverging mirrors) and Sign Conventions . Note: a convex- concave lenses is sometimes referred to as a meniscus. It is the shape used for . Flat mirror; Spherical mirrors. Find the image, method 2, the mirror/lens equation . These sign conventions apply to both concave and convex mirrors; The focal . Mirrors, Images formed by plane and spherical mirrors. Real-is-positive sign convention. Simple exercises on mirrors by ray tracing or use of formula. Practical . The following sign convention is used for measuring various distances in the ray diagrams of spherical mirrors: Object is always placed to the left of mirror . 18 Nov 2012 21 Apr 2003 . this consistently we will need to define certain sign conventions which . between convex and concave shapes. Introduction. Mirrors. Lenses . Sign convention by reflection by spherical mirrors (. 0 ). Share · Science Class X > Light . optics of lenses and mirrors (1). Physics: optics of lenses and . . types of lenses and mirrors, but you need to know the sign convention: virtual . Diverging types, f negative, concave lens, convex mirror (in the red means a . In particular - convex vs concave, and mirrors vs lenses.. Answer 1 of 2: There are several different sign conventions. The one I am accustomed . 4 Dec 2011 . Sign Convention for Spherical Mirrors and Thin Lenses Applies to: Mirror and Thin Lens Equation: 1/do + 1/di = 1/f Magnification Equation: . Flat Mirrors; Spherical Mirrors; Images Formed by Refraction; Thin Lenses; Optical . A real image is formed by a concave mirror . Sign Conventions for Mirrors . Sign Conventions Used for Mirrors, Refracting Surfaces, and Lenses. . We study optical surfaces, i.e., mirrors, curved refracting surfaces, and lenses to understand how and why they produce . A spherical mirror [which has $f_1 = f_2 \equiv f$ ] . 11 Dec 2010 . + for a converging lens + for concave mirrors. - for a diverging lens - for convex mirrors. ) o d Object . Sign Conventions for spherical Mirrors: . Introduction: In this lab we will investigate spherical mirrors and lenses. . will need to understand the mirror/lens equation, the sign conventions for mirrors and . Sign Convention for Reflection by Spherical Mirrors . Convex lens converges light rays, hence convex lenses are called converging lenses. Similarly, a double . . Convex Mirror. Sign convention remains same . Sign Convention for Spherical Mirrors . Refraction is responsible for image formation by lenses and the eye. Some Traditional Conventions for the Mirror, Interface, and Lens Equations. The following . Shallow Spherical Mirror Shallow Spherical Interface Thin Lens (with spherical sides). (reflection) . For mirrors and lenses, R has the same sign as f. Understand image formation by plane or spherical mirrors. Understand image formation by converging or diverging lenses .. Sign Convention for Lenses . . Sign Convention For Reflection By Spherical Mirrors » We follow a specific sign convention in order to use the . “Why image formed by convex lens is virtual ? A Reflection and Focal Point of Concave Mirror (Report errors in measured . by evaluating (1/so+1/si) and -2/R and 1/f separately with their proper sign convention! . A lens is a combination of two spherical interfaces, as treated in II. D. If the . 31 Oct 2012 . 9011041155 / 9011031155 Sign convention for lens 1. The sign conventions for lens are similar to the sign conventions of spherical mirror. formation of images by mirrors and lenses, bending of light by a. medium, twinkling of .. 9) New Cartesian sign convention for spherical mirrors :-. i) The object is . Notation for Mirrors and Lenses • The object distance is the distance from the object to the . Spherical Mirrors • A spherical mirror has the shape of a segment of a sphere • A .. Sign Conventions for Mirrors Image is inverted Image is upright . By experimenting with a mirror and lens, you will gain an understanding of optics which will . Spherical mirrors are mirrors whose shape is part of a sphere. . the equations given above still may be used with some changes in sign convention. For lenses, we follow sign conventions, similar to the one used for spherical mirrors. We apply the rules for signs of distances, except that all measurements are . Sitemap
2014-03-09 02:48:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8335031270980835, "perplexity": 2034.663469345452}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999670852/warc/CC-MAIN-20140305060750-00084-ip-10-183-142-35.ec2.internal.warc.gz"}
http://www.ds100.org/sp18/assets/lectures/lec25/rabbits.slides.html
# P-values, Probability, Priors, Rabbits, Quantifauxcation, and Cargo-Cult Statistics¶ ## Philip B. Stark, www.stat.berkeley.edu/~stark, @philipbstark Department of Statistics, University of California, Berkeley¶ If we are uncritical we shall always find what we want: we shall look for, and find, confirmations, and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain what appears to be overwhelming evidence in favor of a theory which, if approached critically, would have been refuted. —Karl Popper The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data. —J.W. Tukey It is inappropriate to be concerned with mice when there are tigers abroad. — George Box ## Where does probability come from?¶ • Rates are not probabilities • Not all uncertainty is probability. Haphazard/random/unknown • A coefficient in a model may not be a "real" probability, even if it's called "probability" • A $P$-value may not be a relevant probability, even though it is a "probability" ### What is Probability?¶ #### Axiomatic aspect and philosophical aspect.¶ • Kolmogorov's axioms: • "just math" • triple $(S, \Omega, P)$ • $S$ a set • $\Omega$ a sigma-algebra on $S$ • $P$ a non-negative countably additive measure with total mass 1 • Philosophical theory that ties the math to the world • What does probability mean? • Standard theories • Equally likely outcomes • Frequency theory • Subjective theory • Probability models as empirical commitments • Probability as metaphor ### How does probability enter a scientific problem?¶ • underlying phenomenon is random (radioactive decay) • deliberate randomization (randomized experiments, random sampling) • subjective probability & "pistimetry" • posterior distributions require prior distributions • prior generally matters but rarely given attention (Freedman) • elicitation issues • arguments from consistency, "Dutch book," ... • invented model that's supposed to describe the phenomenon • in what sense? • to what level of accuracy? • description v. prediction v. predicting effect of intervention • testable to desired level of accuracy? • metaphor: phenomenon behaves "as if random" ### Two very different situations:¶ 1. Scientist creates randomness by taking a random sample, assigning subjects at random to treatment or control, etc. 2. Scientist invents (assumes) a probability model for data the world gives. (1) allows sound inferences. (2) is only as good as the assumptions. #### Gotta check the assumptions against the world¶ • Empirical support? • Plausible? • Iffy? • Absurd? #### Cargo-Cult Science: Feynman¶ In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to imitate things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he's the controller—and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land. Now it behooves me, of course, to tell you what they’re missing. But it would he just about as difficult to explain to the South Sea Islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in Cargo Cult Science. That is the idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated. Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition. In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another. [] We've learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature's phenomena will agree or they'll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven't tried to be very careful in this kind of work. And it's this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science. The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that. —Richard Feynman, 1974. http://calteches.library.caltech.edu/51/2/CargoCult.htm ## What's a P-value?¶ • A probability • But of what? ## $P$-values¶ • Observe data $X \sim \mathbb{P}$. • Null hypothesis $\mathbb{P} = \mathbb{P}_0$ (or more generally, $\mathbb{P} \in \mathcal{P}_0$). • Nested (monotone) hypothesis tests: • $\{A_\alpha : \alpha \in (0, 1] \}$ • $\mathbb{P}_0 \{ X \notin A_\alpha \} \le \alpha$ (or more generally, $\mathbb{P} \{ X \notin A_\alpha \} \le \alpha, \; \forall \mathbb{P} \in \mathcal{P}_0$) • $A_\alpha \subset A_\beta$ if $\beta < \alpha$ (Can always re-define $A_\alpha \leftarrow \cup_{\beta \ge \alpha } A_\beta$) • If we observe $X = x$, $P$-value is $\sup \{ \alpha: x \in A_\alpha \}$. ## C.f. informal definition in terms of "extreme" values?¶ • What does "more extreme" mean? ## It's all about the null hypothesis¶ • P-values measure the strength of the evidence against the null: smaller values, stronger evidence. • If the $P$-value equals $p$, either: 1. the null hypothesis is false 2. an event occurred that had probability no greater than $p$ • Alternative hypothesis matters for power, but not for level. • Rejecting the null is not evidence for the alternative: it's evidence against the null. • If the null is unreasonable, no surprise if we reject it. Null needs to make sense. • Unreasonable null is not support for the alternative. The Rabbit Axioms 1. For the number of rabbits in a closed system to increase, the system must contain at least two rabbits. 2. No negative rabbits. Freedman's Rabbit-Hat Theorem You cannot pull a rabbit from a hat unless at least one rabbit has previously been placed in the hat. Corollary You cannot "borrow" a rabbit from an empty hat, even with a binding promise to return the rabbit later. ### Applications of the Rabbit-Hat Theorem¶ • Probablility doesn't come out of a calculation unless probability went into the calculation. • Can't turn a rate into a probability without assuming the phenomenon is random in the first place. • Can't conclude that a process is random without making assumptions that amount to assuming that the process is random. (Something has to put the randomness rabbit into the hat.) • Testing whether the process appears to be random using the assumption that it is random cannot prove that it is random. (You can't borrow a rabbit from an empty hat.) • Posterior distributions don't exist without prior distributions. ## When did the rabbit enter the hat?¶ Anytime you see a $P$-value, you should ask what the null hypothesis is. E.g., $\mu = 0$ is not the whole null hypothesis: • null has to completely specify (a family of possible) probability distributions of the data • otherwise, can't set acceptance regions $\{A_\alpha\}$. Anytime you see a posterior probability, you should ask what the prior was. • no posterior distribution without a prior distribution. • prior usually matters, despite claims about asymptopic results Anytime you see a confidence interval or standard error, you should ask what was random. • no confidence intervals or standard errors without either random sampling or stochastic errors. • box models Quantifauxcation Assign a meaningless number, then pretend that since it's quantitative, it's meaningful. Many P-values and other "probabilities" and most cost-benefit analyses are quantifauxcation. ### Cargo-cult statistics¶ Usually involves some combination of data, pure invention, ad hoc models, inappropriate statistics, and logical lacunae. ## Example: The 2-sample problem¶ • Randomization model: two lists. Are they "different"? • $t$-test. Assumptions? • Permutation distribution ## Example: Effect of treatment in a randomized controlled experiment¶ 11 pairs of rats, each pair from the same litter. Randomly—by coin tosses—put one of each pair into "enriched" environment; other sib gets "normal" environment. After 65 days, measure cortical mass (mg). enriched 689 656 668 660 679 663 664 647 694 633 653 impoverished 657 623 652 654 658 646 600 640 605 635 642 difference 32 33 16 6 21 17 64 7 89 -2 11 How should we analyze the data? Cartoon of Rosenzweig, M.R., E.L. Bennet, and M.C. Diamond, 1972. Brain changes in response to experience, Scientific American, 226, 22–29 report an experiment in which 11 triples of male rats, each triple from the same litter, were assigned at random to three different environments, "enriched" (E), standard, and "impoverished." See also Bennett et al., 1969. ### Informal Hypotheses¶ Null hypothesis: treatment has "no effect." Alternative hypothesis: treatment increases cortical mass. Suggests 1-sided test for an increase. ### Test contenders¶ • 2-sample Student $t$-test: $$\frac{\mbox{mean(treatment) - mean(control)}} {\mbox{pooled estimate of SD of difference of means}}$$ • 1-sample Student $t$-test on the differences: $$\frac{\mbox{mean(differences)}}{\mbox{SD(differences)}/\sqrt{11}}$$ Better, since littermates are presumably more homogeneous. • Permutation test using $t$-statistic of differences: same statistic, different way to calculate $P$-value. ### Assumptions of the tests¶ 1. 2-sample $t$-test: • masses are iid sample from normal distribution, same unknown variance, same unknown mean. • Tests weak null hypothesis (plus normality, independence, non-interference, etc.). 2. 1-sample $t$-test on the differences: • mass differences are iid sample from normal distribution, unknown variance, zero mean. • Tests weak null hypothesis (plus normality, independence, non-interference, etc.) 3. Permutation test: • Randomization fair, independent across pairs. • Tests strong null hypothesis. Assumptions of the permutation test are true by design: That's how treatment was assigned. If we reject the null for the 1-sample $t$-test, what have we learned? That the data are not (statistically) consistent with the assumption that they are an IID random sample from a normal distribution with mean 0. So what? We never thought they were. This is a straw man null hypothesis. ### Making sense of probabilities in applied problems¶ • Reflexive way to try to represent uncertainty (post-WWII phenomenon) • Not all uncertainty can be represented by a probability • "Aleatory" versus "Epistemic" • Aleatory • Canonical examples: coin toss, die roll, lotto, roulette • under some circumstances, behave "as if" random (but not perfectly) • Epistemic: stuff we don't know • "Pistimetry": measuring beliefs • Le Cam's (1977) three examples of uncertainty: • did Eudoxus have larger feet than Euclid? (ignorance) • will a fair coin land "heads" the next time it is tossed? (randomness) • is the $10^{137}+1$st digit of $\pi$ a 7? (limited resources) • Bayesian way of combining aleatory variability epistemic uncertainty puts beliefs on a par with an unbiased physical measurement w/ known uncertainty. • Claims that by introspection, can estimate without bias, with known accuracy—as if one's brain were unbiased instrument with known accuracy • Bacon's triumph over Aristotle should put this to rest, but empirically: • people are bad at making even rough quantitative estimates • quantitative estimates are usually biased • bias can be manipulated by anchoring, priming, etc. • people are bad at judging weights in their hands: biased by shape & density • people are bad at judging when something is random • people are overconfident in their estimates and predictions • confidence unconnected to actual accuracy. • anchoring effects entire disciplines (e.g., Millikan, c, Fe in spinach) • what if I don't trust your internal scale, or your assessment of its accuracy? • same observations that are factored in as "data" are also used to form beliefs: the "measurements" made by introspection are not independent of the data ### LeCam's coin-tossing example¶ Toss a fair coin $k$ times independently; $X$ is the number of heads; $\theta$ is the chance of heads. $$\mathbb{P}(X=k || \theta) = {n \choose k} \theta^k (1-\theta)^{n-k}.$$ Suppose prior is of the form $$\pi(\theta) = \frac{\theta^\alpha (1-\theta)^\beta}{\int t^\alpha (1-t)^\beta dt}.$$ After tossing the coin, the posterior distribution will be of the same form. Suppose it turns out to be $$p(\theta) = C \theta^{100}(1-\theta)^{100}.$$ According to Bayesian inference, that is everything there is to know about $\theta$ based on prior beliefs and the experiment. But doesn't it matter whether this is simply a prior, the posterior after 5 tosses, or the posterior after 200 tosses? Bayesian formalism does not distinguish between these cases. ### Rates versus probabilities¶ • In a series of trials, if each trial has the same probability $p$ of success, and if the trials are independent, then the rate of successes converges (in probability) to $p$. Law of Large Numbers • If a finite series of trials has an empirical rate $p$ of success, that says nothing about whether the trials are random. • If the trials are random and have the same chance of success, the empirical rate is an estimate of $p$. • If the trials are random and have the same chance of success and the dependence of the trials is known (e.g., the trials are independent), can quantify the uncertainty of the estimate. ### Thought experiments¶ You are one of a group of 100 people, of whom one will die in the next year. What's the chance it is you? You are one of a group of 100 people, of whom one is named "Philip." What's the chance it is you? Why does the first invite an answer, and the second not? Ignorance ≠ Randomness ### Cargo Cult Confidence Intervals¶ • Have a collection of numbers, e.g., MME climate model predictions of warming • Take mean and standard deviation. • Report mean as the estimate; construct a confidence interval or "probability" statement from the results, generally using Gaussian critical values • IPCC does this, as do many others. #### What's wrong with it?¶ • No random sample; no stochastic errors. • Even if there were a random sample, what justifies using normal theory? • Even if random and normal, misinterprets confidence as probability. Garbled; something like Fisher's fiducial inference • Ignores known errors in physical approximations • Ultimately, quantifauxcation. ### Random/haphazard/unpredictable/unknown¶ • Consider taking a sample of soup to tell whether it is too salty. • Stir the soup well, then take a tablespoon: random sample • Stick in a tablespoon without looking: haphazard sample • Tendency to treat haphazard as random • random requires deliberate, precise action • haphazard is jusy sloppy • Notions like probability, p-value, confidence intervals, etc., apply only if the sample is random (or for some kinds of measurement errors) • Don't apply to samples of convenience, haphazard samples, etc. • Don't apply to populations. ## Two brief examples¶ • Avian / wind-turbine interactions • Earthquake probabilities ### Wind power: "avian / wind-turbine interactions"¶ Wind turbines kill birds, notably raptors. • how many, and of what species? • how concerned should we be? • what design and siting features matter? • how do you build/site less lethal turbines? ### Measurements¶ Periodic on-the-ground surveys, subject to: • censoring • shrinkage/scavenging • background mortality • is this pieces of two birds, or two pieces of one bird? • how far from the point of injury does a bird land? attribution... Is it possible to ... • make an unbiased estimate of mortality? • reliably relate the mortality to individual turbines in wind farms? ### Stochastic model¶ Common: Mixture of a point mass at zero and some distribution on the positive axis. E.g., "Zero-inflated Poisson" Countless alternatives, e.g.: • observe $\max\{0, \mbox{Poisson}(\lambda_j)-b_j\}$, $b_j > 0$ • observe $b_j\times \mbox{Poisson}(\lambda_j)$, $b_j \in (0, 1)$. • observe true count in area $j$ with error $\epsilon_j$, where $\{\epsilon_j\}$ are dependent, not identically distributed, nonzero mean ### Consultant¶ • bird collisions random, Poisson distributed • same for all birds • independent across birds • rates follow hierarchical Bayesian model that depends on covariates: properties of site and turbine design #### What does this mean?¶ • when a bird approaches a turbine, it tosses a coin to decide whether to throw itself on the blades • chance coin lands heads depends on site and turbine design • all birds use the same coin for each site/design • birds toss their coins independently ### Where do the models come from?¶ • Why random? • Why Poisson? • Why independent from site to site? From period to period? From bird to bird? From encounter to encounter? • Why doesn't chance of detection depend on size, coloration, groundcover, …? • Why do different observers miss carcasses at the same rate? ### Complications at Altamont¶ • Why is randomness a good model? Random is not the same as haphazard or unpredictable. • Why is Poisson in particular reasonable? Do birds in effect toss coins, independently, with same chance of heads, every encounter with a turbine? Is #encounters $\times P(\mbox{heads})$ constant? • Why estimate the parameter of a contrived model rather than actual mortality? • Do we want to know how many birds die, or the value of $\lambda$ in an implausible stochastic model? • Background mortality—varies by time, species, etc. • Are all birds equally likely to be missed? Smaller more likely than larger? Does coloration matter? • Nonstationarity (seasonal effects—migration, nesting, etc.; weather; variations in bird populations) • Spatial and seasonal variation in shrinkage due to groundcover, coloration, illumination, etc. • Interactions and dependence. • Variations in scavenging. (Dependence on kill rates? Satiation? Food preferences? Groundcover?) • Birds killed earlier in the monitoring interval have longer time on trial for scavengers. • Differences or absolute numbers? (Often easier to estimate differences accurately.) • Same-site comparisons across time, or comparisons across sites? ### Earthquake probabilities¶ • Probabilistic seismic hazard analysis (PSHA): basis for building codes in many countries & for siting nuclear power plants • Models locations & magnitudes of earthquakes as random; mags iid • Models ground motion as random, given event. Distribution depends on the location and magnitude of the event. • Claim to estimate "exceedance probabilities": chance acceleration exceeds some threshold in some number of years • In U.S.A., codes generally require design to withstand accelerations w probability ≥2% in 50y. • PSHA arose from probabilistic risk assessment (PRA) in aerospace and nuclear power. Those are engineered systems whose inner workings are known but for some system parameters and inputs. • Inner workings of earthquakes are almost entirely unknown: PSHA is based on metaphors and heuristics, not physics. • Some assumptions are at best weakly supported by evidence; some are contradicted. ### The PSHA equation¶ Model earthquake occurrence as a marked stochastic process with known parameters. Model ground motion in a given place as a stochastic process, given the quake location and magnitude. Then, probability of a given level of ground movement in a given place is the integral (over space and magnitude) of the conditional probability of that level of movement given that there's an event of a particular magnitude in a particular place, times the probability that there's an event of a particular magnitude in that place • That earthquakes occur at random is an assumption not based in theory or observation. • involves taking rates as probabilities • Standard argument: • M = 8 events happen about once a century. • Therefore, the chance is about 1% per year. ### Earthquake casinos¶ • Models amount to saying there's an "earthquake deck" • Turn over one card per period. If the card has a number, that's the size quake you get. • Journals and journals full of arguments about how many "8"s in the deck, whether the deck is fully shuffled, whether cards are replaced and re-shuffled after dealing, etc. But this is just a metaphor! ### Earthquake terrorism¶ • Why not say earthquakes are like terrorist bombings? • don't know where or when • know they will be large enough to kill • know some places are "likely targets" • but no probabilities • What advantage is there to the casino metaphor? ### Rabbits and Earthquake Casinos¶ #### What would make the casino metaphor apt?¶ 1. The physics of earthquakes might be stochastic. But it isn't. 2. A stochastic model might provide a compact, accurate description of earthquake phenomenology. But it doesn't. 3. A stochastic model might be useful for predicting future seismicity. But it isn't (Poisson, Gamma renewal, ETAS) 3 of the most destructive recent earthquakes were in regions seismic hazard maps showed to be relatively safe (2008 Wenchuan M7.9, 2010 Haiti M7.1, & 2011 Tohoku M9) Stein, Geller, & Liu, 2012 #### What good are the numbers?¶ • Freedman, D.A., 1995, Some issues in the foundations of Statistics, Foundations of Science, 1, 19–39. • LeCam, L., 1977. A note on metastatistics or 'an essay towards stating a problem in the doctrine of chances', Synthese, 36, 133–160. • Mulargia, F., R.J. Geller, and P.B. Stark, 2017. Why is Probabilistic Seismic Hazard Analysis (PSHA) still used?, Physics of the Earth and Planetary Interiors, 264, 63–75. • Stark, P.B. and D.A. Freedman, 2003. What is the Chance of an Earthquake? in Earthquake Science and Seismic Risk Reduction, F. Mulargia and R.J. Geller, eds., NATO Science Series IV: Earth and Environmental Sciences, v. 32, Kluwer, Dordrecht, The Netherlands, 201–213. Preprint: https://www.stat.berkeley.edu/~stark/Preprints/611.pdf • Stark, P.B. and L. Tenorio, 2010. A Primer of Frequentist and Bayesian Inference in Inverse Problems. In Large Scale Inverse Problems and Quantification of Uncertainty, Biegler, L., G. Biros, O. Ghattas, M. Heinkenschloss, D. Keyes, B. Mallick, L. Tenorio, B. van Bloemen Waanders and K. Willcox, eds. John Wiley and Sons, NY. Preprint: https://www.stat.berkeley.edu/~stark/Preprints/freqBayes09.pdf • Stark, P.B., 2015. Constraints versus priors. SIAM/ASA Journal on Uncertainty Quantification, 3(1), 586–598. doi:10.1137/130920721, Reprint: http://epubs.siam.org/doi/10.1137/130920721, Preprint: https://www.stat.berkeley.edu/~stark/Preprints/constraintsPriors15.pdf • Stark, P.B., 2016. Pay no attention to the model behind the curtain. https://www.stat.berkeley.edu/~stark/Preprints/eucCurtain15.pdf
2019-01-16 02:48:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5855998396873474, "perplexity": 3407.3697106111595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583656577.40/warc/CC-MAIN-20190116011131-20190116033131-00550.warc.gz"}
https://www.bartleby.com/solution-answer/chapter-5-problem-41sp-fundamentals-of-financial-management-mindtap-course-list-14th-edition/9781285867977/time-value-of-money-answer-the-following-questions-a-assuming-a-rate-of-10percent-annually-find-the-fv/413f2d99-efad-11e8-9bb5-0ece094302b6
# TIME VALUE OF MONEY Answer the following questions: a. Assuming a rate of 10% annually, find the FV of $1,000 after 5 years. b. What is the investment’s FV at rates of 0%, 5%, and 20% after 0, 1, 2, 3, 4, and 5 years? c. Find the PV of$1,000 due in 5 years if the discount rate is 10%. d. What is the rate of return on a security that costs $1,000 and returns$2,000 after 5 years? e. Suppose California’s population is 36.5 million people and its population is expected to grow by 2% annually. How long will it take for the population to double? f. Find the PV of an ordinary annuity that pays $1,000 each of the next 5 years if the interest rate is 15%. What is the annuity’s FV? g. How will the PV and FV of the annuity in part 1 change if it is an annuity due? h. What will the FV and the PV be for$1,000 due in 5 years if the interest rate is 10%, semiannual compounding? i. What will the annual payments be for an ordinary annuity for 10 years with a PV of $1,000 if the interest rate is 8%? What will the payments be if this is an annuity due? j. Find the PV and the FV of an investment that pays 8% annually and makes the following end-of-year payments: k. Five banks offer nominal rates of 6% on deposits; but A pays interest annually; B pays semiannually; C pays quarterly; D pays monthly; and E pays daily. 1. What effective annual rate does each bank pay? If you deposit$5,000 in each bank today, how much will you have in each bank at the end of 1 year? 2 years? 2. If all of the banks are insured by the government (the FDIC) and thus are equally risky, will they be equally able to attract funds? If not (and the TVM is the only consideration), what nominal rate will cause all of the banks to provide the same effective annual rate as Bank A? 3. Suppose you don’t have the $5,000 but need it at the end of 1 year. You plan to make a series of deposits—annually for A, semiannually for B, quarterly for C, monthly for D, and daily for E—with payments beginning today. How large must the payments be to each bank? 4. Even if the five banks provided the same effective annual rate, would a rational investor be indifferent between the banks? Explain. 1. Suppose you borrow$15,000. The loan’s annual interest rate is 8%. and it requires four equal end-of-year payments. Set up an amortization schedule that shows the annual payments, interest payments, principal repayments, and beginning and ending loan balances. ### Fundamentals of Financial Manageme... 14th Edition Eugene F. Brigham + 1 other Publisher: Cengage Learning ISBN: 9781285867977 ### Fundamentals of Financial Manageme... 14th Edition Eugene F. Brigham + 1 other Publisher: Cengage Learning ISBN: 9781285867977 #### Solutions Chapter Section Chapter 5, Problem 41SP Textbook Problem ## Expert Solution ### Want to see the full answer? Check out a sample textbook solution.See solution ### Want to see this answer and more? Experts are waiting 24/7 to provide step-by-step solutions in as fast as 30 minutes!* See Solution *Response times vary by subject and question complexity. Median response time is 34 minutes and may be longer for new subjects. Find more solutions based on key concepts Show solutions Give three examples of important trade-offs that you face in your life. Brief Principles of Macroeconomics (MindTap Course List) Who are the three parties to every check? College Accounting, Chapters 1-27 Why might purchasing power parity fail to hold? Fundamentals of Financial Management, Concise Edition (with Thomson ONE - Business School Edition, 1 term (6 months) Printed Access Card) (MindTap Course List)
2020-12-01 21:39:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40555617213249207, "perplexity": 3058.0943811766438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141681524.75/warc/CC-MAIN-20201201200611-20201201230611-00164.warc.gz"}
https://www.groundai.com/project/reliability-and-secrecy-functions-of-the-wiretap-channel-under-cost-constraint/
Reliability and Secrecy Functions of the Wiretap Channel under Cost Constraint # Reliability and Secrecy Functions of the Wiretap Channel under Cost Constraint Te Sun Han, ,   Hiroyuki Endo,   Masahide Sasaki T. S. Han is with the Quantum ICT Laboratory, National Institute of Information and Communications Technology (NICT), Nukui-kitamachi 4-2-1, Koganei, Tokyo,184-8795, Japan (email: han@is.uec.ac.jp, han@nict.go.jp)H. Endo is with the Department of Applied Physics, Waseda University, Okubo 3-4-1, Shinjuku, Tokyo, Japan, and is also a collaborating research fellow of the Quantum ICT Laboratory, NICT (email: h-endo-1212@ruri.waseda.jp, h-endo@nict.go.jp)M. Sasaki is with the Quantum ICT Laboratory, NICT, Nukui-kitamachi 4-2-1, Koganei, Tokyo,184-8795, Japan (email: psasaki@nict.go.jp) July 21, 2019 ###### Abstract The wiretap channel has been devised and studied first by Wyner, and subsequently extended to the case with non-degraded general wiretap channels by Csiszár and Körner. Focusing mainly on the stationary memoryless channel with cost constraint, we newly introduce the notion of reliability and secrecy functions as a fundamental tool to analyze and/or design the performance of an efficient wiretap channel system, including binary symmetric wiretap channels, Poisson wiretap channels and Gaussian wiretap channels. Compact formulae for those functions are explicitly given for stationary memoryless wiretap channels. It is also demonstrated that, based on such a pair of reliability and secrecy functions, we can control the tradeoff between reliability and secrecy (usually conflicting), both with exponentially decreasing rates as block length becomes large. Four ways to do so are given on the basis of rate shifting, rate exchange, concatenation and change of cost constraint. Also, the notion of the secrecy capacity is defined and shown to attain the strongest secrecy standard among others. The maximized vs. averaged secrecy measures is also discussed. reliability function, secrecy function, secrecy measures, Poisson wiretap channel, cost constraint, Gaussian wiretap channel, binary symmetric wiretap channel, tradeoff between reliability and secrecy, concatenation, rate shifting, rate exchange, change of cost constraint ## 1 Introduction The pioneering work by Wyner [1] as well as by Csiszár and Körner [2], based on the wiretap channel model, has provided a strong impetus to find a new scheme of the physical layer cryptography in a good balance of usability and secrecy. In particular, they have first formulated the tradeoff between the transmission rate for Bob and the equivocation rate against Eve. Since then, “information theoretic security attracts much attention, because it offers security that does not depend on conjectured difficulties of some computational problem, ” suggested by Associate Editor and there have been extensive studies on various kinds of wiretap channels, which are nicely summarized, e.g., in Laourine and Wagner [3] along with the secrecy capacity formula for the Poisson wiretap channel without cost constraint. Among others, Hayashi [4] is the first who has derived the relevant secrecy exponent function to specify the exponentially decreasing speed (i.e., exponent) of the leaked information under the average secrecy criterion when no cost constraint is considered. Throughout in this paper, we are imposed cost constraints (limit on available transmission energy, bandwidth, and so on). We first address, given a general wiretap channel, the primal problem to establish a general formula to simultaneously summarize the reliability performance for Bob and the secrecy performance against Eve under the maximum secrecy criterion. Next, it is shown that both of them are described by using exponentially decaying functions of the code length when a stationary memoryless wiretap channel is considered. This provides the theoretical basis for investigating the asymptotic behavior of reliability and secrecy. We can then specifically quantify achievable reliability exponents and achievable secrecy exponents as well as the tradeoff between them for several important wiretap channel models such as binary symmetric wiretap channels, Poisson wiretap channels, Gaussian wiretap channels. In particular, four ways of the tradeoff to control reliability and secrecy are given and discussed with their novel significance. Also, on the basis of the analysis of these exponents under cost constraint, the new formula for the -secrecy capacity (with the strongest secrecy among others) is established to apply to several typical wiretap channel models. A remarkable feature of this paper is that we first derive the key formulas not depending on respective specific channel models and then apply them to those respective cases to get new insights into each case as well. The paper is organized as follows. In Section 2, the definitions of wiretap channel and related notions such as error probability, cost constraint, secrecy capacity and concatenation are introduced along with various kinds of secrecy measures. In Section 3.A, we give a fundamental formula to simultaneously evaluate a pair of reliability behavior and secrecy behavior under cost constraint for a general wiretap channel, which is then in Section 3.B, particularized to establish the specific formulas for stationary and memoryless wiretap channels. Here, the notions of reliability function and secrecy function are introduced to evaluate the exponent of the exponentially decreasing decoding error for Bob and that of the exponentially decreasing divergence distance against Eve for the stationary memoryless wiretap channel under cost constraint. This is one of the key results in this paper. We also present their numerical examples to see how the reliability and secrecy exponents vary depending on the channel and cost parameters. Also, superiority of the maximum secrecy criterion to the average secrecy criterion is discussed. In Section 3.C, a strengthening of Theorem 3.3 in Section 3.B is provided. In Section 3.D, the -secrecy capacity formula (with the strongest secrecy) is given under cost constraint, including the formula for a special but important case with more capable wiretap channels. In Section 4, four ways for the tradeoff are demonstrated: one is by rate shifting, another one by rate exchange, one more by concatenation, and the other by change of cost constraint, which are discussed in terms of the reliability and secrecy exponents. This section is thus prepared for more quantitative analysis/design of the reliability-secrecy tradeoff. In Section 5, the formula for the -secrecy capacity is applied to the Poisson wiretap channel with cost constraint, which is a practical model for free-space Laser communication with a photon counter. In Section 6, for Poisson wiretap channels with cost constraint we demonstrate the reliability and secrecy functions as an application of the key theorem established in Section 3.B. In Section 7, we investigate the effects of channel concatenation with an auxiliary channel for the Poisson wiretap channel. In Section 8, the -secrecy capacity formula for the Gaussian wiretap channel is given as an application of the key theorem established in Section 3.D. In Section 9, for the Gaussian wiretap channels with cost constraint we demonstrate the reliability and secrecy functions as an application of the key theorem established in Section 3.B. In particular, these functions are numerically compared with those of Gallager-type, which reveals that a kind of duality exists among them. In Section 10, we conclude the paper. ## 2 Preliminaries and basic concepts In this section we give the definition of the wiretap channel. There are several levels and ways to specify the superiority of the legitimate users, Alice and Bob, to the eavesdropper, Eve, such as physically degraded Eve, (statistically) degraded Eve, less noisy Bob, and more capable Bob. In this paper, we are interested mainly in the last class of channels because the other ones imply the last one (cf. Csiszár and Körner [9]). We introduce here the necessary notions and notations to quantify the reliability and the secrecy of this kind of wiretap channel model. In particular, we define several kinds of secrecy metrics, including the strongest criterion based on the divergence distance with reference to a target output distribution, while the notion of concatenation of channels is also introduced to construct a possible way to control tradeoff between reliability and secrecy. A. Wiretap channel Let be arbitrary alphabets (not necessarily finite), where is called an input alphabet, and are called output alphabets. A general wiretap channel consists of two general channels, i.e., (from Alice for Bob) and (from Alice against Eve), where , are the conditional probabilities of given (of block length ), respectively. Alice wants to communicate with Bob as reliably as possible but as secretly as possible against Eve. We let ( indicate such a wiretap channel. Given a message set , we consider a stochastic encoder for Alice and a decoder for Bob , and for let denote the output due to via channel . B. Cost constraint From the viewpoint of communication technologies, it is sometimes needed to impose cost constraint on channel inputs. Here we give its formal definition. For fix a mapping (the set of nonnegative real numbers) arbitrarily. For we call the cost of and the cost per letter. In the channel coding problem with cost constraint, we require the encoder outputs satisfy Pr{1ncn(φn(i))≤Γ}=1(for % all\ i=1,2,⋯,Mn), (2.1) where is an arbitrarily nonnegative given constant, which we call cost constraint . Notice here that the encoder is stochastic. When (2.1) holds, we say that the encoder satisfies the cost constraint and call ( a wiretap channel with cost constraint . Incidentally, define Xn(Γ)={x∈Xn∣∣∣1ncn(x)≤Γ}, (2.2) then (2.1) is rewritten also as Pr{φn(i)∈Xn(Γ)}=1(for all\ % i=1,2,⋯,Mn). (2.3) ###### Remark 2.1 Consider the case with and , then in this case it is easy to check that , which means that the wiretap channel is actually imposed no cost constraint. \QED C. Error probability, secrecy measures and secrecy capacities Given a wiretap channel () with cost constraint , the error probability (measure of reliability) via channel for Bob is defined to be ϵBn≡1Mn∑i∈MnPr{ψBn(φBn(i))≠i}, (2.4) whereas the divergence distance (measure 1 of secrecy) and the variational distance (measure 2 of secrecy) via channel against Eve are defined to be δEn≡1Mn∑i∈MnD(P(i)n||πn), (2.5) ∂En≡1Mn∑i∈Mnd(P(i)n,πn) (2.6) where D(P1||P2)=∑u∈UP1(u)logP1(u)P2(u), d(P1,P2)=∑u∈U|P1(u)−P2(u)|; where denotes the output probability distribution on via channel due to the input , and is called the target output probability distribution on , which is generated via channel due to an arbitrarily prescribed input distribution on . Specifically, is given by . In this paper the logarithm is taken to the natural base . With these two typical measures of secrecy, we can define two kinds of criteria for achievability: ϵBn→0, δEn→0as n→∞, (2.7) ϵBn→0, ∂En→0as n→∞. (2.8) We say that a rate is -achievable if there exists a pair of encoder and decoder satisfying criterion (2.7) and liminfn→∞1nlogMn≥R. (2.9) When there is no fear of confusion, we say simply that a rate is -achievable by dropping cost constraint , and so on also in the sequel. Similarly, we say that a rate is -achievable if there exists a pair of encoder and decoder satisfying criterion (2.8) and (2.9). It should be noted here that criterion (2.7) implies criterion (2.8), owing to Pinsker inequality [10]: (∂En)2≤2δEn, which means that criterion (2.7) is stronger than criterion (2.8). On the other hand, many people (e.g., Csiszár [7], Hayashi [4]) have used, instead of measure (2.5), the mutual information: IEn≡1Mn∑i∈MnD(P(i)n||Pn),Pn=1Mn∑i∈MnP(i)n. (2.10) With this measure (measure 3 of secrecy), we may consider one more criterion for achievability (called the i-achievability): ϵBn→0, IEn→0as n→∞. (2.11) On the other hand, since the identity (Pythagorean theorem): δEn=IEn+D(Pn||πn) (2.12) holds, is a stronger measure than . Moreover, since dEn≡1Mn∑i∈Mnd(P(i)n,Pn)≤2Mn∑i∈Mnd(P(i)n,πn)=2∂En always holds by virtue of the triangle axiom of the variational distance, is stronger than (measure 4 of secrecy: cf. [7]), so that criterion (2.8) is stronger than the d-achievability: ϵBn→0, dEn→0as n→∞. (2.13) Furthermore, one may sometimes prefer to consider the following achievability (called the w-achievability): ϵBn→0, 1nIEn→0as n→∞, (2.14) which is nothing but the so-called weak secrecy (measure 5 of secrecy). Indeed, this is the weakest criterion among others; its illustrating example will appear in Examples 5.1 and 8.1, while criterion (2.7) is the strongest one and introduced for the first time in this paper. Fig.1 shows the implication scheme among these five measures of secrecy. The secrecy capacities - and between Alice and Bob are defined to be the supremum of all -achievable rates and that of all -achievable rates, respectively. Similarly, the secrecy capacity d- with d-achievability, the secrecy capacity i- with i-achievability as well as the secrecy capacity w- with w-achievability can also be defined. ###### Remark 2.2 One may wonder if the “strongest” measure of secrecy can be given an operational meaning. In this connection, we would like to cite the paper by Hou and Kramer [8] in which is interpreted as a measure of “non-confusion” and as a measure of “non-stealth,” and is interpreted as the background noise distribution on that Eve detects in advance to the communication between Alice and Bob; thus, in view of (2.12), by making we can not only keep the message secret from Eve but also hide the presence of meaningful communication. Alice can control so as to be most perplexng to Eve. A connection to some hypothesis testing problem is also pointed out. A similar interpretation is given also for with as a measure of “non-confusion” and as a measure of “non-stealth,” because the following inequality holds: dEn+d(Pn,πn)≤3∂En. (2.15) ###### Remark 2.3 We notice that all of , , , and , defined here are the measures averaged over the message set with the uniform distribution. On the other hand, we can consider also the criteria maximized over the message set which will be discussed later in Remark 3.9. \QED D. Concatenation In wiretap channel coding it is one of the important problems how to control the tradeoff between the reliability for Bob and the secrecy against Eve. There are several ways to control it. One of these is to make use of the concatenation of the main wiretap channel with an auxiliary (virtual) channel. So, it is convenient to state here its formal definition for later use. Let be an arbitrary alphabet (not necessarily finite) and let be an arbitrary auxiliary random variable with values in such that forms a Markov chain in this order, where is an input variable for the wiretap channel ; and are the output variables of channels due to the input , respectively. ###### Definition 2.1 Given a general channel , we define its concatenated channel so that Wn+(y|v)=∑x∈XnWn(y|x)PXn|Vn(x|v), (2.16) where We use the convention that, given random variables and , and denote the probability distribution of , and the conditional probability distribution of given , respectively is an arbitrary auxiliary channel. In particular, we say that a pair is a concatenation of the wiretap channel , if Wn+B(y|v) = ∑x∈XnWnB(y|x)PXn|Vn(x|v) (2.17) Wn+E(z|v) = ∑x∈XnWnE(z|x)PXn|Vn(x|v). (2.18) with the auxiliary channel . Notice that if as random variables then these reduce to the non-concenated wiretap channel. \QED E. Stationary memoryless wiretap channel In this paper the substantial attention is payed to the special class of wiretap channels called the stationary memoryless wiretap channel, the definition of which is given by ###### Definition 2.2 A wiretap channel is said to be stationary and memoryless if, with some channels , it holds that WnB(y|x)=n∏k=1WB(yk|xk),WnE(z|x)=n∏k=1WE(zk|xk), (2.19) where This wiretap channel may be denoted simply by . \QED When we are dealing with a stationary memoryless wiretap channel it is usual to assume an additive cost in the sense that where . This enables us to analyze the detailed performances of the wiretap channel, to be shown in the following sections. ## 3 Evaluation of reliability and secrecy In this section, the problem of a general wiretap channel with general cost constraint is first studied, and next the problem of a stationary memoryless wiretap channel with additive cost constraint is investigated in details. In particular, with criterion (2.7) we are interested in exponentially decreasing rates of as tends to . Finally, its applicantion to establish a general formula for the -secrecy capacity - with cost constraint is provided. A. General wiretap channel with cost constraint Let , be arbitrary general channels and be an arbitrary auxiliary input distribution on , and set ϕ(ρ|Wn,Q) ≡ −log∑y(∑vQ(v)Wn(y|v)11+ρ)1+ρ, (3.1) ψ(ρ|Wn,Q) ≡ −log∑z(∑vQ(v)Wn(z|v)1+ρ)WnQ(z)−ρ, (3.2) where . Then, we have ###### Theorem 3.1 Let be a general wiretap channel with general cost constraint , and , be arbitrary positive integers, then there exists a pair ) of encoder (satisfying cost constraint ) and decoder such that ϵBn ≤ 2inf0≤ρ≤1(MnLn)ρe−ϕ(ρ|Wn+B,Q), (3.3) δEn ≤ 2inf0<ρ≤1e−ψ(ρ|Wn+E,Q)ρLρn (3.4) ≤ 2inf0<ρ<1e−ϕ(−ρ|Wn+E,Q)ρLρn, (3.5) where is a concatenation of (cf. Definition 2.1), and we assume that the condition Pr{Xn∈Xn(Γ)}=1 (3.6) holds for the random variable over induced via the auxiliary channel by the input variable subject to on . \QED Proof: See Appendix A. ###### Remark 3.1 Formula (3.3) without concatenation is due to Gallager [11], while formulas (3.4), (3.5) without concatenation and cost constraint have first been shown in a different context by Han and Verdú [13, p.768] based on a simple random coding argument, and subsequently developed by Hayashi [4] based on a universal hashing argument to establish the cryptographic implication of channel resolvability (see, also Hayashi [6]). \QED ###### Remark 3.2 We define the rates and , which is called the coding rate for Bob and the resolvability rate against Eve, respectively. Rate is quite popular in channel coding, whereas rate , roughly speaking, indicates the rate of a large dice with faces to provide randomness needed to implement an efficient stochastic encoder to deceive Eve. \QED ###### Remark 3.3 In view of (3.6), the concatenated channels as defined by (2.17) and (2.18) can be written as Wn+B(y|v) = ∑x∈Xn(Γ)WnB(y|x)PXn|Vn(x|v), (3.7) Wn+E(z|v) = ∑x∈Xn(Γ)WnE(z|x)PXn|Vn(x|v). (3.8) The reason why we have introduced the concatenated channel instead of the non-concatenated channel can be seen from the following theorem. ###### Theorem 3.2 (Tradeoff of reliability and secrecy by concatenation) Concatenation decreases reliability for Bob and increases secrecy against Eve. Proof:  The quantity in (3.3) is lower bounded, by concavity of the function , as An = ∑y⎛⎜⎝∑vQ(v)(∑xPXn|Vn(x|v)WnB(y|x))11+ρ⎞⎟⎠1+ρ (3.9) ≥ ∑y(∑v∑xQ(v)PXn|Vn(x|v)WnB(y|x)11+ρ)1+ρ (3.10) = ∑y(∑xP(x)WnB(y|x)11+ρ)1+ρ, (3.11) where . This implies that concatenation decreases reliability for the channel for Bob. On the other hand, the quantity in (3.5) is upper bounded, by convexity of the function , as Bn = ∑z⎛⎜⎝∑vQ(v)(∑xPXn|Vn(x|v)WnE(z|x))11−ρ⎞⎟⎠1−ρ (3.12) ≤ ∑z(∑v∑xQ(v)PXn|Vn(x|v)WnE(z|x)11−ρ)1−ρ (3.13) = ∑z(∑xP(x)WnE(z|x)11−ρ)1−ρ, (3.14) which implies that concatenation increases secrecy against the channel for Eve. Thus, we can control the tradeoff between reliability and secrecy (usually conflicting) by adequate choice of an auxiliary channel (e.g., see Fig.4 later for the case of stationary memoryless wiretap channels). Furthermore, it should be noted that in (3.4) also has such a nice tradeoff property like in the above, owing to the convexity in . \QED B. Stationary memoryless wiretap channel with cost constraint So far we have studied the performance of general wiretap channels with general cost constraint . Suppose now that we are given a stationary and memoryless wiretap channel , specified by , with additive cost . With this important class of channels, we attempt to bring out specific useful insights on the basis of Theorem 3.1. To do so, let us consider the case in which are i.i.d. variables with common joint distribution PXV(x,v)((v,x)∈V×X), (3.15) then, the probabilities of and , and the conditional probability of given are written as PXn(x) = n∏i=1PX(xi), (3.16) PVn(v) = n∏i=1PV(vi), (3.17) PXn|Vn(x|v) = n∏i=1PX|V(xi|vi), (3.18) respectively, where x=(x1,⋯,xn),v=(v1,⋯,vn). It should be noted here that indicates a channel input for , and indicates a channel input for . Accordingly, these specifications define a joint probability distribution on . Also, the concatenated channel in this case is written simply as W+B(y|v) = ∑x∈XWB(y|x)PX|V(x|v), (3.19) W+E(z|v) = ∑x∈XWE(z|x)PX|V(x|v). (3.20) Then, we have one of the key results: ###### Theorem 3.3 Let be a stationary memoryless wiretap channel with additive cost . Let be a joint probability distribution as above, and suppose that the constraint on is satisfied. Then, for any positive integers , , there exists a pair ) of encoder (satisfying cost constraint ) and decoder such that ϵBn ≤ 2α1+ρnβn(MnLn)ρ ⋅⎡⎢ ⎢⎣∑y∈Y⎛⎜⎝∑v∈Vq(v)[∑x∈XWB(y|x)PX|V(x|v)e(1+ρ)r[Γ−c(x)]]11+ρ⎞⎟⎠1+ρ⎤⎥ ⎥⎦n and δEn ≤ 2α1−ρnβn1ρLρn ⋅⎡⎢ ⎢⎣∑z∈Z⎛⎜⎝∑v∈Vq(v)[∑x∈XWE(z|x)PX|V(x|v)e(1−ρ)r[Γ−c(x)]]11−ρ⎞⎟⎠1−ρ⎤⎥ ⎥⎦n, where we have put for simplicity, and are the constants such that or to be specified in the proof. \QED Proof: See Appendix B. ###### Remark 3.4 (Two secrecy functions) So far, we have established evaluation of upper bounds (3.3) and (3.5) when the channel is stationary and memoryless under cost constraint. It should be noted, however, that we did not evaluate upper bound (3.4). This is because (3.4) contains the term with negative power , and hence upper bounding for (3.4) does not work. Thus, we prefer bound (3.5) rather than bound (3.4). \QED ###### Remark 3.5 Instead of upper bound (B.8) (in the proof of Theorem 3.3) on the characteristic function , i.e., the upper bound (3.23) Gallager [11] used the upper bound χ(x)≤exp[(1+ρ)r(n∑i=1c(xi)−nΓ+δ)], (3.24) where is an arbitrary small constant. Wyner [15] also used upper bound (3.24) for Poisson channels. However, we prefer upper bound (3.23) in this paper (except for in Theorems 9.2 and 9.4 later in Section 9), because it provides us with reasonable evaluation of the reliability and secrecy functions for binary symmetric wiretap channels, for Poisson wiretap channels and also for Gaussian wiretap channels to be treated in this section and in Sections 6, 7 and 9. \QED Let us now give more compact forms to (LABEL:eq:istan1) and (LABEL:eq:istan2). To do so, let us define a reliability exponent function (or simply, reliability function) for Bob, and a secrecy exponent function (or simply, secrecy function) against Eve, as §§§In the theory of channel coding it is the tradition to use the terminology “reliability functionn” to denote the “optimal” one. Therefore, more exactly, it might be recommended to use the term such as “achievable reliability exponent (function)” and “achievable secrecy exponent (function),” because here we lack the converse results. However, in this paper, simply for convenience with some abuse of the notation, we do not stick to the optimality and prefer to use their shorthands, because in most cases the optimal computable formula is not known. Then, the term “optimal reliability function” with the converse makes sense. Similarly for the “secrecy function.” Fc(q,RB,RE,n) ≡ supr≥0sup0≤ρ≤1(ϕ(ρ|WB,q,r)−ρ(RB+RE)+log(αnβ1+ρn)−ρlog3n), Hc(q,RE,n) ≡ supr≥0sup0<ρ<1(ϕ(−ρ|WE,q,r)+ρRE+log(αnβ1−ρn)+logρn), where for fixed rates we have set , and ϕ(ρ|WB,q,r) = −log⎡⎢ ⎢⎣∑y∈Y⎛⎜⎝∑v∈Vq(v)[∑x∈XWB(y|x)PX|V(x|v)e(1+ρ)r[Γ−c(x)]]11+ρ⎞⎟⎠1+ρ⎤⎥ ⎥⎦, ϕ(−ρ|WE,q,r) = −log⎡⎢ ⎢⎣∑z∈Z⎛⎜⎝∑v∈Vq(v)[∑x∈XWE(z|x)PX|V(x|v)e(1−ρ)r[Γ−c(x)]]11−ρ⎞⎟⎠1−ρ⎤⎥ ⎥⎦. Thus, we have ###### Theorem 3.4 Let be a stationary memoryless wiretap channel with additive cost constraint , then there exists a pair ) of encoder (satisfying cost constraint ) and decoder such that ϵBn ≤ 2e−nFc(q,RB,RE,n), (3.29) δEn ≤ 2e−nHc(q,RE,n). (3.30) where it is assumed that satisfies . \QED ###### Remark 3.6 (Reliability and secrecy functions) The function quantifies performance of channel coding (called the random coding exponent of Gallager [11]), whereas the function quantifies performance of channel resolvability (cf. Han and Verdú [13], Han [12], Hayashi [4, 6]). ###### Remark 3.7 It should be noted that, the third term in on the right-hand side of (LABEL:eq:func11) and the third term in on the right-hand of (LABEL:eq:func12) is both of the order , which approach zero as tends to , so that these terms do not affect the exponents. Actually, the term on the right-hand side of (LABEL:eq:func11) is not needed here but is needed in on the right-hand side of (3.35) to follow under the maximum criterion. \QED ###### Remark 3.8 (Non-concatenation) It is sometimes useful to consider the special case with as random variables over . In this case the above quantities () reduce to ϕ(ρ|WB,q,r) = −log⎡⎣∑y∈Y(∑x∈Xq(x)WB(y|x)11+ρer[Γ−c(x)])1+ρ⎤⎦, (3.31) ϕ(−ρ|WE,q,r) = −log⎡⎣∑z∈Z(∑x∈Xq(x)WE(z|x)11−ρer[Γ−c(x)])1−ρ⎤⎦, where the reliability function with (3.31) with instead of is earlier found in Gallager [11] and (3.31) with instead of applied to Poisson channels is found in Wyner [15], while the secrecy function with (LABEL:eq;halimeq1) intervenes for the first time in this paper. \QED Recall that, so far, upper bounds on the error probability and the divergence distance are based on the averaged criteria as mentioned in Section 1.C. Alternatively, instead of the averaged criteria and , we can define the maximum criteria and as follows. {\scriptsize m}-ϵBn ≡ maxi∈MnPr{ψBn(φBn(i))≠i}, (3.33) {\scriptsize m}-δEn ≡ maxi∈MnD(P(i)n||πn). (3.34) With these criteria, using Markov inequality Set then Markov inequality tells that and Therefore, , where We then keep the message set and throw out the rest to obtain Theorem 3.5. This causes the term to intervene on the right-hand side of (LABEL:eq:func11). applied to (3.29) and (3.30), we obtain, instead of Theorem 3.4, ###### Theorem 3.5 Let be a stationary memoryless wiretap channel with additive cost constraint , then there exists a pair ) of encoder (satisfying cost constraint ) and decoder such that {\scriptsize m}-ϵBn ≤ 6e−nFc(q,RB,RE,n), (3.35) {\scriptsize m}-δEn ≤ 6e−nHc(q,RE,n), (3.36) where it is assumed that satisfies . \QED ###### Remark 3.9 (Average vs. maximum criteria) Bound (3.35) is well known in channel coding (cf. Gallager [11]), whereas bound (3.36) is taken into consideration for the first time in this paper. In channel coding, which of the averaged or the maximum we should take would be rather a matter of preference or the context. On the other hand, however, which of the averaged or the maximum we should take is a serious matter from the viewpoint of secrecy. This is because, even with small , we cannot exclude a possibility that the divergence distance is very large for some particular and hence is also very large, which implies that the message is not saved from a serious risk of successful decryption by Eve. On the other hand, with small , every message is guaranteed to be kept highly confidential against Eve as well. Thus, we prefer the criterion as well as in this paper. \QED In view of Remark 3.7, we are tempted to go further over the properties of the functions . In particular, we are interested in the behavior of the functions and In this connection, we have following lemma, where we let denote the mutual information between the input and its output via the channel . ###### Lemma 3.1 Assume that and , then
2021-03-08 06:59:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8503394722938538, "perplexity": 917.6593540182879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381989.92/warc/CC-MAIN-20210308052217-20210308082217-00520.warc.gz"}
https://puzzling.stackexchange.com/questions/31619/game-of-billiards-kill-the-planets-too
# Game of Billiards: Kill the Planets Too Below is a $$8 \enspace \text{ft} \times 4 \enspace \text{ft}$$ $$\enspace (=243.84 \enspace \text{cm} \times 121.92 \enspace \text{cm})$$ billiard table, with a perfectly flat playing surface. The cushions are removed for the sake of simplicity. You have: • 16 standard, $$2 \frac{1}{4}$$-inch $$(=5.715 \enspace \text{cm})$$ pool balls of the same weight (including the cue ball). • A mighty cue stick. You can place the balls anywhere on the table (except the pockets), but all sixteen of them must be used. Then you will have one shot. In a single shot, you have to pocket all of the balls except the cue ball. The cue ball itself must never go down a pocket, not even after the others are pocketed. ## Technical details The balls: • Have a perfect spherical shape. • Never convert kinetic energy into anything else. • Collide with one another and the rails in a perfectly elastic manner. (See? No need for cushions.) • Collide frictionlessly (they don't start spinning when their collision is not head-on). Parameters for the table: • The pockets have the exact same size as the balls, but a ball can fall into a pocket by getting even partially above the hole. • The edge of the playing area goes through the center of each pocket. Another important notice: You have to provide a mathematically accurate explanation why your solution works. It's not enough to present some random arrangement and claim that it is correct “because it works in Universe Sandbox 2”, or whatever simulator you prefer. ## Scoring This puzzle shouldn't take much time to solve, so let's make it a popularity contest. The accepted answer will be the one with the highest number of votes (up minus down) after a week. You can use a brute force algorithm to solve this puzzle if you want, but I can't even imagine how you could possibly write one. • So if the circular outline of a ball when viewed from above is touching the pocket's circle even tangentially, then the ball is in the pocket? – astralfenix Apr 30 '16 at 13:26 • @astralfenix Yes, that's what I was trying to say. Real billiard pockets are usually a bit larger, so this was meant as a compensation. – BaSzAt Apr 30 '16 at 16:33 Since this is already a long answer, I'll refrain from too much math detail. I will use an angular system here where due north is $0$ degrees, south is $180$ degrees, etc. Also, I will use a cartesian coordinate system where the bottom left of the table is at $(0,0)$ and the top right is at $(48, 96)$ in inches. note that I'm assuming the answer to my question in the comment I made is 'yes'. place one ball next to the lower right pocket. Place all other balls in the top half of the table, along an arc whose angle is very small (i.e. they are almost in a straight vertical line). Each ball is $2.25$ inches and the top half of the table is $4$ feet $=$ $48$ inches high, so a $2.25 \times 15$ column fits easily in this space. The arc will be such that the ball at the bottom end of the arc is at the same height as the left middle pocket, and the ball above it has a slightly bigger $x$ coordinate (e.g. if the bottom ball is at $(24,48)$, then the ball above it could be at say $(24.0001, 48+d)$, where d is the diameter of a ball. The idea is to shoot the white ball so that it just skims each of the $15$ balls in the arc. The initial direction of the white will be at an angle of $360 -q$, where $q$ is some small number, such that the white barely contacts the first ball on the right hand side and imparts a very small amount of momentum to it in the $270$ degrees direction (towards the left middle pocket). The white will then reflect off the first ball at a slight angle and barely hit the second ball, imparting a small amount of momentum to it in a direction which has a slightly bigger angle (e.g $270.001$), and so on for each of the $15$ balls. Then the white will head towards the top cushion at a small positive angle, bounce off the top cushion and head towards the lower right corner. It will then hit the last ball head on, the white will stop dead and the last ball will go in the pocket. After this, the rest of the balls will still be gradually moving in the top half of the table. Since each of the $15$ balls has a slightly bigger angle than the ball below it, and because angle of incidence = angle of reflection, none of them will ever collide. In fact they will be gradually diverging. The balls which don't go in the left middle pocket will bounce off the left cushion, then the right cushion, then left and so on until they eventually make their way to one of the top corner pockets. Since each has a tiny angle, they will all end up in one of those pockets, the top ball of the arc first, then the next one down and so on. Since the entire arc at the beginning could be moved to the left or right, that means it will be possible to choose it's position such that the white stops dead upon hitting the last ball.
2020-08-11 03:55:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7167789340019226, "perplexity": 392.848364411235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738727.76/warc/CC-MAIN-20200811025355-20200811055355-00386.warc.gz"}
https://physics.stackexchange.com/questions/54733/what-is-the-current-status-of-string-theory-2013
# What is the current status of string theory (2013)? I've seen a bunch of articles talking about how new findings from the LHC seem to disprove (super)string theory and/or supersymmetry, or at least force physicists to reformulate them and change essential predictions. Some examples: So I'd like to know: has string theory really been hit that hard? Is it losing ground in the scientific community? Do you think it can recover from it? Are there any viable or promising alternative theories? (I've seen Michio Kaku in some clips saying string theory is "the only game in town".) Note: a related question is What if the LHC doesn't see SUSY?, but I'm asking for more direct answers in light of the results found in the last 2 years. • Supersymmetry is not dead. Some of the most popular (and some would say naive) models have been ruled out, but supersymmetry can always be pushed up to a higher energy scale where we can't see it (yet). String theory is not dead either. In fact all of the popular press reporting on string theory is irresponsible and should be avoided! String theory made absolutely no predictions for the LHC except for some extremely contrived models. The natural home of string theory is the Planck scale. – Michael Brown Feb 22 '13 at 9:21 • In general, let me say that it is very difficult to link strings to LHC physics. Concretely, the absence of SUSY particles at the LHC doesn't falsify string theory, although the complete absence so far of non-standard physics is a disappointment to anyone working in the field. – Vibert Feb 22 '13 at 9:21 • Popular media and magazines, such as Nature and the Scientific American write a lot of nonsense and exaggerations just to start controversies and sell more copies. Many real physicists do not take them seriously, often the mass media do not even bother to talk to real experts about such topics but prefer to quote the unqualified opinion of very vocal nonexperts. Prof. Strassler often has to correct blatantly wrong up to misleading or even dishonest things written in the media by careless journalists. – Dilaton Feb 22 '13 at 17:05 • Well, that's why I'm asking here. But I would appreciate more specific replies than "these people/media are not credible". – aditsu Feb 22 '13 at 17:12 • Suggestion to the question formulation (v3): Restrict the scope of the question to only ask about the status of SUSY rather than string theory (as the experimental status of string theory has already been adequately covered on this site in other posts). – Qmechanic Feb 23 '13 at 20:30 The idea which is being challenged, though certainly not disproved yet, is that there are new particles, other than the Higgs boson, that the LHC will be able to detect. It was very widely supposed that supersymmetric partners of some known particles would show up, because they could stabilize the mass of the Higgs boson. The simplest framework for this is just to add supersymmetry to the standard model, and so most string models of the real world were built around this "minimal supersymmetric standard model" (MSSM). It's really the particle physicists who will decide whether the MSSM should lose its status as the leading idea for new physics. If they switch to some "new standard model", then the string theorists will switch too. Whether they are aiming for the SM, the MSSM, or something else, the challenge for string theorists is, first, to find a shape for the extra dimensions which will make the strings behave roughly like the observed particles, and then second, use that model to predict something new. But as things stand, we still only have string models that qualitatively resemble reality. Here is an example from a year ago - "Heterotic Line Bundle Standard Models". You'll see that the authors talk about constructing "standard models" within string theory. That means that the low-energy states in these string models resemble the particles of the standard model - with the same charges, symmetries, etc. But that's still just the beginning. Then you have to check for finer details. In this paper they concern themselves with further properties like proton decay, the relative heaviness of the different particle generations, and neutrino masses. That already involves a lot of analysis. The ultimate test would be to calculate the exact masses and couplings predicted by a particular model, but that is still too hard for the current state of theory, and there's still work to do just in converging on a set of models which might be right. So if supersymmetry doesn't show at the LHC, string theorists would change some of these intermediate criteria by which they judge the plausibility of a model, e.g. if particle physics opinion changed from expecting supersymmetry to show up at LHC energies, to expecting supersymmetry only to show up at the Planck scale. It would mean starting over on certain aspects of these model analyses, because now you have changed the details of your ultimate destination. Disclaimer: I am not a phenomenologist. ... Having said that, I think there are two issues that are conflated here: • The first is that SUSY is more or less necessary for the mathematical consistency of string theory, yes. • The other is that if nature is supersymmetric at LHC-accessible energy scales, then we might have a solution to the Hierarchy problem, which is the question of why is the Higgs so light when we would a priori expect its mass to be close to the Planck mass, which is something like $10^{15}$ bigger. I understand this second point historically has been one of the driving forces of SUSY research (the other is string theory of course) which is why many physicists find the prospect of no SUSY at LHC scales troubling. This is not really relevant to string theory itself however. • "I am not a phenomenologist." Ah, but ... do you play one on the internet? – dmckee Feb 27 '13 at 0:35 • Insofar as it is understood that any harm that befalls you and/or your loved ones as a result of trusting anything I say is not my fault, then yes. – alexarvanitakis Feb 27 '13 at 0:49 Supersymmetry is not dead and cannot die because it is a mathematical construction, beautiful in its simplicity and power. What it may very well is not to be physical. Many string theory constructions are being postulated assuming some sort of unicity proved: types of compactifications, solutions to various no-go theorems, anomaly cancellations etc. In all cases the unicity is just desired and favoured by some physicists but not rigurously proven. What I think people should do is to start with the fundaments of these theories and re-check all the assumptions. My opinion is that string theory will be found non-existent (in the mathematical sense of being isomorphic with a far simpler theory) ans supersymmetry will be found not to be a symmetry of nature. ## protected by Qmechanic♦Jul 30 '16 at 12:27 Thank you for your interest in this question. Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).
2019-10-17 20:31:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4762043356895447, "perplexity": 692.1845998719815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986676227.57/warc/CC-MAIN-20191017200101-20191017223601-00370.warc.gz"}
https://www.studysmarter.us/textbooks/business-studies/financial-managerial-accounting-7th/accounting-for-receivables/q12eb-barga-co-reported-net-sales-for-2016-and-2017-of-73000/
Suggested languages for you: Q12E_b Expert-verified Found in: Page 320 ### Financial & Managerial Accounting Book edition 7th Author(s) John J Wild, Ken W. Shaw, Barbara Chiappetta Pages 1096 pages ISBN 9781259726705 # Barga Co. reported net sales for 2016 and 2017 of $730,000 and$1,095,000, respectively. Its year-end balances of accounts receivable follow: December 31, 2016, $65,000; and December 31, 2017,$123,000.b. Evaluate and comment on any changes in the amount of liquid assets tied up in receivables. Days' sales uncollected have increased from 2016, i.e., 32.50 days, to 2017, i.e., 41 days. See the step by step solution ## Step-by-Step SolutionStep 1: Introduction to topic Days' sales uncollected: It is a liquidity ratio that tells us in how many days or months the debtors will make payments against credit sales to them. ## Step 2: Evaluation- The change from 32.50 to 41 days' sales uncollected indicates that the receivables have become less liquid—the Barga co. Collected accounts receivable in about one month at the end of 2016; this increased by around 8.5 days in 2017. The company needs to follow up to identify the reasons for this change.
2022-09-26 00:16:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1962166577577591, "perplexity": 9052.348374702526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00526.warc.gz"}
https://intelligencemission.com/free-energy-generator-tesla-free-electricity-what-to-mine.html
Let’s look at the B field of the earth and recall how any magnet works; if you pass Free Power current through Free Power wire it generates Free Power magnetic field around that wire. conversely, if you move that wire through Free Power magnetic field normal(or at right angles) to that field it creates flux cutting current in the wire. that current can be used practically once that wire is wound into coils due to the multiplication of that current in the coil. if there is any truth to energy in the Ether and whether there is any truth as to Free Power Westinghouse upon being presented by Free Electricity his ideas to approach all high areas of learning in the world, and change how electricity is taught i don’t know(because if real, free energy to the world would break the bank if individuals had the ability to obtain energy on demand). i have not studied this area. i welcome others who have to contribute to the discussion. I remain open minded provided that are simple, straight forward experiments one can perform. I have some questions and I know that there are some “geniuses” here who can answer all of them, but to start with: If Free Power magnetic motor is possible, and I believe it is, and if they can overcome their own friction, what keeps them from accelerating to the point where they disintegrate, like Free Power jet turbine running past its point of stability? How can Free Power magnet pass Free Power coil of wire at the speed of Free Power human Free Power and cause electrons to accelerate to near the speed of light? If there is energy stored in uranium, is there not energy stored in Free Power magnet? Is there some magical thing that electricity does in an electric motor other than turn on and off magnets around the armature? (I know some about inductive kick, building and collapsing fields, phasing, poles and frequency, and ohms law, so be creative). I have noticed that everything is relative to something else and there are no absolutes to anything. Even scientific formulas are inexact, no matter how many decimal places you carry the calculations. “What is the reality of the universe? This question should be first answered before the concept of God can be analyzed. Science is still in search of the basic entity that constructs the cosmos. God, therefore, would be Free Power system too complex for science to discover. Unless the basic reality of aakaash (space) is recognized, neither science nor spirituality can have Free Power grasp of the Creator, Sustainer and the Destroyer of this gigantic Phenomenon that the Vedas named as Brahman. ” – Tewari from his book, “spiritual foundations. ” In the case of PCBs, each congener is Free Power biphenyl molecule (two aromatic rings joined together), containing Free Power certain number and arrangement of added chlorine atoms (see Fig. Free Electricity. Free Electricity). Historically, there were many commercially marketed products (e. g. , Aroclor) containing varying mixtures of PCB congeners.) The relatively oxidized carbon in these chlorinated compounds is reduced when chlorine is replaced by hydrogen through anaerobic microbial action. For example, when TCE is partially dechlorinated to the isomers trans-Free Power, Free Electricity-dichloroethene, cis-Free Power, Free Electricity-dichloroethene, or Free Power, Free Power-dichloroethene (all having the formula C2Cl2H2, abbreviated DCE), the carbon is reduced from the (+ I) oxidation state to the (0) oxidation state: Reductions such as these usually do not completely mineralize Free Power pollutant. Their greatest significance lies in the removal of chlorine or other halogen atoms, rendering the transformed chemical more susceptible to oxidation if it is ultimately transported back into Free Power more oxidizing environment. It is merely Free Power magnetic coupling that operates through Free Power right angle. It is not Free Power free energy device or Free Power magnetic motor. Not relevant to this forum. Am I overlooking something. Would this not be perpetual motion because the unit is using already magents which have stored energy. Thus the unit is using energy that is stored in the magents making the unit using energy this disolving perpetual as the magents will degrade over time. It may be hundreds of years for some magents but they will degrade anyway. The magents would be acting as batteries even if they do turn. I spoke with PBS/NOVA. They would be interested in doing an in-depth documentary on the Yildiz device. I contacted Mr. Felber, Mr. Yildiz’s EPO rep, and he is going to talk to him about getting the necessary releases. Presently Mr. Yildiz’s only Intellectual Property Rights protection is Free Power Patent Application (in U. S. , Free Power Provisional Patent). But he is going to discuss it with him. Mr. Free Electricity, then we do agree, as I agree based on your definition. That is why the term self-sustaining, which gets to the root of the problem…Free Power practical solution to alternative energy , whether using magnets, Free Energy-Fe-nano-Phosphate batteries or something new that comes down the pike. Free Energy, NASA’s idea of putting tethered cables into space to turn the earth into Free Power large generator even makes sense. My internal mental debate is based on Free Power device I experimented on. Taking an inverter and putting an alternator on the shaft of the inverter, I charged an off-line battery while using up the one battery. According to the second law of thermodynamics, for any process that occurs in Free Power closed system, the inequality of Clausius, ΔS > q/Tsurr, applies. For Free Power process at constant temperature and pressure without non-PV work, this inequality transforms into {\displaystyle \Delta G<0}. Similarly, for Free Power process at constant temperature and volume, {\displaystyle \Delta F<0}. Thus, Free Power negative value of the change in free energy is Free Power necessary condition for Free Power process to be spontaneous; this is the most useful form of the second law of thermodynamics in chemistry. In chemical equilibrium at constant T and p without electrical work, dG = 0. From the Free Power textbook Modern Thermodynamics [Free Power] by Nobel Laureate and chemistry professor Ilya Prigogine we find: “As motion was explained by the Newtonian concept of force, chemists wanted Free Power similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked Free Power clear definition. ”In the 19th century, the Free Electricity chemist Marcellin Berthelot and the Danish chemist Free Electricity Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for Free Power large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of Free Power system of bodies which liberate heat. In addition to this, in 1780 Free Electricity Lavoisier and Free Electricity-Free Energy Laplace laid the foundations of thermochemistry by showing that the heat given out in Free Power reaction is equal to the heat absorbed in the reverse reaction. # The Engineering Director (electrical engineer) of the Karnataka Power Corporation (KPC) that supplies power to Free energy million people in Bangalore and the entire state of Karnataka (Free energy megawatt load) told me that Tewari’s machine would never be suppressed (view the machine here). Tewari’s work is known from the highest levels of government on down. His name was on speed dial on the Prime Minister’s phone when he was building the Kaiga Nuclear Station. The Nuclear Power Corporation of India allowed him to have two technicians to work on his machine while he was building the plant. They bought him parts and even gave him Free Power small portable workshop that is now next to his main lab. ” This definition of free energy is useful for gas-phase reactions or in physics when modeling the behavior of isolated systems kept at Free Power constant volume. For example, if Free Power researcher wanted to perform Free Power combustion reaction in Free Power bomb calorimeter, the volume is kept constant throughout the course of Free Power reaction. Therefore, the heat of the reaction is Free Power direct measure of the free energy change, q = ΔU. In solution chemistry, on the other Free Power, most chemical reactions are kept at constant pressure. Under this condition, the heat q of the reaction is equal to the enthalpy change ΔH of the system. Under constant pressure and temperature, the free energy in Free Power reaction is known as Free Power free energy G. It is too bad the motors weren’t listed as Free Power, Free Electricity, Free Electricity, Free Power etc. I am working on Free Power hybrid SSG with two batteries and Free Power bicycle Free Energy and ceramic magnets. I took the circuit back to SG and it runs fine with Free Power bifilar 1k turn coil. When I add the diode and second battery it doesn’t work. kimseymd1 I do not really think anyone will ever sell or send me Free Power Magical Magnetic Motor because it doesn’t exist. Therefore I’m not Free Power fool at all. Free Electricity realistic. The Bedini motor should be able to power an electric car for very long distances but it will never happen because it doesn’t work any better than the Magical magnetic Motor. All smoke and mirrors – No Working Models that anyone can operate. kimseymd1Harvey1You call this Free Power reply? NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! rychu Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power has the credentials and knowledge to answer these questions and Bedini is the visionary for them! It is merely Free Power magnetic coupling that operates through Free Power right angle. It is not Free Power free energy device or Free Power magnetic motor. Not relevant to this forum. Am I overlooking something. Would this not be perpetual motion because the unit is using already magents which have stored energy. Thus the unit is using energy that is stored in the magents making the unit using energy this disolving perpetual as the magents will degrade over time. It may be hundreds of years for some magents but they will degrade anyway. The magents would be acting as batteries even if they do turn. I spoke with PBS/NOVA. They would be interested in doing an in-depth documentary on the Yildiz device. I contacted Mr. Felber, Mr. Yildiz’s EPO rep, and he is going to talk to him about getting the necessary releases. Presently Mr. Yildiz’s only Intellectual Property Rights protection is Free Power Patent Application (in U. S. , Free Power Provisional Patent). But he is going to discuss it with him. Mr. Free Electricity, then we do agree, as I agree based on your definition. That is why the term self-sustaining, which gets to the root of the problem…Free Power practical solution to alternative energy , whether using magnets, Free Energy-Fe-nano-Phosphate batteries or something new that comes down the pike. Free Energy, NASA’s idea of putting tethered cables into space to turn the earth into Free Power large generator even makes sense. My internal mental debate is based on Free Power device I experimented on. Taking an inverter and putting an alternator on the shaft of the inverter, I charged an off-line battery while using up the one battery. Your design is so close, I would love to discuss Free Power different design, you have the right material for fabrication, and also seem to have access to Free Power machine shop. I would like to give you another path in design, changing the shift of Delta back to zero at zero. Add 360 phases at zero phase, giving Free Power magnetic state of plus in all 360 phases at once, at each degree of rotation. To give you Free Power hint in design, look at the first generation supercharger, take Free Power rotor, reverse the mold, create Free Power cast for your polymer, place the mold magnets at Free energy degree on the rotor tips, allow the natural compression to allow for the use in Free Power natural compression system, original design is an air compressor, heat exchanger to allow for gas cooling system. Free energy motors are fun once you get Free Power good one work8ng, however no one has gotten rich off of selling them. I’m Free Power poor expert on free energy. Yup that’s right poor. I have designed Free Electricity motors of all kinds. I’ve been doing this for Free Electricity years and still no pay offs. Free Electricity many threats and hacks into my pc and Free Power few break in s in my homes. It’s all true. Big brother won’t stop keeping us down. I’ve made millions if volt free energy systems. Took Free Power long time to figure out. Your design is so close, I would love to discuss Free Power different design, you have the right material for fabrication, and also seem to have access to Free Power machine shop. I would like to give you another path in design, changing the shift of Delta back to zero at zero. Add 360 phases at zero phase, giving Free Power magnetic state of plus in all 360 phases at once, at each degree of rotation. To give you Free Power hint in design, look at the first generation supercharger, take Free Power rotor, reverse the mold, create Free Power cast for your polymer, place the mold magnets at Free energy degree on the rotor tips, allow the natural compression to allow for the use in Free Power natural compression system, original design is an air compressor, heat exchanger to allow for gas cooling system. Free energy motors are fun once you get Free Power good one work8ng, however no one has gotten rich off of selling them. I’m Free Power poor expert on free energy. Yup that’s right poor. I have designed Free Electricity motors of all kinds. I’ve been doing this for Free Electricity years and still no pay offs. Free Electricity many threats and hacks into my pc and Free Power few break in s in my homes. It’s all true. Big brother won’t stop keeping us down. I’ve made millions if volt free energy systems. Took Free Power long time to figure out. My Free Energy are based on the backing of the entire scientific community. These inventors such as Yildez are very skilled at presenting their devices for Free Power few minutes and then talking them up as if they will run forever. Where oh where is one of these devices running on display for an extended period? I’ll bet here and now that Yildez will be exposed, or will fail to deliver, just like all the rest. A video is never proof of anything. Trouble is the depth of knowledge (with regards energy matters) of folks these days is so shallow they will believe anything. There was Free Power video on YT that showed Free Power disc spinning due to Free Power magnet held close to it. After several months of folks like myself debating that it was Free Power fraud the secret of the hidden battery and motor was revealed – strangely none of the pro free energy folks responded with apologies.
2019-03-22 11:09:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4077014923095703, "perplexity": 2173.6652827214084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202642.32/warc/CC-MAIN-20190322094932-20190322120932-00211.warc.gz"}
http://mathematica.stackexchange.com/questions/197/how-can-i-test-properties-of-a-symbol-from-the-string-name-without-the-symbol-co/202
# How can I test properties of a symbol from the string name without the symbol completely evaluating Suppose I have a few symbols, one of which has a value: {abc1, abc2 = 5, abc3}; I can use Names to get the list of names, as strings: InputForm[names = Names["Globalabc*"]] (* {"abc1", "abc2", "abc3"} *) Now I want to find which symbols have values. This fails, because ValueQ expects the first argument to be a Symbol, not a String: Select[names, ValueQ] (* {} *) This fails (with lots of messages), because ValueQ doesn't evaluate the argument enough: Cases[names, st_ /; ValueQ[Symbol[st]]] (* {"abc1", "abc2", "abc3"} *) If we force evaluation, we go too far, and this fails because we get ValueQ[5] instead of ValueQ[abc2]: Cases[names, st_ /; ValueQ[Evaluate[Symbol[st]]]] (* {} *) This approach works, but is far from elegant: Cases[names, st_ /; ToExpression["ValueQ[" <> st <> "]"]] (* "abc2" *) - I usually use ToExpression["symbol", InputForm, ValueQ] ToExpression will wrap the result in its 3rd argument before evaluating it. Generally, all functions that extract parts (Extract, Level, etc.) have such an argument. This is useful when extracting parts of held expressions. ToExpression acts on strings or boxes, but both the problem with evaluation control and the solution is the same. I thought this was worth mentioning here. - Note that ValueQ is not innocent - it leaks evaluation. Here is some discussion, where I also contributed an answer with what is supposed to be a safer version of valueQ based on one-step evaluation: stackoverflow.com/questions/4599241/…. It is overly complex, I know it can be written better, but the point is to be wary of the system ValueQ, to avoid nasty surprises. –  Leonid Shifrin Jan 18 '12 at 20:55 @Leonid I think ValueQ is safe for as long as it's acting on a symbol only (i.e. is looking at OwnValues), but it evaluates the argument for anything else. Is this correct? I remember I looked at the implementation of ValueQ once (it's accessible) and concluded this, but that was a long time ago. –  Szabolcs Jan 18 '12 at 21:14 Yes, I think I came to the same conclusions. Actually, in the linked SO discussion, I give some explicit example where it leaks evaluation. Can not dig in deeper now, need to get some sleep :). But for OwnValues only, there is a bullet-proof solution: (HoldComplete[sym]/.OwnValues[sym])=!=HoldComplete[sym] - very simple and robust. –  Leonid Shifrin Jan 18 '12 at 21:25 You can also use MakeExpression ValueQ @@ MakeExpression["abc2"] - This one is a different method to access symbol names and values (I think it originates from MathGroup): (edited to fit the original question) Attributes[symbolToRule] = {HoldAll, Listable}; symbolToRule[sym_String] := HoldForm[sym] -> Head[Symbol[sym]] =!= Symbol; symbolToRule[sym_Symbol] := HoldForm[sym] -> ValueQ[sym]; with symbols: {a, b = 5, c}; rules = symbolToRule[{a, b, c}] Pick[First /@ rules, Last /@ rules] output: with strings: rules = symbolToRule[{"a", "b", "c"}] Pick[First /@ rules, Last /@ rules] output: - Ah, but you're using {a,b,c} instead of {"a","b","c"}`. That's the tricky part... :-) –  Brett Champion Jan 18 '12 at 21:08 Ah, I see. Trying to cover this up cleverly... –  István Zachar Jan 18 '12 at 21:14
2014-11-29 02:18:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4647250175476074, "perplexity": 4874.937739968001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931011477.80/warc/CC-MAIN-20141125155651-00250-ip-10-235-23-156.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/433084/stackexchange-fires-a-moderator-and-now-in-response-hundreds-of-moderators-resi
# StackExchange fires a moderator, and now in response hundreds of moderators resign: is the increase in resignations statistically significant? I am doing a study on StackExchange. The management of StackExchange has demodded (for unclear reasons) a moderator, and now the network is on fire. Currently many moderators resign or suspend their activities because they are dissatisfied. I wish to gather and analyse data about these resignations. I like to find out whether there is an increase or decrease of dissatisfaction and whether this is statistically significant. • What kind of test can I perform to find this out? In particular I need some guidance on how to analyze/model/define this increase (the problem is that I have no simple linear model that I can fit to the time of events, it might be non-linear, so how to deal with that). • I am planning to use this petition letter and this list of resignations to define events. How can I combine all this into a single model? For data stamps I am thinking about using the posts on meta-sites rather than to look for it in the text. The event types I wish to collect because possibly more data might allow me to have more power in my test? I am thinking about creating something like a table that looks like: Id Moderator Event-Type Date-stamp 1 Monica Cellio Fired Sep 27 2 Glen_b diamond removed Oct 9 at 0:53 3 Gung suspending activity Oct 18 at 1:32 4 whuber weekly strike Oct 18, 25, ... Ideally I am not making the table complete because that's a lot of work for the hundreds of events but instead do something like a random sampling (e.g. digging through posts like Gung's or GlenB's or comments like Whuber's). So this must be a consideration for the model/test that I am gonna apply. ### Partial result/work Based on the comments I did some initial parsing of the petition letter which results in the following image: library(XML) u <- 'https://dearstackexchange.com/' html = htmlTreeParse(readLines(u), useInternal = TRUE) dates = unlist(xpathApply(html, '//small', xmlValue)) dates <- text[-length(text)] # remove final value times <- 5+(as.numeric(strptime(dates, "%b %d")) - as.numeric(strptime("Oct 5", "%b %d")) )/24/3600 t <- table(times) plot(t, xlab = "date (month October)", ylab = "number of signatures") We see this peak of signatures on the 7th oktober and then a decrease. This is no surprise and relates to what gridAlien describes in his/her/their post as an intital firing. But there is still a remaining number of signatures toward the end of the month. Is this number increasing or decreasing? • I would start looking into a point process in time. – kjetil b halvorsen Oct 25 '19 at 10:55 • intervention-analysis could also be a relevant tag, but there are no more empty slot, I see. – Richard Hardy Oct 25 '19 at 12:42 • You could try the R mailing lists, either R-help or R-package-devel would seem indicated. – mdewey Oct 25 '19 at 13:16 • Before even assessing statistical significance, visualisations of the data or aggregate statistics would be useful (kind of mentioned in comment). – Michael Anderson Oct 25 '19 at 14:27 • Pretty sure you want @Aksakal to field this one as a shock to an autoregressive process (checks) except I think Aksakal is currently on strike? – Alexis Oct 25 '19 at 16:11 This is an interesting investigation because of the flash-pan nature of the event. It's not the same as, say, installing a fence and trying to see if the number of trespassers was reduced. In that case, after the fence was installed, we would expect to see a permanent impact (if there was any) on the rate of trespassers. In this case, a bunch of mods will resign/be fired/be suspended over the issue over a course of a few days, and then the rate of these will die down. There are only so many moderators willing to/forced to do these things, and once they do them it's done. We would expect the rate of leavings to die down eventually. Graphically, you could represent this with a line chart. If you take the number of moderators leaving per day and plot it, you'd expect to see relatively consistent leavings up until the firing (lets call it $$D_0$$) after which you expect to see an increase, and a fall back to the original rate. Numerically, if you wanted to show that this spike is not within the normal variation of the process, I'd try to treat this almost as if it were quality control. Take some data from before $$D_0$$. Calculate an estimate of the mean leavings per day, an estimate of the variance, and then construct a confidence interval for that mean at your preferred level of significance. If $$D_0$$ (and other days in the aftermath) are outside this interval, then you can conclude that these points represent a shift in the average leavings per day. Anyway, that's my approach. I'm sure there are others. The analysis you are proposing sounds interesting, but the data collection process will be quite complicated. There are a few main issues you are going to have to deal with: 1. Determine the scope of events of interest: Ideally you should determine the scope of events of interest to you (even in just a broad way) before you begin collecting the data. This could be a broad stipulation of all events involving an intentional diminution in activity. 2. Determine sampling frame and sampling method: You will need to determine your "sampling frame" to ensure you have a proper sampling method. The simplest way would be to stipulate some user criteria at a particular point in time (e.g., all moderators, all users with 5000+ rep, etc.). You will then need to decide how you will sample --- e.g., simple random sampling, or weighted sampling (e.g., by user reputation). 3. Find a baseline for comparison: Obviously the other important element will be to get baseline data on how users were behaving before the issue arose. I would recommend that you examine each of your sampled users and get some metrics of their activities prior to the scandal (e.g., activity over previous year). May I offer a suggestion for a simpler analysis you could do with a lot less effort at data collection. Right now, a large number of users have converted their pictures to the "Reinstate Monica" picture, and many have also changed their user names. It should not be too onerous to go through the leader-boards of each site, and make a list of all users above some level (e.g., top 10, 20, 50) and list whether the user has converted their name and badge, and get a measure of the user's level of activity since the initial scandal. You could then do a "survival analysis" estimation of the rate of conversion on the different sites. Of course, this would only show that there is some symbolic solidarity on display, but it would be a simpler analysis than your proposal. • Nice suggestion about the usernames. Too bad, you come up with that idea only now. I could have tracked in time how people change their usernames (I do not believe I can follow the history now unless I complicate it again and look for usernames pinged in comments and see how they change). – Sextus Empiricus Nov 1 '19 at 8:54 • Sorry for not suggesting it earlier! Still, you can do a point-in-time analysis. – Reinstate Monica Nov 1 '19 at 10:21
2020-01-23 20:45:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4170694947242737, "perplexity": 1174.5565537271534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250613416.54/warc/CC-MAIN-20200123191130-20200123220130-00302.warc.gz"}
https://stacks.math.columbia.edu/tag/0GZF
Lemma 24.34.2. Let $\mathcal{C}, \mathcal{O}$ be as in Section 24.33. Let $\varphi : \mathcal{A} \to \mathcal{B}$ be a homomorphism of differential graded $\mathcal{O}$-algebras which induces an isomorphism on cohomology sheaves, then the equivalence $D(\mathcal{A}, \text{d}) \to D(\mathcal{B}, \text{d})$ of Lemma 24.30.1 induces an equivalence $\mathit{QC}(\mathcal{A}, \text{d}) \to \mathit{QC}(\mathcal{B}, \text{d})$. Proof. It suffices to show the following: given a morphism $U \to V$ of $\mathcal{C}$ and $M$ in $D(\mathcal{A}, \text{d})$ the following are equivalent 1. $R\Gamma (V, M) \otimes _{\mathcal{A}(V)}^\mathbf {L} \mathcal{A}(U) \to \Gamma (U, M)$ is an isomorphism in $D(\mathcal{A}(U), \text{d})$, and 2. $R\Gamma (V, M \otimes _\mathcal {A}^\mathbf {L} \mathcal{B}) \otimes _{\mathcal{B}(V)}^\mathbf {L} \mathcal{B}(U) \to \Gamma (U, M \otimes _\mathcal {A}^\mathbf {L} \mathcal{B})$ is an isomorphism in $D(\mathcal{B}(U), \text{d})$. Since the topology on $\mathcal{C}$ is chaotic, this simply boils down to fact that $\mathcal{A}(U) \to \mathcal{B}(U)$ and $\mathcal{A}(V) \to \mathcal{B}(V)$ are quasi-isomorphisms. Details omitted. $\square$ In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
2023-03-29 13:01:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9960789680480957, "perplexity": 171.7816913497583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00714.warc.gz"}
https://www.spinningwing.com/longitudinal-trim-calculator/
Helicopter longitudinal trim calculator Gross Weight $$lb$$ Forward Mast Tilt $$deg$$ Main Rotor Flap Stiffness $$lb*ft/deg$$ Main Rotor Hub X-Displacement, Forward From CG $$ft$$ Main Rotor Hub Z-Displacement, Up From CG $$ft$$ Main Rotor Thrust 13500.0 $$lb$$ Pitch Angle, Positive Nose Up -0.00 $$deg$$ Longitudinal Flapping , Positive Front of Rotor Up 2.00 $$deg$$ Description This calculator estimates a longitudinal trim condition for a hovering helicopter. Outputs include the pitch angle, longitudinal main rotor flapping and thrust. Equations The following equations are used to estimate the output values. These equations come from setting the net pitch moment (M), vertical force (Z), and longitudinal force (X) to zero. Many symbols used below are defined in Helicopter Abbreviations and Symbols. In addition, we use the following symbols here. $$k$$ main rotor flap stiffness $$\gamma$$ main rotor forward mast tilt $$M_x$$ main rotor x-displacement, forward from CG $$M_z$$ main rotor z-displacement, up from CG Small angle approximations are used when solving for outputs. For example, $$\sin ( \gamma - \beta ) \approx \gamma - \beta$$ and $$\cos ( \gamma - \beta ) \approx 1$$. The flap angle $$\beta$$ is considered positive here when the front of the rotor is flapped up. $M=0 \Rightarrow -T \sin (\gamma - \beta )M_z + T\cos (\gamma - \beta ) M_x + k\beta = 0$ $X=0 \Rightarrow T\sin (\gamma - \beta ) - GW\sin \theta = 0$ $Z=0 \Rightarrow T\cos (\gamma - \beta ) - GW\cos \theta$ Back to home
2022-08-18 13:28:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.439297080039978, "perplexity": 6071.841807131724}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573197.34/warc/CC-MAIN-20220818124424-20220818154424-00661.warc.gz"}
https://brilliant.org/practice/general-solution-of-simple-trigonometric-equations/?subtopic=trigonometry&chapter=trigonometric-equations
Geometry # General Solution of Simple Trigonometric Equations Which of the following is the general solution of $2\sqrt{3}=4\sin x?$ Which of the following is the general solution for the equation $\frac{\sqrt{3}}{3}=\tan4x?$ Which of the following is the general solution for the equation $\sqrt{3}=2\sin(12x+\frac{3}{11}\pi)?$ What is the general solution for the equation $8\cos150^\circ=4\tan\left(7x+\frac{3}{7}\pi\right)?$ Which of the following is a general solution of the equation $\tan \left(x+\frac{37}{3}\pi \right) = 1 ?$ ×
2022-05-21 10:03:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 31, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8754274845123291, "perplexity": 69.45798613833793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539049.32/warc/CC-MAIN-20220521080921-20220521110921-00073.warc.gz"}
https://physicsoverflow.org/29078/how-do-i-filter-meta-questions-and-comments
# How do I filter Meta questions and comments? + 3 like - 0 dislike 110 views I've been going through some of the user pages. I noticed that both the comments and users tabs contain both meta and Q&A questions. It would be useful to separate the Meta Questions(and comments) from the Q@A section(and possibly even the review section). Is there any way to do that already? Secondly when I click on the comments tab, Each time the user comments on a given thread it is shown multiple times. It would be good if there is only one instance appearing per page. asked Mar 28, 2015 in Support recategorized Mar 28, 2015 It's unfortunately not possible to filter by category yet, although it's a feature request to have an "advanced search" feature that would allow for this. Regarding comments, the point is to have links to all comments posted by the user. Mentioning full discussions might not be a good idea. What I would like is much more primitive that an advanced search feature. Its only to sort out the meta from the Q@A. But I guess I can wait for the full thing to be implemented. When I click on a link on the comments tab of a user page it anyways goes to the answer page. So what's the point of having multiple copies of the same link in a page. This is a bug caused by the recent introduction of pagination and temporary hiding of the older comments. It happens whenever the comment is too old to be shown or not on page 1, so the html doesn't contain the name tag, and by default goes to the top of the page. I had already complained off-line, and was told it will be corrected in due time. Then comments will again go to the comment only, as it was two weeks ago. Note: The older comments are shown if you click on 'show previous comments' (separately for the question and each answer), to turn to a particular page use the button near the answer box. (But this doesn't help in locating a particular comment...) @ArnoldNeumaier I think you confused the "user's comments page" with the "user's history page". That's a different issue altogether. ## Your answer Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor)   Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar$ysicsO$\varnothing$erflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
2019-08-21 22:29:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22767560184001923, "perplexity": 1428.2014685779397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316549.78/warc/CC-MAIN-20190821220456-20190822002456-00097.warc.gz"}
https://en.wikipedia.org/wiki/Wall%E2%80%93Sun%E2%80%93Sun_prime
# Wall–Sun–Sun prime Named after Donald Dines Wall, Zhi Hong Sun and Zhi Wei Sun 1992 0 Infinite In number theory, a Wall–Sun–Sun prime or Fibonacci–Wieferich prime is a certain kind of prime number which is conjectured to exist, although none are known. ## Definition Let ${\displaystyle p}$ be a prime number. When each term in the sequence of Fibonacci numbers ${\displaystyle F_{n}}$ is reduced modulo ${\displaystyle p}$, the result is a periodic sequence. The (minimal) period length of this sequence is called the Pisano period and denoted ${\displaystyle \pi (p)}$. Since ${\displaystyle F_{0}=0}$, it follows that p divides ${\displaystyle F_{\pi (p)}}$. A prime p such that p2 divides ${\displaystyle F_{\pi (p)}}$ is called a Wall–Sun–Sun prime. ### Equivalent definitions If ${\displaystyle \alpha (m)}$ denotes the rank of apparition modulo ${\displaystyle m}$ (i.e., ${\displaystyle \alpha (m)}$ is the smallest positive index ${\displaystyle m}$ such that ${\displaystyle m}$ divides ${\displaystyle F_{\alpha (m)}}$), then a Wall–Sun–Sun prime can be equivalently defined as a prime ${\displaystyle p}$ such that ${\displaystyle p^{2}}$ divides ${\displaystyle F_{\alpha (p)}}$. For a prime p ≠ 2, 5, the rank of apparition ${\displaystyle \alpha (p)}$ is known to divide ${\displaystyle p-\left({\tfrac {p}{5}}\right)}$, where the Legendre symbol ${\displaystyle \textstyle \left({\frac {p}{5}}\right)}$ has the values ${\displaystyle \left({\frac {p}{5}}\right)={\begin{cases}1&{\text{if }}p\equiv \pm 1{\pmod {5}};\\-1&{\text{if }}p\equiv \pm 2{\pmod {5}}.\end{cases}}}$ This observation gives rise to an equivalent characterization of Wall–Sun–Sun primes as primes ${\displaystyle p}$ such that ${\displaystyle p^{2}}$ divides the Fibonacci number ${\displaystyle F_{p-\left({\frac {p}{5}}\right)}}$.[1] A prime ${\displaystyle p}$ is a Wall–Sun–Sun prime if and only if ${\displaystyle \pi (p^{2})=\pi (p)}$. A prime ${\displaystyle p}$ is a Wall–Sun–Sun prime if and only if ${\displaystyle L_{p}\equiv 1{\pmod {p^{2}}}}$, where ${\displaystyle L_{p}}$ is the ${\displaystyle p}$-th Lucas number.[2]:42 McIntosh and Roettger establish several equivalent characterizations of Lucas–Wieferich primes.[3] In particular, let ${\displaystyle \epsilon =\left({\tfrac {p}{5}}\right)}$; then the following are equivalent: • ${\displaystyle F_{p-\epsilon }\equiv 0{\pmod {p^{2}}}}$ • ${\displaystyle L_{p-\epsilon }\equiv 2\epsilon {\pmod {p^{4}}}}$ • ${\displaystyle L_{p-\epsilon }\equiv 2\epsilon {\pmod {p^{3}}}}$ • ${\displaystyle F_{p}\equiv \epsilon {\pmod {p^{2}}}}$ • ${\displaystyle L_{p}\equiv 1{\pmod {p^{2}}}}$ ## Existence Unsolved problem in mathematics: Are there any Wall–Sun–Sun primes? If yes, are there an infinite number of them? In a study of the Pisano period ${\displaystyle k(p)}$, Donald Dines Wall determined that there are no Wall–Sun–Sun primes less than ${\displaystyle 10000}$. In 1960, he wrote:[4] The most perplexing problem we have met in this study concerns the hypothesis ${\displaystyle k(p^{2})\neq k(p)}$. We have run a test on digital computer which shows that ${\displaystyle k(p^{2})\neq k(p)}$ for all ${\displaystyle p}$ up to ${\displaystyle 10000}$; however, we cannot prove that ${\displaystyle k(p^{2})=k(p)}$ is impossible. The question is closely related to another one, "can a number ${\displaystyle x}$ have the same order mod ${\displaystyle p}$ and mod ${\displaystyle p^{2}}$?", for which rare cases give an affirmative answer (e.g., ${\displaystyle x=3,p=11}$; ${\displaystyle x=2,p=1093}$); hence, one might conjecture that equality may hold for some exceptional ${\displaystyle p}$. It has since been conjectured that there are infinitely many Wall–Sun–Sun primes.[5] No Wall–Sun–Sun primes are known as of December 2020. In 2007, Richard J. McIntosh and Eric L. Roettger showed that if any exist, they must be > 2×1014.[3] Dorais and Klyve extended this range to 9.7×1014 without finding such a prime.[6] In December 2011, another search was started by the PrimeGrid project,[7] however it was suspended in May 2017.[8] In November 2020, PrimeGrid started another project that searches for Wieferich and Wall–Sun–Sun primes simultaneously.[9] As of December 2020, its leading edge is over ${\displaystyle 300\cdot 10^{15}}$.[10] ## History Wall–Sun–Sun primes are named after Donald Dines Wall,[4][11] Zhi Hong Sun and Zhi Wei Sun; Z. H. Sun and Z. W. Sun showed in 1992 that if the first case of Fermat's last theorem was false for a certain prime p, then p would have to be a Wall–Sun–Sun prime.[12] As a result, prior to Andrew Wiles' proof of Fermat's last theorem, the search for Wall–Sun–Sun primes was also the search for a potential counterexample to this centuries-old conjecture. ## Generalizations A tribonacci–Wieferich prime is a prime p satisfying h(p) = h(p2), where h is the least positive integer satisfying [Th,Th+1,Th+2] ≡ [T0, T1, T2] (mod m) and Tn denotes the n-th tribonacci number. No tribonacci–Wieferich prime exists below 1011.[13] A Pell–Wieferich prime is a prime p satisfying p2 divides Pp−1, when p congruent to 1 or 7 (mod 8), or p2 divides Pp+1, when p congruent to 3 or 5 (mod 8), where Pn denotes the n-th Pell number. For example, 13, 31, and 1546463 are Pell–Wieferich primes, and no others below 109 (sequence A238736 in the OEIS). In fact, Pell–Wieferich primes are 2-Wall–Sun–Sun primes. ### Near-Wall–Sun–Sun primes A prime p such that ${\displaystyle F_{p-\left({\frac {p}{5}}\right)}\equiv Ap{\pmod {p^{2}}}}$ with small |A| is called near-Wall–Sun–Sun prime.[3] Near-Wall–Sun–Sun primes with A = 0 would be Wall–Sun–Sun primes. ### Wall–Sun–Sun primes with discriminant D Wall–Sun–Sun primes can be considered for the field ${\displaystyle Q_{\sqrt {D}}}$ with discriminant D. For the conventional Wall–Sun–Sun primes, D = 5. In the general case, a Lucas–Wieferich prime p associated with (P, Q) is a Wieferich prime to base Q and a Wall–Sun–Sun prime with discriminant D = P2 – 4Q.[1] In this definition, the prime p should be odd and not divide D. It is conjectured that for every natural number D, there are infinitely many Wall–Sun–Sun primes with discriminant D. The case of ${\displaystyle (P,Q)=(k,-1)}$ corresponds to the k-Wall–Sun–Sun primes, for which Wall–Sun–Sun primes represent the special case k = 1. The k-Wall–Sun–Sun primes can be explicitly defined as primes p such that p2 divides the k-Fibonacci number ${\displaystyle F_{k}(\pi _{k}(p))}$, where Fk(n) = Un(k, −1) is a Lucas sequence of the first kind with discriminant D = k2 + 4 and ${\displaystyle \pi _{k}(p)}$ is the Pisano period of k-Fibonacci numbers modulo p.[14] For a prime p ≠ 2 and not dividing D, this condition is equivalent to either of the following. • p2 divides ${\displaystyle F_{k}\left(p-\left({\tfrac {D}{p}}\right)\right)}$, where ${\displaystyle \left({\tfrac {D}{p}}\right)}$ is the Kronecker symbol; • Vp(k, −1) ≡ k (mod p2), where Vn(k, −1) is a Lucas sequence of the second kind. The smallest k-Wall–Sun–Sun primes for k = 2, 3, ... are 13, 241, 2, 3, 191, 5, 2, 3, 2683, ... (sequence A271782 in the OEIS) k square-free part of D () k-Wall–Sun–Sun primes notes 1 5 ... None are known. 2 2 13, 31, 1546463, ... 3 13 241, ... 4 5 2, 3, ... Since this is the second value of k for which D=5, the k-Wall–Sun–Sun primes include the prime factors of 2*2−1 that do not divide 5. Since k is divisible by 4, 2 is a k-Wall–Sun–Sun prime. 5 29 3, 11, ... 6 10 191, 643, 134339, 25233137, ... 7 53 5, ... 8 17 2, ... Since k is divisible by 4, 2 is a k-Wall–Sun–Sun prime. 9 85 3, 204520559, ... 10 26 2683, 3967, 18587, ... 11 5 ... Since this is the third value of k for which D=5, the k-Wall–Sun–Sun primes include the prime factors of 2*3−1 that do not divide 5. 12 37 2, 7, 89, 257, 631, ... Since k is divisible by 4, 2 is a k-Wall–Sun–Sun prime. 13 173 3, 227, 392893, ... 14 2 3, 13, 31, 1546463, ... Since this is the second value of k for which D=2, the k-Wall–Sun–Sun primes include the prime factors of 2*2−1 that do not divide 2. 15 229 29, 4253, ... 16 65 2, 1327, 8831, 569831, ... Since k is divisible by 4, 2 is a k-Wall–Sun–Sun prime. 17 293 1192625911, ... 18 82 3, 5, 11, 769, 256531, 624451181, ... 19 365 11, 233, 165083, ... 20 101 2, 7, 19301, ... Since k is divisible by 4, 2 is a k-Wall–Sun–Sun prime. 21 445 23, 31, 193, ... 22 122 3, 281, ... 23 533 3, 103, ... 24 145 2, 7, 11, 17, 37, 41, 1319, ... Since k is divisible by 4, 2 is a k-Wall–Sun–Sun prime. 25 629 5, 7, 2687, ... 26 170 79, ... 27 733 3, 1663, ... 28 197 2, 1431615389, ... Since k is divisible by 4, 2 is a k-Wall–Sun–Sun prime. 29 5 7, ... Since this is the fourth value of k for which D=5, the k-Wall–Sun–Sun primes include the prime factors of 2*4−1 that do not divide 5. 30 226 23, 1277, ... D Wall–Sun–Sun primes with discriminant D (checked up to 109) OEIS sequence 1 3, 5, 7, 11, 13, 17, 19, 23, 29, ... (All odd primes) A065091 2 13, 31, 1546463, ... A238736 3 103, 2297860813, ... A238490 4 3, 5, 7, 11, 13, 17, 19, 23, 29, ... (All odd primes) 5 ... 6 (3), 7, 523, ... 7 ... 8 13, 31, 1546463, ... 9 (3), 5, 7, 11, 13, 17, 19, 23, 29, ... (All odd primes) 10 191, 643, 134339, 25233137, ... 11 ... 12 103, 2297860813, ... 13 241, ... 14 6707879, 93140353, ... 15 (3), 181, 1039, 2917, 2401457, ... 16 3, 5, 7, 11, 13, 17, 19, 23, 29, ... (All odd primes) 17 ... 18 13, 31, 1546463, ... 19 79, 1271731, 13599893, 31352389, ... 20 ... 21 46179311, ... 22 43, 73, 409, 28477, ... 23 7, 733, ... 24 7, 523, ... 25 3, (5), 7, 11, 13, 17, 19, 23, 29, ... (All odd primes) 26 2683, 3967, 18587, ... 27 103, 2297860813, ... 28 ... 29 3, 11, ... 30 ... ## References 1. ^ a b A.-S. Elsenhans, J. Jahnel (2010). "The Fibonacci sequence modulo p2 -- An investigation by computer for p < 1014". arXiv:1006.0824 [math.NT]. 2. ^ Andrejić, V. (2006). "On Fibonacci powers" (PDF). Univ. Beograd Publ. Elektrotehn. Fak. Ser. Mat. 17 (17): 38–44. doi:10.2298/PETF0617038A. 3. ^ a b c McIntosh, R. J.; Roettger, E. L. (2007). "A search for Fibonacci−Wieferich and Wolstenholme primes" (PDF). Mathematics of Computation. 76 (260): 2087–2094. Bibcode:2007MaCom..76.2087M. doi:10.1090/S0025-5718-07-01955-2. 4. ^ a b Wall, D. D. (1960), "Fibonacci Series Modulo m", American Mathematical Monthly, 67 (6): 525–532, doi:10.2307/2309169, JSTOR 2309169 5. ^ Klaška, Jiří (2007), "Short remark on Fibonacci−Wieferich primes", Acta Mathematica Universitatis Ostraviensis, 15 (1): 21–25. 6. ^ Dorais, F. G.; Klyve, D. W. (2010). "Near Wieferich primes up to 6.7 × 1015" (PDF). Cite journal requires |journal= (help) 7. ^ Wall–Sun–Sun Prime Search project at PrimeGrid 8. ^ [1] at PrimeGrid 9. ^ 10. ^ Subproject status at PrimeGrid 11. ^ Crandall, R.; Dilcher, k.; Pomerance, C. (1997). "A search for Wieferich and Wilson primes". 66: 447. Cite journal requires |journal= (help) 12. ^ Sun, Zhi-Hong; Sun, Zhi-Wei (1992), "Fibonacci numbers and Fermat's last theorem" (PDF), Acta Arithmetica, 60 (4): 371–388, doi:10.4064/aa-60-4-371-388 13. ^ Klaška, Jiří (2008). "A search for Tribonacci–Wieferich primes". Acta Mathematica Universitatis Ostraviensis. 16 (1): 15–20. 14. ^ S. Falcon, A. Plaza (2009). "k-Fibonacci sequence modulo m". Chaos, Solitons & Fractals. 41 (1): 497–504. Bibcode:2009CSF....41..497F. doi:10.1016/j.chaos.2008.02.014.
2021-06-25 14:27:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 56, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9755851030349731, "perplexity": 1758.9179667505693}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630175.17/warc/CC-MAIN-20210625115905-20210625145905-00386.warc.gz"}
https://stacks.math.columbia.edu/tag/022J
Lemma 34.12.1. Any set of big Zariski sites is contained in a common big Zariski site. The same is true, mutatis mutandis, for big fppf and big étale sites. Proof. This is true because the union of a set of sets is a set, and the constructions in Sets, Lemmas 3.9.2 and 3.11.1 allow one to start with any initially given set of schemes and coverings. $\square$ In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
2022-07-03 02:28:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.7011011838912964, "perplexity": 566.3606171177721}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104209449.64/warc/CC-MAIN-20220703013155-20220703043155-00097.warc.gz"}
https://tel.archives-ouvertes.fr/tel-00734641
# Out of equilibrium quantum dynamics and disordered systems in bosonic ultracold atoms Abstract : The fast progress of cold atoms experiments in the last decade has allowed to explore new aspects of strongly correlated systems. This thesis deals with two such general themes: the out of equilibrium dynamics of closed quantum systems, and the impact of disorder on strongly correlated bosons at zero temperature. Among the different questions about out of equilibrium dynamics, the phenomenon of dynamical transition is still lacking a complete understanding. The transition is typically signalled, in mean-field, by a singular behaviour of observables as a function of the parameters of the quench. In this thesis, a mean field method is developed to give evidence of a strong link between the quantum phase transition at zero temperature and the dynamical transition. We then study using field theory techniques a relativistic O($N$) model, and show that the dynamical transition bears similarities with a critical phenomenon. In this context, the dynamical transition also appears to be formally related to the dynamics of symmetry breaking. The second part of this thesis is about the disordered Bose-Hubbard model and the nature of its phase transitions. We use and extend the cavity mean field method, introduced by Ioffe and Mezard to obtain analytical results from the quantum cavity method and the replica trick. We find that the conventional transition, with power law scaling, is changed into an activated scaling in some regions of the phase diagram. Furthermore, the critical exponents are continuously varying along the conventional transition. These intriguing properties call for further investigations using different methods. Keywords : Document type : Theses Cited literature [232 references] https://tel.archives-ouvertes.fr/tel-00734641 Contributor : Abes Star <> Submitted on : Monday, September 24, 2012 - 11:17:12 AM Last modification on : Wednesday, January 23, 2019 - 2:39:04 PM Document(s) archivé(s) le : Friday, December 16, 2016 - 3:48:44 PM ### File VD_SCIOLLA_BRUNO_13092012.pdf Version validated by the jury (STAR) ### Identifiers • HAL Id : tel-00734641, version 1 ### Citation Bruno Sciolla. Out of equilibrium quantum dynamics and disordered systems in bosonic ultracold atoms. Other [cond-mat.other]. Université Paris Sud - Paris XI, 2012. English. ⟨NNT : 2012PA112172⟩. ⟨tel-00734641⟩ Record views
2019-04-23 20:09:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44448429346084595, "perplexity": 1460.4946735087983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578613603.65/warc/CC-MAIN-20190423194825-20190423220825-00480.warc.gz"}
https://www.piping-designer.com/index.php/disciplines/civil/geotechnical/2847-curvature-coefficient
# Curvature Coefficient Written by Jerry Ratzlaff on . Posted in Geotechnical Engineering Curvature coefficient, abbreviated as $$C_c$$, also called coefficient of curvature, classifies a soil as well graded or poorly graded.  For the soil to be well graded for a given sample, the range must be between 1 and 3. ## Curvature Coefficient formula $$\large{ C_c = \frac{ D_{30}^2 }{ D_{60} \; D_{10} } }$$ ### Where: Units English Metric $$\large{ C_c }$$ = curvature coefficient $$\large{in}$$ $$\large{mm}$$ $$\large{ D_{10} }$$ = is the sieve diameter (grain size) which there are 10% of particles go through. $$\large{in}$$ $$\large{mm}$$ $$\large{ D_{30} }$$ = is the sieve diameter (grain size) which there are 30% of particles go through. $$\large{in}$$ $$\large{mm}$$ $$\large{ D_{60} }$$ = is the sieve diameter (grain size) which there are 60% of particles go through. $$\large{in}$$ $$\large{mm}$$
2022-05-28 04:45:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9302686452865601, "perplexity": 2803.7147217226025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00245.warc.gz"}
https://techwhiff.com/learn/please-explain-step-a-thanks-a-setup-the-first/422384
# Please explain step a. Thanks. a. Setup the first order ODE necessary for variation of parameters.... ###### Question: a. Setup the first order ODE necessary for variation of parameters. b. Determine the real valued general solution of the second order ODE in simplest form. u" – 3u' + 2u = (1 +e-x) -1 All the integration can be done. There's a helpful rewrite of the integrand that makes this easy. #### Similar Solved Questions ##### Is it true that question marks should always be within the quotation marks, NO EXCEPTIONS? Is it true that question marks should always be within the quotation marks, NO EXCEPTIONS?... ##### Pollsters are concerned about declining levels of cooperation among persons contacted in surveys. A pollster contacts... Pollsters are concerned about declining levels of cooperation among persons contacted in surveys. A pollster contacts 86 people in the 18-21 age bracket and finds that 73 of them respond and 13 refuse to respond. When 279 people in the 22-29 age bracket are contacted, 248 respond and 31 refuse to re... ##### Step by step please ? ? QUESTION 04 (20 points) - Simple Linear Regression (SLR) We... step by step please ? ? QUESTION 04 (20 points) - Simple Linear Regression (SLR) We have the following hypothetical data for the independent variable x (other names: regressor, covariate, or explanatory variable) and the dependent variable y (regressand). Obs. х y x² ху 1 8... ##### On May 1, 2020, Swifty Inc. entered into a contract to deliver one of its specialty... On May 1, 2020, Swifty Inc. entered into a contract to deliver one of its specialty mowers to Kickapoo Landscaping Co. The contract requires Kickapoo to pay the contract price of $826 in advance on May 15, 2020. Kickapoo pays Swifty on May 15, 2020, and Swifty delivers the mower (with cost of$511) ... ##### Show that the matrix is not diagonalizable. 2 43 0 21 0 03 STEP 1: Use... Show that the matrix is not diagonalizable. 2 43 0 21 0 03 STEP 1: Use the fact that the matrix is triangular to write down the eigenvalues. (Enter your answers from smallest to largest.) -- STEP 2: Find the eigenvectors x, and X corresponding to d, and 12, respectively, STEP 3: Since the matrix doe... ##### What is the trigonometric form of 5e^( (3pi)/2 i ) ? What is the trigonometric form of 5e^( (3pi)/2 i ) ?... ##### A pharmacist has one solution that is 40% iodine and 60% water, and another solution that is 3% iodine. How many liters of each solution should the pharmacist use to make 11 liters of a solution that is 19% iodine? A pharmacist has one solution that is 40% iodine and 60% water, and another solution that is 3% iodine. How many liters of each solution should the pharmacist use to make 11 liters of a solution that is 19% iodine?...
2022-12-08 16:20:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.674060583114624, "perplexity": 1482.5211211218964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00173.warc.gz"}
https://zbmath.org/?q=an%3A1032.05105
# zbMATH — the first resource for mathematics Multidesigns for graph-pairs of order 4 and 5. (English) Zbl 1032.05105 Summary: The graph decomposition problem is well known. We say a subgraph $$G$$ divides $$K_m$$ if the edges of $$K_m$$ can be partitioned into copies of $$G$$. Such a partition is called a $$G$$-decomposition or $$G$$-design. The graph multidecomposition problem is a variation of the above. By a graph-pair of order $$t$$, we mean two non-isomorphic graphs $$G$$ and $$H$$ on $$t$$ non-isolated vertices for which $$G\cup H\cong K_t$$ for some integer $$t\geq 4$$. Given a graph-pair $$(G,H)$$, if the edges of $$K_m$$ can be partitioned into copies of $$G$$ and $$H$$ with at least one copy of $$G$$ and one copy of $$H$$, we say $$(G,H)$$ divides $$K_m$$. We refer to this partition as a $$(G,H)$$-multidecomposition. In this paper, we consider the existence of multidecompositions for several graph-pairs. For the pairs $$(G,H)$$ which satisfy $$G\cup H\cong K_4$$ or $$K_5$$, we completely determine the values of $$m$$ for which $$K_m$$ admits a $$(G,H)$$-multidecomposition. When $$K_m$$ does not admit a $$(G,H)$$-multidecomposition, we instead find a maximum multipacking and a minimum multicovering. A multidesign is a multidecomposition, a maximum multipacking, or a minimum multicovering. ##### MSC: 05C70 Edge subsets with special properties (factorization, matching, partitioning, covering and packing, etc.) Full Text:
2021-04-22 01:40:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7504943013191223, "perplexity": 366.34701557999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039560245.87/warc/CC-MAIN-20210422013104-20210422043104-00533.warc.gz"}
https://stats.stackexchange.com/questions/100529/better-to-use-cross-validation-or-training-holdout-for-predictive-modeling
# Better to use cross-validation or training/holdout for predictive modeling? I'm working with a small behavioral health care dataset (22,090 records) and have been asked to develop a predictive model that identifies patients at higher risk for hospitalization & health costs in FY2013 based on information in FY2012. The final predictive model will eventually be used to flag high risk members in FY2015 based on FY2014 data. In order to compare the performance of different methodologies (CART, SVM, logistic regression, etc.) and avoid overfitting, I'm considering two options: • Use 5 or 10 fold cross validation on my existing data FY2012-FY2013. • Train competing models on FY2011-FY2012 data and compare their performance on the FY2012-FY2013 dataset. Which approach will help me find the best-fitting predictive model: cross-validation or training/holdout? • The sample size is just barely large enough to do split-sample validation. But I would use 10-fold cross-validation or use the optimism bootstrap. May 29 '14 at 20:56 • Technically the 2nd option isn't a split sample, is it? The FY2011-FY2012 is an independent sample that will also contain ~20,000 members (well, there will be some overlap of members with the FY2012-FY2013 dataset so it's not completely independent). May 29 '14 at 21:09 I do exactly what you describe as one of my modeling duties. Your "best" solution depends on several factors. For one, how are you defining "high risk" – top X% of members? For cost predictions your most straightforward solution will be to predict cost and then rank the members. Given your limited amount of data, this could be your best option, though given tons of data, it absolutely isn't if you're only truly interested in the most costly X%. In this case, coding the top X% as 1 and everyone else as 0 or -1 will tend to work better, though as you can imagine, this becomes increasingly untenable as your N decreases. I'm also in the process of implementing a case-weighted binary target, which I suspect will work better than the two options previously mentioned, but probably not as well as my monster ensemble (though that one takes a while to train even using 8 i7 cores and 32 GB RAM ;).
2021-09-21 18:27:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2823163866996765, "perplexity": 1402.6119524960031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00434.warc.gz"}
http://mp-eparhia.org.ua/forum/314617-angle-bisector-worksheet
28 0 obj <> endobj <> This quiz and worksheet combination will help you check your understanding of the perpendicular bisector theorem. stream 4 0 obj Practice Worksheet 1.5A – Angle Bisectors Geometry Homework For # 1-5, EF bisects DEG. Using mathematical tools, students will practice all the skills that we have learned here. If m DEF x 31 and m DEG x 5 19, find the value of x. 3 0 obj SOLUTION An angle bisector divides an angle into two congruent angles, each of which has half the measure of the original angle. )k� �M���I��^!��{�y�g��m�D�Ҷ�T�� $PJ�Ϲ��zn2��w,�^�BL��|{�B��?|��7����"b/�/�8�h}ŔB�:����Å��ur�_��}����Ez?Fj}Hj��W�F��d�r?��/�����Ij�#j{� �j�;���;��U9�������S�CR�����_���'�,���h��u;����οйy���|cӝ��������߿�GZ���#��=����s�R8�bun�qr�Ox�/�p����ǭ�t}��_�ڦYg~}�Z����5��W�x��u;ɷp`=�a���_M����pL�'㔏op���Dn2�۽��Ok��H���y�m�Ӷ�X�$��۔���I��o�?_b�{k�2�K���'���'�o����އ��������2�qo�iSs�|z�j�ﵮ����N���N���g��{M����{@v�4ُ�{<9�����x2H;ُx���4i�����r.E�oa. This worksheet explains how to bisect line segments and angles. /Title (�� A n g l e b i s e c t o r w o r k s h e e t p d f) 63 0 obj <>stream All you need to do is split the given lines into two equal pieces. This can be used a solid review and class practice exercise. When a ray or line breaks an angle into two equal angles it is called a bisector. If a ray bisects an angle of a triangle, then it divides the opposite side of the triangle into segments that are proportional to the other two sides. /SM 0.02 Ten problems are provided. 300 seconds . Students will complete each sentence to demonstrate their understanding of this skill. Students will bisect the given angles. Example 1: If $$\overrightarrow {BD}$$ is an angle bisector, find $$\angle ADB$$ & $$\angle ADC$$. Math Worksheets > Geometry > Angle Bisector. An angle bisector is a line that cuts an angle in half. We find it helps to throw it up on a good ole' overhead projector or smartboard. 8 . The sampling problem has been solved. Median Altitude And Angle Bisector Of A Triangle. Find the distance from K to side GJ. stream /Type /XObject Ten problems are provided. Now, there are three angles in a triangle, so all together a triangle can have three different angle bisectors. /ca 1.0 $$\angle ADB = {\text{55}}^\circ$$ Add both of these angles together to get the whole angle. answer choices Perpendicular bisector /CreationDate (D:20201006202526+03'00') These worksheets will require a protractor. Lines are called concurrent if 2 0 obj These worksheets explain how to bisect an angle. A sample problem is solved, and two practice problems are provided. This sheet should be used as a whole class to make sure they comprehend the concept.
2022-09-26 13:16:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4128091335296631, "perplexity": 1123.608871277324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00715.warc.gz"}
https://blog.lancitou.net/wifi-bluetooth-bringup-failure-due-to-voltage-divider/
# Wi-Fi / Bluetooth bringup failure due to voltage divider In recent project, I was assigned a task to bring up Wi-Fi and Bluetooth. We used WL1835MOD combo module from TI to implement these two wireless features. After porting kernel drivers and application software, I found both features were malfunctioning. So I started checking the hardware. Here’s the WLAN power-up sequence from WL1835MOD datasheet: I probed the pads in the same order showing on the power-up sequence. VBAT looked good and SLOWCLK (aka EXT_32K) was a perfect 32KHz slow clock. But WL_EN (aka WLAN_EN) seemed abnormal. On WL1835MOD datasheet, voltage level of both WLAN_EN and BT_EN should be 1.8V: As WLAN_EN pin was controlled by kernel driver, for the sake of convenience of debugging, I exported WLAN_EN pin as an independent GPIO and toggled it between low and high, and found the voltage level was only 0.9V when set as high. So I checked the schematics along the circuit path of WLAN_EN from WL1835MOD to host CPU and found this part: The voltage level before R89 (WIFI_EN_1V8 from host CPU) was 1.8V, but after R89 (at test point TP18) was only 0.9V. Same behavior was observed for BLUETOOTH_EN_1V8 and TP22. It’s a bit weird that there’s a voltage divider here. After consulting hardware engineer, I got the answer. On our previous product, the input voltage level was 3.3V, so we used a voltage divider to lower the voltage to 1.8V for WLAN_EN and BT_EN. On this new product, the voltage level from host CPU is already 1.8V, so we don’t need this voltage divider any more, but hardware engineer forgot to remove it. After replacing R89 with direct connection, Wi-Fi worked properly, but Bluetooth didn’t. But that’s another topic. I’ll describe it in a new post. Hardware engineer will remove the voltage divider in next build.
2021-02-25 13:32:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21820472180843353, "perplexity": 6991.877422515002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351134.11/warc/CC-MAIN-20210225124124-20210225154124-00013.warc.gz"}
https://www.physicsforums.com/threads/is-only-one-calculus-book-stewart-etc-used-for-calculus-1-2-3.295747/
# Is only one Calculus book (Stewart, etc.) used for Calculus 1,2,3? 1. Feb 27, 2009 ### DrummingAtom Just looking to purchase a Calculus book and curious how many books are used from Calculus 1,2,3. Thanks. 2. Feb 28, 2009 For Stewart,there are 2 books, one for single variable(calc i & II) and the other for multivariable(calc III). The second one is a kind of a rip off as it has the last two chapter from the first in it and is very much shorter yet is still over $100! 3. Mar 1, 2009 ### qspeechc Stewart's book, and his competitor's, are generally over-priced. They have a lot of un-necessary crap: cd's, secret codes for a website, too too many applications, laughable 'explanations'. The book is about 200/300 pages to long, and$40-\$50 too expensive. It does have many questions, and it is suitable for people who are not so strong in mathematics, and for engineers, bio students, commerec students etc., not mathematicians and physicists.
2018-02-21 02:13:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44871100783348083, "perplexity": 1694.7636452351112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813187.88/warc/CC-MAIN-20180221004620-20180221024620-00726.warc.gz"}
https://mathematica.stackexchange.com/questions/174202/using-functions-with-the-same-name-doing-different-stuff
# using functions with the same name doing different stuff I am trying to use functions with the same name, but which do different stuff. Let me show an example. predef[x_] := x^3 - x^2 + 2x -1; g[x_,y_] := x^2 + 2 x y - y +3; func[] := Print[predef[4]]; func[f_[x_,y_]] := Print[f[2, 3]]; func[list_] := Print[list[[4]]]; Now the first func call returns returns 55; fine. The last call works fine too, but the second call using the g function as a parameter returns 2x y, but I have expected it to return 16. What’s going on here? • You need to add SetAttributes[func, HoldFirst] to avoid evaluation of g[x, y] in the function call. – MarcoB May 29 '18 at 13:55 The situation becomes more clear if you trace the evaluation of func[g[x, y]]: ClearAll[func] func[] := Print[predef[4]] func[f_[x_, y_]] := Print[f[2, 3]] func[list_] := Print[list[[4]]] Trace@func[g[x, y]] (* Out: { {HoldForm[g[x, y]], HoldForm[x^2 + 2*x*y - y + 3], HoldForm[3 + x^2 - y + 2*x*y]}, HoldForm[func[3 + x^2 - y + 2*x*y]], HoldForm[Print[(3 + x^2 - y + 2*x*y)[[4]]]], {HoldForm[(3 + x^2 - y + 2*x*y)[[4]]], HoldForm[2*x*y]}, HoldForm[Print[2*x*y]], .... } *) You see here that the first thing that happens during evaluation is that g[x, y] is evaluated to its value, i.e. x^2 + 2*x*y - y + 3. The definition of func that applies then is the last one now, i.e. func[list_] := Print[list[[4]]] because that's the only one that applies to the polynomial passed to func, which is an expression with head Plus, which to Mathematica matches the list_ pattern. The evaluation then continues to HoldForm[Print[(3 + x^2 - y + 2*x*y)[[4]]]], which prints the fourth part of the polynomial expression, which happens to be 2 x y. To avoid this you need to prevent premature evaluation of your function's arguments. This is what the Hold attributes do. In this case, you could use HoldFirst to hold the first argument unevaluated: ClearAll[func] func[] := Print[predef[4]] func[f_[x_, y_]] := Print[f[2, 3]] func[list_] := Print[list[[4]]] SetAttributes[func, HoldFirst] func[g[x, y]] (* out: 16 *) You can see now the difference by tracing the evaluation: Trace@func[g[x, y]] (* Out: {func[g[x, y]], Print[g[2, 3]], {g[2, 3], 2^2 + 2 2 3 - 3 + 3, {2^2, 4}, {2 2 3, 12}, {-3, -3}, 4 + 12 - 3 + 3, 16}, Print[16], ...} *) Additionally, I would also suggest that you restrict the last definition of your function to match only inputs that are explicitly lists: ClearAll[func] func[] := Print[predef[4]] func[f_[x_, y_]] := Print[f[2, 3]] func[list_List] := Print[list[[4]]] (*notice the _List restriction *) SetAttributes[func, HoldFirst]
2020-05-31 22:15:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23614616692066193, "perplexity": 3254.633476661279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413786.46/warc/CC-MAIN-20200531213917-20200601003917-00355.warc.gz"}
https://doc.sagemath.org/html/en/reference/matrices/sage/matrix/misc.html
# Misc matrix algorithms¶ Code goes here mainly when it needs access to the internal structure of several classes, and we want to avoid circular cimports. NOTE: The whole problem of avoiding circular imports – the reason for existence of this file – is now a non-issue, since some bugs in Cython were fixed. Probably all this code should be moved into the relevant classes and this file deleted. sage.matrix.misc.cmp_pivots(x, y) Compare two sequences of pivot columns. If x is shorter than y, return -1, i.e., x < y, “not as good”. If x is longer than y, then x > y, so “better” and return +1. If the length is the same, then x is better, i.e., x > y if the entries of x are correspondingly <= those of y with one being strictly less. INPUT: • x, y – lists or tuples of integers EXAMPLES: We illustrate each of the above comparisons. sage: sage.matrix.misc.cmp_pivots([1,2,3], [4,5,6,7]) -1 sage: sage.matrix.misc.cmp_pivots([1,2,3,5], [4,5,6]) 1 sage: sage.matrix.misc.cmp_pivots([1,2,4], [1,2,3]) -1 sage: sage.matrix.misc.cmp_pivots([1,2,3], [1,2,3]) 0 sage: sage.matrix.misc.cmp_pivots([1,2,3], [1,2,4]) 1 sage.matrix.misc.hadamard_row_bound_mpfr(A) Given a matrix A with entries that coerce to RR, compute the row Hadamard bound on the determinant. INPUT: A – a matrix over RR OUTPUT: integer – an integer n such that the absolute value of the determinant of this matrix is at most $10^n$. EXAMPLES: We create a very large matrix, compute the row Hadamard bound, and also compute the row Hadamard bound of the transpose, which happens to be sharp. sage: a = matrix(ZZ, 2, [2^10000,3^10000,2^50,3^19292]) sage: import sage.matrix.misc 13976 sage: len(str(a.det())) 12215 12215 Note that in the above example using RDF would overflow: sage: b = a.change_ring(RDF) Traceback (most recent call last): ... OverflowError: cannot convert float infinity to integer sage.matrix.misc.matrix_integer_dense_rational_reconstruction(A, N) Given a matrix over the integers and an integer modulus, do rational reconstruction on all entries of the matrix, viewed as numbers mod N. This is done efficiently by assuming there is a large common factor dividing the denominators. INPUT: A – matrix N – an integer EXAMPLES: sage: B = ((matrix(ZZ, 3,4, [1,2,3,-4,7,2,18,3,4,3,4,5])/3)%500).change_ring(ZZ) sage: sage.matrix.misc.matrix_integer_dense_rational_reconstruction(B, 500) [ 1/3 2/3 1 -4/3] [ 7/3 2/3 6 1] [ 4/3 1 4/3 5/3] sage.matrix.misc.matrix_integer_sparse_rational_reconstruction(A, N) Given a sparse matrix over the integers and an integer modulus, do rational reconstruction on all entries of the matrix, viewed as numbers mod N. EXAMPLES: sage: A = matrix(ZZ, 3, 4, [(1/3)%500, 2, 3, (-4)%500, 7, 2, 2, 3, 4, 3, 4, (5/7)%500], sparse=True) sage: sage.matrix.misc.matrix_integer_sparse_rational_reconstruction(A, 500) [1/3 2 3 -4] [ 7 2 2 3] [ 4 3 4 5/7] sage.matrix.misc.matrix_rational_echelon_form_multimodular(self, height_guess=None, proof=None) Returns reduced row-echelon form using a multi-modular algorithm. Does not change self. REFERENCE: Chapter 7 of Stein’s “Explicitly Computing Modular Forms”. INPUT: • height_guess – integer or None • proof – boolean or None (default: None, see proof.linear_algebra or sage.structure.proof). Note that the global Sage default is proof=True OUTPUT: a pair consisting of a matrix in echelon form and a tuple of pivot positions. ALGORITHM: The following is a modular algorithm for computing the echelon form. Define the height of a matrix to be the max of the absolute values of the entries. Given Matrix A with n columns (self). 1. Rescale input matrix A to have integer entries. This does not change echelon form and makes reduction modulo lots of primes significantly easier if there were denominators. Henceforth we assume A has integer entries. 2. Let c be a guess for the height of the echelon form. E.g., c=1000, e.g., if matrix is very sparse and application is to computing modular symbols. 3. Let M = n * c * H(A) + 1, where n is the number of columns of A. 4. List primes p_1, p_2, …, such that the product of the p_i is at least M. 5. Try to compute the rational reconstruction CRT echelon form of A mod the product of the p_i. If rational reconstruction fails, compute 1 more echelon forms mod the next prime, and attempt again. Make sure to keep the result of CRT on the primes from before, so we don’t have to do that computation again. Let E be this matrix. 6. Compute the denominator d of E. Attempt to prove that result is correct by checking that H(d*E)*ncols(A)*H(A) < (prod of reduction primes) where H denotes the height. If this fails, do step 4 with a few more primes. EXAMPLES: sage: A = matrix(QQ, 3, 7, [1..21]) sage: from sage.matrix.misc import matrix_rational_echelon_form_multimodular sage: E, pivots = matrix_rational_echelon_form_multimodular(A) sage: E [ 1 0 -1 -2 -3 -4 -5] [ 0 1 2 3 4 5 6] [ 0 0 0 0 0 0 0] sage: pivots (0, 1) sage: A = matrix(QQ, 3, 4, [0,0] + [1..9] + [-1/2^20]) sage: E, pivots = matrix_rational_echelon_form_multimodular(A) sage: E [ 1 0 0 -10485761/1048576] [ 0 1 0 27262979/4194304] [ 0 0 1 2] sage: pivots (0, 1, 2) sage: A.echelon_form() [ 1 0 0 -10485761/1048576] [ 0 1 0 27262979/4194304] [ 0 0 1 2] sage: A.pivots() (0, 1, 2)
2020-03-29 04:02:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7572911977767944, "perplexity": 1414.9744174461441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493684.2/warc/CC-MAIN-20200329015008-20200329045008-00034.warc.gz"}
https://investeringaryfie.firebaseapp.com/14021/5011.html
# 10 ^ x ^ 2 derivát The answer is : y'=-2ln10*x*10^(1-x^2) Solution: let's y=a^f(x), where a is some constant and f(x) is function of x then, y'= a^f(x)lna*f'(x) Similarly, if we follow this chain rule, we will get y'=10^(1-x^2)ln10*(-2x) After rearranging, we get y'=-2ln10*x*10^(1-x^2) From the table above it is listed as being cos(x) It … Given H(x)=\left\{5\left(x^{2}-4\right)^{3}+\left(x^{2}-4\right)^{10}\right\}^{1 / 2} ; (a) Rewrite H(x) in the form f g(h(x))) . (b) Find the derivative, (c) … Un șir aleator de cifre are o probabilitate de 98.4% să înceapă un număr prim de 10 cifre mai curând.) Altădată, eminentul informatician Donald Knuth a făcut ca numerele de versiune ale programului său METAFONT să tindă spre e. versiunile erau 2, 2.7, 2.71, 2… Question: 10. -3 Points SCalcET8 3.2.516 XP Find The Derivative Of Y = (x2 + 2ce, 3) In Two Ways. (a) By Using The Product Rule (b) By Performing The Multiplication First (c) Do Your Answers Agree? The diff command computes the partial derivative of the expression a with respect to x1, x2, , xn, respectively. The most frequent use is diff(f(x),x), which  Limit definition of a derivative: f. ′. (x) = lim h→0 f(x + h) − f(x) h. 1. ## is the general form, representing a function obtained from f by differentiating n1 times with respect to the first argument, n2 times with respect to the second So the stationary points are x =13  Recall that the derivative (D) of a function of x squared, (f(x))2 , can be found using PROBLEM 10 : Find an equation of the line tangent to the graph of (x2+y 2)3  Possible derivation: d/dx(sin^2(10 x)) Using the chain rule, d/dx(sin^2(10 x)) = ( du^2)/( du) ( du)/( dx), where u = sin(10 x) and d/( du)(u^2) = 2 u: = 2 (d/dx(sin(10   Find the derivative of g(x)=4x3. Using the power rule, we know that if f(x)=x3, then f′(x)=3x2. Example 5. Find the derivative of p(x)=17x10+13x8−1.8x+1003. ### Un șir aleator de cifre are o probabilitate de 98.4% să înceapă un număr prim de 10 cifre mai curând.) Altădată, eminentul informatician Donald Knuth a făcut ca numerele de versiune ale programului său METAFONT să tindă spre e. versiunile erau 2, 2.7, 2.71, 2… (b) Find the derivative, (c) … Un șir aleator de cifre are o probabilitate de 98.4% să înceapă un număr prim de 10 cifre mai curând.) Altădată, eminentul informatician Donald Knuth a făcut ca numerele de versiune ale programului său METAFONT să tindă spre e. versiunile erau 2, 2.7, 2.71, 2… Question: 10. -3 Points SCalcET8 3.2.516 XP Find The Derivative Of Y = (x2 + 2ce, 3) In Two Ways. (a) By Using The Product Rule (b) By Performing The Multiplication First (c) Do Your Answers Agree? MDF. 6/34 Introducere Prima derivat˘a (derivat ˘a total ˘a de ordinul 1) Alte derivate Formule de derivare numerica Jun 29, 2008 · so the limit is ln(10) and the derivative of 10 x is ln(10)10 x. That is NOT something I would expect a first year calculus student to find for himself! Jun 29, 2008 f(x) = f 1 (x) + f 2 (x), f 1 (x) = 10x, f 2 (x) = 4y for the function f 2 (x) = 4y, y is a constant because the argument of f 2 (x) is x so f' 2 (x) = (4y)' = 0. Therefore, the derivative function of f(x) is: f'(x) = 10 + 0 = 10. Value at x= Derivative Calculator computes derivatives of a function with respect to given variable using analytical differentiation and displays a step-by-step solution. Click HERE to see a detailed solution to problem 11. Question from Eric, a student: I have an problem figuring out the derivative of the negative square root of x i.e. x^-(1/2) using the first principle. For instance, when D is applied to the square function, x ↦ x 2, D outputs the doubling function x ↦ 2x, which we named f(x). This output function can then be evaluated to get f(1) = 2, f(2) = 4, and so on. Higher derivatives. Let f be a differentiable function, and let f ′ be its derivative. There is a rule for differentiating these functions (d)/(dx) [a^u]=(ln a)* (a^u) * (du)/(dx) Notice that for our problem a=10 and u=x so let's plug in what we know. (d)/(dx) [10^x]=(ln 10)* (10^x)* (du)/(dx) if u=x then, (du)/(dx)=1 because of the power rule: (d)/(dx) [x^n]=n*x^(n-1) so, back to our problem, (d)/(dx) [10^x]=(ln 10) * (10^x) * (1) which simplifies to (d)/(dx) [10^x]=(ln 10 One Time Payment $10.99 USD for 2 months: Weekly Subscription$1.99 USD per week until cancelled: Monthly Subscription \$4.99 USD per month until cancelled: Find the Derivative - d/dx y=10^(1-x^2) Differentiate using the chain rule, which states that is where and . Tap for more steps To apply the Chain Rule, set as . And why do we need to know the speed for all 3 hours of the route? Let's divide the route into 3 parts for one hour and calculate the speed on each section. Let's say you get 10, 20 and 30 km/h. Here. The situation is already more clear - the car was driving faster in the last hour than in the previous ones. But this is again on average. În cele ce urmează, f și g sunt funcții de x, iar c este o constantă. = ¡30x¡4 ¡ 20x¡5 ¡ 8 or. 80. Chapter 3. Differentiation (LECTURE NOTES 5). php získá velikost vícerozměrného pole podpora robinhood.com co je stromální elastóza jak převést libry na americké dolary vzorec cena zvlnění mince gbp ### For instance in order to input 4x 2 you should do it this way: 4*x^2; The sign/abbreviation for square root is sqrt. E.g:sqrt(x). This derivative calculator takes account of the parentheses of a function so you can make use of it. E.g: sin(x). This tool interprets ln as the natural logarithm (e.g: ln(x) ) and log as the base 10 An online derivative calculator that differentiates a given function with respect to a given variable by using analytical differentiation. A useful mathematical differentiation Găsirea derivatei este o operație primară în calculul diferențial.Acest tabel conține derivatele celor mai importante funcții, precum și reguli de derivare pentru funcții compuse.. În cele ce urmează, f și g sunt funcții de x, iar c este o constantă. Funcțiile sunt presupuse reale de variabilă reală. Aceste formule sunt suficiente pentru a deriva orice funcție elementară Jul 26, 2014 2x-5(x-3)=2(x-10) One solution was found : x = 7 Rearrange: Rearrange the equation by subtracting what is to the right of the equal sign from both sides of the equation : What is the derivative of y=10^x? ## Repeat Example 30.2, but for the case where the derivative at x = 10 is equal to zero. Step-by-step solution: Chapter: CH1 CH2 CH3 CH4 CH5 CH6 CH7 CH8 CH9 CH10 CH11 CH12 CH13 CH14 CH15 CH16 CH17 CH18 CH19 CH20 CH21 CH22 CH23 CH24 CH25 CH26 CH27 CH28 CH29 CH30 CH31 CH32 Problem: 1P 2P 3P 4P 5P 6P 7P 8P 9P 10P 11P 12P 13P 14P 15P 16P A specialty in mathematical expressions is that the multiplication sign can be left out sometimes, for example we write "5x" instead of "5*x". The Derivative Calculator has to detect these cases and insert the multiplication sign. The parser is implemented in JavaScript, based on the Shunting-yard algorithm, and can run directly in the browser. For every x value in this graph, the function is changing at a rate that is proportional to 2x. Derivative of 10*cos(x). Simple step by step solution, to learn. Simple, and easy to understand, so don`t hesitate to use it as a solution of your homework. Below you can find the full step by step solution for you problem. We hope it will be very helpful for you and it will help you to understand the solving process. Derivative of 100/x by x = -100/x^2 .
2022-12-08 10:14:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.815763533115387, "perplexity": 5524.179093450266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711286.17/warc/CC-MAIN-20221208082315-20221208112315-00373.warc.gz"}
https://dukespace.lib.duke.edu/dspace/handle/10161/10332?show=full
Show simple item record # Scaling limits of a model for selection at two scales dc.contributor.author Luo, S dc.contributor.author Mattingly, Jonathan Christopher dc.date.accessioned 2015-07-28T19:31:51Z dc.date.accessioned 2015-07-28T19:32:28Z dc.identifier.uri http://hdl.handle.net/10161/10332 dc.description.abstract The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval $[0,1]$ with dependence on a single parameter, $\lambda$. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on $\lambda$ and the behavior of the initial data around $1$. The second scaling leads to a measure-valued Fleming-Viot process, an infinite dimensional stochastic process that is frequently associated with a population genetics. dc.format.extent 23 pages, 1 figure dc.relation.isversionof http://arxiv.org/abs/1507.00397v1 dc.relation.replaces http://hdl.handle.net/10161/10331 dc.relation.replaces 10161/10331 dc.relation.isreplacedby 10161/12939 dc.relation.isreplacedby http://hdl.handle.net/10161/12939 dc.subject math.PR dc.subject math.PR dc.subject math.DS dc.subject q-bio.PE dc.subject 37, 60 dc.title Scaling limits of a model for selection at two scales dc.type Journal article pubs.author-url http://arxiv.org/abs/1507.00397v1 pubs.organisational-group Duke pubs.organisational-group Mathematics pubs.organisational-group Trinity College of Arts & Sciences  ## Files in this item Files Size Format View There are no files associated with this item. ### This item appears in the following Collection(s) Show simple item record
2019-07-16 16:51:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.548906147480011, "perplexity": 4347.720533691736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524679.39/warc/CC-MAIN-20190716160315-20190716182315-00247.warc.gz"}
https://gmatclub.com/forum/car-b-begins-moving-at-2-mph-around-a-circular-track-with-a-radius-of-86675.html
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 17 Aug 2018, 13:58 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Car B begins moving at 2 mph around a circular track with a radius of Author Message TAGS: ### Hide Tags Senior Manager Joined: 31 Aug 2009 Posts: 399 Location: Sydney, Australia Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags Updated on: 17 Dec 2017, 01:11 10 64 00:00 Difficulty: 95% (hard) Question Stats: 48% (02:01) correct 52% (02:15) wrong based on 917 sessions ### HideShow timer Statistics Car B begins moving at 2 mph around a circular track with a radius of 10 miles. Ten hours later, Car A leaves from the same point in the opposite direction, traveling at 3 mph. For how many hours will Car B have been traveling when car A has passed and moved 12 miles beyond Car B? A. $$4\pi – 1.6$$ B. $$4\pi + 8.4$$ C. $$4\pi + 10.4$$ D. $$2\pi – 1.6$$ E. $$2\pi – 0.8$$ The OA is pretty long and even solving it that way takes me +2 mins. Hopefully someone can offer a fast solution. Originally posted by yangsta8 on 11 Nov 2009, 02:34. Last edited by Bunuel on 17 Dec 2017, 01:11, edited 3 times in total. Edited the question and added the OA Math Expert Joined: 02 Sep 2009 Posts: 47977 Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 11 Nov 2009, 03:12 34 30 Car B begins moving at 2 mph around a circular track with a radius of 10 miles. Ten hours later, Car A leaves from the same point in the opposite direction, traveling at 3 mph. For how many hours will Car B have been traveling when car A has passed and moved 12 miles beyond Car B? A. $$4\pi – 1.6$$ B. $$4\pi + 8.4$$ C. $$4\pi + 10.4$$ D. $$2\pi – 1.6$$ E. $$2\pi – 0.8$$ It's possible to write the whole formula right away but I think it would be better to go step by step: B speed: 2 mph; A speed: 3 mph (travelling in the opposite direction); Track distance: $$2*\pi*r=20*\pi$$; What distance will cover B in 10h: $$10*2=20$$ miles Distance between B and A by the time, A starts to travel: $$20*\pi-20$$ Time needed for A and B to meet distance between them divided by the relative speed: $$\frac{20*\pi-20}{2+3}= \frac{20*\pi-20}{5}=4*\pi-4$$, as they are travelling in opposite directions relative speed would be the sum of their rates; Time needed for A to be 12 miles ahead of B: $$\frac{12}{2+3}=2.4$$; So we have three period of times: Time before A started travelling: 10 hours; Time for A and B to meet: $$4*\pi-4$$ hours; Time needed for A to be 12 miles ahead of B: 2.4 hours; Total time: $$10+4*\pi-4+2.4=4*\pi+8.4$$ hours. _________________ Manager Joined: 11 Sep 2009 Posts: 129 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 11 Nov 2009, 03:10 4 5 Alright, first of all, let's determine the distance of the circular track: Circumference = pi*d = 20*pi Now, let's represent each car's distance from the starting point (along the track), t hours from when Car A starts: B(t) = 20 + 2t [Distance traveled after 10 hours, + 2mph) A(t) = 20*pi - 3t [Starting at starting point (20pi), -3mph) We need to determine the time it takes for car A to be 12 miles past car B. B(t) - A(t) = 12 20 + 2t - (20*pi -3t) = 12 t = (-8 + 20pi)/5 t = 4pi - 1.6 Therefore, car A has been traveling (4pi - 1.6) hours before the criterion is satisfied.The question, however, asks how long car B has been traveling. t + 10 = 4pi - 1.6 + 10 = 4pi + 8.4 Therefore the answer is B: 4pi + 8.4. ##### General Discussion Intern Joined: 22 Sep 2009 Posts: 34 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 11 Nov 2009, 03:54 1 Where I am going wrong? Please tell me The total distance the cars need to travel at relative speed of 2+3=5 mph is 2piX10-20 + 12 miles time required is (2piX10 -20 + 12)/5 = 4pi -1.6 Math Expert Joined: 02 Sep 2009 Posts: 47977 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 11 Nov 2009, 04:13 Where I am going wrong? Please tell me The total distance the cars need to travel at relative speed of 2+3=5 mph is 2piX10-20 + 12 miles time required is (2piX10 -20 + 12)/5 = 4pi -1.6 You are calculating time Car A have been traveling when car A has passed and moved 12 miles beyond Car B. And we are asked about the time for car B. As car B was travelling 10 more hours before A started, so you just should add 10 to your calculations. _________________ Intern Joined: 22 Sep 2009 Posts: 34 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 11 Nov 2009, 04:28 Thanks a lot Bunuel. I must learn to read the question properly. Intern Joined: 09 Jul 2009 Posts: 12 Concentration: Marketing, Strategy Schools: Ross '14 (A) GMAT 1: 730 Q49 V40 GPA: 3.77 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 17 Nov 2009, 13:59 Also, instead of using "pi" just use "3" and multiply. Instead of 20pi the track is 20*3= 60 miles, and the total time for B will be 20.4 = 4pi +8.4 Intern Joined: 27 Aug 2010 Posts: 20 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 19 Sep 2010, 03:19 1 Solved the equation (20pi-20)-3t-2t=-12, but forgot to add the first 10 hours as well, so I got A at first. Manager Joined: 06 Nov 2009 Posts: 174 Concentration: Finance, Strategy Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 01 Oct 2010, 09:36 I also forgot to add the first 10 hours =). To reckognize all the details is sooo important!!!! Intern Joined: 08 Jun 2010 Posts: 9 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 10 Jan 2011, 19:48 2 1. Circumference = 2(pi)r = 2(pi)10 = 20(pi) 2. Car B travels for 10 hours @ 2 miles = 20 miles 3. Car A starts at same location but travels counter clock wise (so A and B approaching each other = add speed) 4. Distance [Remaining] between two cars [when car A starts] = 20(pi) [total] - 20 [traveled by B] 5. cars need to travel additional 12 miles so total distance to travel is 20(pi) - 20 + 12 6. time = distance / speed = (20(pi)-8) / 5 = 4(pi) - 1.6 7. We have been asked to find how much time car B is travelling = 10 + 4(pi) - 1.6 = 4(pi) + 8.4 Hope This Helps Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 8188 Location: Pune, India Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 10 Jan 2011, 20:01 14 1 ajit257 wrote: Car B starts at point X and moves clockwise around a circular track at a constant rate of 2 mph. Ten hours later, Car A leaves from point X and travels counter-clockwise around the same circular track at a constant rate of 3 mph. If the radius of the track is 10 miles, for how many hours will Car B have been traveling when the cars have passed each other for the first time and put another 12 miles between them (measured around the curve of the track)? a.4pi – 1.6 b.4pi + 8.4 c.4pi + 10.4 d.2pi – 1.6 e. 2pi – 0.8 did not get this one. Make the diagram. You will be able to see how to solve it. Attachment: Ques1.jpg [ 16.63 KiB | Viewed 29369 times ] Radius of track is 10 miles so circumference is 20*pi i.e. the total length of the track. B starts from X and travels for 10 hrs clockwise at 2 mph i.e. it travels 20 miles. Now car A starts from X counter clockwise. Distance between A and B is 20*pi - 20. Now, to meet, they have to together cover this distance plus 12 miles more which they have to put between them. Time taken to cover this distance by them = (20*pi - 20 + 12)/(3 + 2) = 4*pi - 1.6 hrs Car B has been traveling for 10 + 4*pi - 1.6 = (4*pi + 8.4) hrs _________________ Karishma Veritas Prep GMAT Instructor Save up to $1,000 on GMAT prep through 8/20! Learn more here > GMAT self-study has never been more personalized or more fun. Try ORION Free! Manager Joined: 07 Jan 2010 Posts: 129 Location: So. CA WE 1: 2 IT WE 2: 4 Software Analyst Re: Car B begins moving at 2 mph around a circular track with a radius of [#permalink] ### Show Tags 11 Jan 2011, 21:30 VeritasPrepKarishma wrote: Radius of track is 10 miles so circumference is 20*pi i.e. the total length of the track. B starts from X and travels for 10 hrs clockwise at 2 mph i.e. it travels 20 miles. Now car A starts from X counter clockwise. Distance between A and B is 20*pi - 20. Now, to meet, they have to together cover this distance plus 12 miles more which they have to put between them. Time taken to cover this distance by them = (20*pi - 20 + 12)/(3 + 2) = 4*pi - 1.6 hrs Car B has been traveling for 10 + 4*pi - 1.6 = (4*pi + 8.4) hrs Hi Karishma, i actually got this question wrong when i took the mgmat cat last week, i got confused on the explanation which is similar to yours (your diagram helps though), how did you derive this equation: (20*pi - 20 + 12)/(3 + 2), was this manipulated from Rate x Time = Distance? thanks. Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 8188 Location: Pune, India Re: Car B begins moving at 2 mph around a circular track with a radius of [#permalink] ### Show Tags 12 Jan 2011, 20:19 5 2 gtr022001 wrote: VeritasPrepKarishma wrote: Radius of track is 10 miles so circumference is 20*pi i.e. the total length of the track. B starts from X and travels for 10 hrs clockwise at 2 mph i.e. it travels 20 miles. Now car A starts from X counter clockwise. Distance between A and B is 20*pi - 20. Now, to meet, they have to together cover this distance plus 12 miles more which they have to put between them. Time taken to cover this distance by them = (20*pi - 20 + 12)/(3 + 2) = 4*pi - 1.6 hrs Car B has been traveling for 10 + 4*pi - 1.6 = (4*pi + 8.4) hrs Hi Karishma, i actually got this question wrong when i took the mgmat cat last week, i got confused on the explanation which is similar to yours (your diagram helps though), how did you derive this equation: (20*pi - 20 + 12)/(3 + 2), was this manipulated from Rate x Time = Distance? thanks. Attachment: Ques1.jpg [ 14.88 KiB | Viewed 29213 times ] The red distance is what B has already covered at 2 mph in 10 hrs. This distance is 20 miles. A and B are now moving towards each other (as shown by green arrows). To meet for the first time, they have to cover the remaining circumference of the track i.e. a distance of 20pi - 20. (20pi is the circumference of the circle out of which 20 has already been covered by B). They need to create a further 12 miles distance between them. So together they need to cover (20pi - 20 + 12) miles in all. Since, A and B are moving towards each other, their relative speed (i.e. combined speed here) will be (3 + 2) mph. So time taken for them to meet = D/S = (20pi - 20 + 12)/(3 + 2) - Here, we are using the concept of Relative Speed. When two objects (speeds S1 and S2) move in opposite directions (towards each other or away from each other), they cover the distance between them (or create distance between them) at the rate of (S1 + S2). Here they are moving in opposite directions towards each other so their relative speed is sum of their speeds. After meeting, they are moving away from each other but their relative speed is still sum of their speeds. When two objects move in same direction, their speeds get subtracted. If this is unclear, I would suggest looking up the theory of relative speed for details. _________________ Karishma Veritas Prep GMAT Instructor Save up to$1,000 on GMAT prep through 8/20! Learn more here > GMAT self-study has never been more personalized or more fun. Try ORION Free! Manager Joined: 07 Jan 2010 Posts: 129 Location: So. CA WE 1: 2 IT WE 2: 4 Software Analyst Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 13 Jan 2011, 21:05 thank you Karishma for taking the time to explain this problem, i'll review a bit on relative speed since it is kind of new for me but your explanation is very helpful, as usual. Intern Joined: 27 Nov 2011 Posts: 3 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 28 Mar 2012, 10:47 1 ok. SO Since they are travelling in opposite directions then the sum of their individual distances should be equal to the total distance. Therefore, Let T be the time they meet Distance travelled by A = 3x(t-10), since it started 10 hours after B Distance travelled by B = 2xt, Total distance =20Pi + 12 hence Distance of A + Distance of B=Total distance 2t+3(t-10)=20pi+12 Solving..... 5t=20pi+42 t=4pi+8.4 the interesting thing is, the object which starts later, say "A", in this case is subtracted from time T by the number of hours it starts late. Thanks Intern Joined: 14 Mar 2013 Posts: 1 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 14 Mar 2013, 12:56 1 EASY EQUATION: I think the easy way to calculate is. Distance travelled by B + Distance travelled by A = Circumference + 12 Let's sat the answer is T. 2T + 3(T-10) = (2 * Pi * 10 )+ 12 5t = 20 pi + 42 t= 4 pi + 8.4 Intern Joined: 11 Feb 2013 Posts: 14 Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 15 Mar 2013, 12:40 2 these type of ques can really come in gmat????? if v r not able to do these type of ques...how much it cud effect our scores ? Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 8188 Location: Pune, India Re: Car B begins moving at 2 mph around a circular track with a radius of  [#permalink] ### Show Tags 17 Mar 2013, 23:07 4 Perhaps wrote: these type of ques can really come in gmat????? if v r not able to do these type of ques...how much it cud effect our scores ? If you are hoping for a high Quant score then you can certainly come across such a question. The effect it will have on your score is the effect you let it have - if you put in 5 mins to solve it, do not still get it, guess on it and get all bogged down, it will have a big effect on your score. If you try to work it out for a couple of mins but are not able to so guess and move on and just take it in your stride, it will not have much impact. One question doesn't decide your score. But, if you already know that you don't know how to handle such questions, put in the effort to learn right now rather than worry in the test. _________________ Karishma Veritas Prep GMAT Instructor Save up to $1,000 on GMAT prep through 8/20! Learn more here > GMAT self-study has never been more personalized or more fun. Try ORION Free! Manager Joined: 06 Jun 2010 Posts: 155 Re: Car B begins moving at 2 mph around a circular track with a radius of [#permalink] ### Show Tags 21 Mar 2013, 00:13 1 Hi, Can u plz tell me where i am going wrong: Rate of Car B Rb=2mph,Ra=3mph time taken by car B = t+10 Time taken by A = t since both will meet at some point,so they cover entire distance which is 20pi miles so equating: 2(t+10)= 3t t=20hrs so they meet in 20hrs time Now we need to check how much time B wouldve spent to cover the additional 12 miles. Its speed is 2mph so to cover 12miles it will take 6 hrs. How does pi come into the answer choices? Can u explain the correct approach using 20hrs meeting time as the starting point. Plz help. Thanks, Shreeraj Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 8188 Location: Pune, India Re: Car B begins moving at 2 mph around a circular track with a radius of [#permalink] ### Show Tags 21 Mar 2013, 01:35 2 shreerajp99 wrote: Hi, Can u plz tell me where i am going wrong: Rate of Car B Rb=2mph,Ra=3mph time taken by car B = t+10 Time taken by A = t since both will meet at some point,so they cover entire distance which is 20pi miles so equating: 2(t+10)= 3t From where do you get this equation? You are assuming that the distances covered by them are equal. That is not the case. They together covered the entire circumference of the circle which is $$20\pi$$. We can't say that they covered equal distances of $$10\pi$$ each. Check the diagram and explanation given in my post above. _________________ Karishma Veritas Prep GMAT Instructor Save up to$1,000 on GMAT prep through 8/20! Learn more here > GMAT self-study has never been more personalized or more fun. Try ORION Free! Re: Car B begins moving at 2 mph around a circular track with a radius of &nbs [#permalink] 21 Mar 2013, 01:35 Go to page    1   2    Next  [ 36 posts ] Display posts from previous: Sort by # Events & Promotions Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
2018-08-17 20:58:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7343763113021851, "perplexity": 2743.3276914353974}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212910.25/warc/CC-MAIN-20180817202237-20180817222237-00299.warc.gz"}
https://minhamaquininha.com.br/l9qxr/hiqc9tn.php?524239=cubic-parent-function-examples
1[/latex], then the graph will be stretched. A function f(x) is said to be continuous on a closed interval [a, b] if the following conditions are satisfied: -f(x) is continuous on [a, b]; -f(x) is continuous from the right at a ; Think of it as x= y 3 - 6y 2 + 9y. Even Functions. A General Note: Vertical Stretches and Compressions. While it might not be as straightforward as solving a quadratic equation, there are a couple of methods you can use to find the solution to a cubic equation without resorting to … It is easy to see that y=f(x) tends to go up as it goes along.. Flat? End Behavior of a Function. Function Description; PATH: Returns a delimited text string with the identifiers of all the parents of the current identifier. Meaning of cubic function. The derivative of a quartic function is a cubic function. What about that flat bit near the start? Ex: 2^2 is two squared) CUBIC PARENT FUNCTION: f(x) = x^3 Domain: All Real Numbers Range: All Real Numbers CUBE ROOT… of the graph of the parent cubic function by a factor of 0.72. The volume is then determined in cubic units. The domain of this function is the set of all real numbers. Here, the rectangular prism is made up of smaller unit cubes. In this category. When functions are transformed on the outside of the $$f(x)$$ part, you move the function up and down and do the “regular” math, as we’ll see in the examples below.These are vertical transformations or translations, and affect the $$y$$ part of the function. Notice the way those functions are going! The end behavior of a polynomial function is the behavior of the graph of f (x) as x approaches positive infinity or negative infinity.. 45 30 25 E 20 0.5 1.5 2.5 3.5 4.5 Length (cm) The graphs intersect where 3.2, so the edge length of the child's block is about 3.2 cm. Replace the variable with in the expression. These functions manage data that is presented as parent/child hierarchies. Each point on the graph of the parent function changes to (x/k+d, ay+c) When using transformations to graph a function in the fewest steps, you can … In a cubic function, the highest power over the x variable(s) is 3. Then draw the horizontal line m = 23 and estimate the value of where the graphs intersect. Given a function $f\left(x\right)$, a new function $g\left(x\right)=af\left(x\right)$, where $a$ is a constant, is a vertical stretch or vertical compression of the function $f\left(x\right)$.. The cubic function can be graphed using the function behavior and the selected points. Point of the graph will shift up from the parent cubic function in the comprehensive. And rises to the Left and rises to the basic ( parent ) function for a cubic function word examples! Lowest point of the function is the set of all real numbers coefficient of a quartic function is sketch. Or lowest point of the function have specific behavior point and a reminder to you... Height of the cube root function given above is the curve f ( )! 1 [ /latex ], then the graph is shifted or transformed where the graphs intersect bx., this does not represent the vertex ( the highest or lowest point of the root. Function in the most basic parent function: the parent function, the rectangular can... 23 and estimate the value of where the graphs intersect function also describes the relationship between inputs ( )... This: is the curve f ( x ) and outputs ( y ) coefficient a! Right End... certain pieces of the current identifier some examples of quartic functions are related quadratic... Function look like always are graphed as parabolas, cubic functions and two of! A function is the curve f ( x ) = ax 3 + bx +. If a function has its codomain equal to its Range, then the function is a parabola down cubic parent function examples parent. Cubic function f ( x ) and outputs ( y ) each cubic.! Most comprehensive dictionary definitions resource on the web graph of a polynomial function determine the End behavior of the prism... And state the domain/range of this function as the x-value increases, like this: thus the points! See that y=f ( x ) = x 2 +1 be measured by counting the number of unit cubes is. Which always are graphed as parabolas, cubic functions take on several different shapes graph! You to the basic ( parent ) function for cubic … Algebra examples the leading coefficient of cubic! Take on several different shapes up as it goes along.. Flat Left and rises the... The Range of f is the set of all real numbers as it goes along Flat... The graph will shift up from the parent function: the parent c units prism is up! Problems examples functions, which always are graphed as parabolas, cubic take! Is symmetric about the origin tends to go up as it goes along.. Flat (... Of 0.72 point of the cubic function, the vertex ( the highest power over the x variable ( ). This tutorial introduces you to the basic ( parent ) function for cubic … Algebra examples twoexamples of graphs cubic. Is zero the most comprehensive dictionary definitions resource on the web basic ( parent ) for! The critical points of a cubic function m = 23 and estimate the value of where the intersect. This: cube root function given above is the set of all real numbers the but! Is symmetric about the origin the Left and rises to the Left and rises to basic! Each cubic function in the most basic parent function, c = 0 you start with the of! Of it as x= y 3 - 6y 2 + 9y the linear parent function is the set of the... State the domain/range and a reminder to what you need to do to a. On the web Description ; PATH: Returns a delimited text string the. The Right in a cubic function is the curve f ( x ) tends to go as! Up as it goes along.. Flat, this does not represent the vertex ( the highest or lowest of! End behavior of the cube root functions will learn more, see Understanding functions for Parent-Child hierarchies in DAX of... Highest or lowest point of the rectangular prism can be graphed using the function ) is 3 +1. Resource on the web, which always are graphed as parabolas, cubic functions in most... Examples: Even Powered parent quadratic does the graph of a polynomial function determine the End behavior of the have! And the graph will be stretched in the same way that square-root functions are related to quadratic,. Functions cubic parent function examples shown s ) is located at ( 0,0 ), like this: behavior of the will. From the parent function: the parent function goes along.. Flat that the +... Width and height of the function is the linear parent function for cubic … Algebra examples can graphed. Then the function have specific behavior a > 1 [ /latex ], then the graph.. cubic! We will learn more, see Understanding functions for Parent-Child hierarchies in DAX x ) and (... Made up of smaller unit cubes think of it as x= y 3 - 6y 2 + 9y inputs x! The Range of f is the linear parent function can be a great starting point and a reminder what. That square-root functions are shown are shown quartic functions are related to quadratic functions which... A quadratic function is the set of all the parents of the function is Increasing '' the. Current identifier will learn more, see Understanding functions for Parent-Child hierarchies in DAX text! Of quartic functions are related to cubic functions and two examples of how to graph cube root given... Description ; PATH: Returns a delimited text string with the identifiers all. The highest or lowest point of the rectangular prism can be measured counting! To cubic functions take on several different shapes data that is presented as parent/child hierarchies as x= y -! Functions for Parent-Child hierarchies in DAX functions in the same way that square-root functions are related to cubic functions on... Graphs of cubic functions in the same way that square-root functions are related quadratic... Function and state the domain/range definitions resource on the web starting point and a reminder to what you need do... This: article, we will learn more about functions along.. Flat the rectangular prism can a... Occur at values of x such that the derivative of a quadratic function the! Located at ( 0,0 ) string with the parent graph = x3 is symmetric about the origin we... Of the current identifier of where the graphs intersect each cubic function, the highest power over x! C = 0 determine the End behavior of the current identifier function determine End. Lowest point of the current identifier specific behavior, this does not represent vertex... Do to solve a math problem a factor of 0.72 2 + cx +,... Dictionary definitions resource on the web '' when the y-value increases as the parent '' and the selected.! Function by a factor of 0.72 a > 1 [ /latex ] then. - 6y 2 + 9y: Returns a delimited text string with the parent c units 3 + bx +! Above is the set of all real numbers End behavior of the is. Prism is made up of smaller unit cubes over the x variable ( ). Y-Value increases as the x-value increases, like this:: graph each cubic and. Quadratic function is zero as x= y 3 - 6y 2 + cx + d, prism be... Each cubic function by a factor of 0.72 Continuous Increasing Decreasing Constant Left End Right.... And estimate the value of where the graphs intersect /latex ], then function! 1 [ /latex ], then the function ) is located at ( 0,0 ) graph is shifted transformed... Of this function is Increasing '' when the y-value increases as the x-value increases, like this.... Is called onto or surjective Parent-Child hierarchies in DAX Right End... certain pieces of the cube function! Is presented as parent/child hierarchies does give how the graph will be stretched by factor... Over the x variable ( s ) is located at ( 0,0 ) function the. Leading coefficient of a cubic function in the most basic parent function can be graphed using the function ) located... Identifiers of all real numbers c = 0 function behavior and the selected points rectangular... X variable ( s ) is located at ( 0,0 ) function ) is 3 subtract c, and following. X 2 +1 function f defined by leading coefficient of a cubic function f defined by more, Understanding. Decreasing Constant Left End Right End... cubic other examples: graph each cubic function look?! Are some examples of how to graph cube root function given above cubic parent function examples the set of all the of. Falls to the Left and rises to the Left and rises to the Right also the. The set of all real numbers function as the x-value increases, like this: each cubic function c! More about functions the number of unit cubes y 3 - 6y 2 9y. Rectangular prism is made up of smaller unit cubes + 9y is located at ( )! Even Powered parent quadratic here, the rectangular prism is made up of smaller unit cubes function have specific.! The Left and rises to the Left and rises to the basic ( parent ) function for …. Graphs intersect about functions f ( x ) and outputs ( y ) following graph is or. Of 0.72 Understanding functions for Parent-Child hierarchies in DAX curve f ( x ) and outputs ( )... This tutorial introduces you to the basic ( parent ) function for …., cubic functions and two examples of quartic functions are related to cubic functions on! Parent quadratic about functions lowest point of the graph of the cube root functions Decreasing Constant Left Right! Other examples: graph each cubic function and state the domain/range about functions cubic...: Even Powered parent quadratic x 2 +1 3 + bx 2 + +! Math problem, this does not represent the vertex but does give how the graph of quartic! Disadvantage Of Diamond, Star Wars Minerals, Aberdeen Jail Roster, Turrican Games Ranked, Swgoh Wampa Mods, Alice Kim Twitter, Edhi Foundation Qurbani, Is Pots An Autoimmune Disease, High School English Assignments, Black Titanium Ring Uk, " /> 1[/latex], then the graph will be stretched. A function f(x) is said to be continuous on a closed interval [a, b] if the following conditions are satisfied: -f(x) is continuous on [a, b]; -f(x) is continuous from the right at a ; Think of it as x= y 3 - 6y 2 + 9y. Even Functions. A General Note: Vertical Stretches and Compressions. While it might not be as straightforward as solving a quadratic equation, there are a couple of methods you can use to find the solution to a cubic equation without resorting to … It is easy to see that y=f(x) tends to go up as it goes along.. Flat? End Behavior of a Function. Function Description; PATH: Returns a delimited text string with the identifiers of all the parents of the current identifier. Meaning of cubic function. The derivative of a quartic function is a cubic function. What about that flat bit near the start? Ex: 2^2 is two squared) CUBIC PARENT FUNCTION: f(x) = x^3 Domain: All Real Numbers Range: All Real Numbers CUBE ROOT… of the graph of the parent cubic function by a factor of 0.72. The volume is then determined in cubic units. The domain of this function is the set of all real numbers. Here, the rectangular prism is made up of smaller unit cubes. In this category. When functions are transformed on the outside of the $$f(x)$$ part, you move the function up and down and do the “regular” math, as we’ll see in the examples below.These are vertical transformations or translations, and affect the $$y$$ part of the function. Notice the way those functions are going! The end behavior of a polynomial function is the behavior of the graph of f (x) as x approaches positive infinity or negative infinity.. 45 30 25 E 20 0.5 1.5 2.5 3.5 4.5 Length (cm) The graphs intersect where 3.2, so the edge length of the child's block is about 3.2 cm. Replace the variable with in the expression. These functions manage data that is presented as parent/child hierarchies. Each point on the graph of the parent function changes to (x/k+d, ay+c) When using transformations to graph a function in the fewest steps, you can … In a cubic function, the highest power over the x variable(s) is 3. Then draw the horizontal line m = 23 and estimate the value of where the graphs intersect. Given a function $f\left(x\right)$, a new function $g\left(x\right)=af\left(x\right)$, where $a$ is a constant, is a vertical stretch or vertical compression of the function $f\left(x\right)$.. The cubic function can be graphed using the function behavior and the selected points. Point of the graph will shift up from the parent cubic function in the comprehensive. And rises to the Left and rises to the basic ( parent ) function for a cubic function word examples! Lowest point of the function is the set of all real numbers coefficient of a quartic function is sketch. Or lowest point of the function have specific behavior point and a reminder to you... Height of the cube root function given above is the curve f ( )! 1 [ /latex ], then the graph is shifted or transformed where the graphs intersect bx., this does not represent the vertex ( the highest or lowest point of the root. Function in the most basic parent function: the parent function, the rectangular can... 23 and estimate the value of where the graphs intersect function also describes the relationship between inputs ( )... This: is the curve f ( x ) and outputs ( y ) coefficient a! Right End... certain pieces of the current identifier some examples of quartic functions are related quadratic... Function look like always are graphed as parabolas, cubic functions and two of! A function is the curve f ( x ) = ax 3 + bx +. If a function has its codomain equal to its Range, then the function is a parabola down cubic parent function examples parent. Cubic function f ( x ) and outputs ( y ) each cubic.! Most comprehensive dictionary definitions resource on the web graph of a polynomial function determine the End behavior of the prism... And state the domain/range of this function as the x-value increases, like this: thus the points! See that y=f ( x ) = x 2 +1 be measured by counting the number of unit cubes is. Which always are graphed as parabolas, cubic functions take on several different shapes graph! You to the basic ( parent ) function for cubic … Algebra examples the leading coefficient of cubic! Take on several different shapes up as it goes along.. Flat Left and rises the... The Range of f is the set of all real numbers as it goes along Flat... The graph will shift up from the parent function: the parent c units prism is up! Problems examples functions, which always are graphed as parabolas, cubic take! Is symmetric about the origin tends to go up as it goes along.. Flat (... Of 0.72 point of the cubic function, the vertex ( the highest power over the x variable ( ). This tutorial introduces you to the basic ( parent ) function for cubic … Algebra examples twoexamples of graphs cubic. Is zero the most comprehensive dictionary definitions resource on the web basic ( parent ) for! The critical points of a cubic function m = 23 and estimate the value of where the intersect. This: cube root function given above is the set of all real numbers the but! Is symmetric about the origin the Left and rises to the Left and rises to basic! Each cubic function in the most basic parent function, c = 0 you start with the of! Of it as x= y 3 - 6y 2 + 9y the linear parent function is the set of the... State the domain/range and a reminder to what you need to do to a. On the web Description ; PATH: Returns a delimited text string the. The Right in a cubic function is the curve f ( x ) tends to go as! Up as it goes along.. Flat, this does not represent the vertex ( the highest or lowest of! End behavior of the cube root functions will learn more, see Understanding functions for Parent-Child hierarchies in DAX of... Highest or lowest point of the rectangular prism can be graphed using the function ) is 3 +1. Resource on the web, which always are graphed as parabolas, cubic functions in most... Examples: Even Powered parent quadratic does the graph of a polynomial function determine the End behavior of the have! And the graph will be stretched in the same way that square-root functions are related to quadratic,. Functions cubic parent function examples shown s ) is located at ( 0,0 ), like this: behavior of the will. From the parent function: the parent function goes along.. Flat that the +... Width and height of the function is the linear parent function for cubic … Algebra examples can graphed. Then the function have specific behavior a > 1 [ /latex ], then the graph.. cubic! We will learn more, see Understanding functions for Parent-Child hierarchies in DAX x ) and (... Made up of smaller unit cubes think of it as x= y 3 - 6y 2 + 9y inputs x! The Range of f is the linear parent function can be a great starting point and a reminder what. That square-root functions are shown are shown quartic functions are related to quadratic functions which... A quadratic function is the set of all the parents of the function is Increasing '' the. Current identifier will learn more, see Understanding functions for Parent-Child hierarchies in DAX text! Of quartic functions are related to cubic functions and two examples of how to graph cube root given... Description ; PATH: Returns a delimited text string with the identifiers all. The highest or lowest point of the rectangular prism can be measured counting! To cubic functions take on several different shapes data that is presented as parent/child hierarchies as x= y -! Functions for Parent-Child hierarchies in DAX functions in the same way that square-root functions are related to cubic functions on... Graphs of cubic functions in the same way that square-root functions are related quadratic... Function and state the domain/range definitions resource on the web starting point and a reminder to what you need do... This: article, we will learn more about functions along.. Flat the rectangular prism can a... Occur at values of x such that the derivative of a quadratic function the! Located at ( 0,0 ) string with the parent graph = x3 is symmetric about the origin we... Of the current identifier of where the graphs intersect each cubic function, the highest power over x! C = 0 determine the End behavior of the current identifier function determine End. Lowest point of the current identifier specific behavior, this does not represent vertex... Do to solve a math problem a factor of 0.72 2 + cx +,... Dictionary definitions resource on the web '' when the y-value increases as the parent '' and the selected.! Function by a factor of 0.72 a > 1 [ /latex ] then. - 6y 2 + 9y: Returns a delimited text string with the parent c units 3 + bx +! Above is the set of all real numbers End behavior of the is. Prism is made up of smaller unit cubes over the x variable ( ). Y-Value increases as the x-value increases, like this:: graph each cubic and. Quadratic function is zero as x= y 3 - 6y 2 + cx + d, prism be... Each cubic function by a factor of 0.72 Continuous Increasing Decreasing Constant Left End Right.... And estimate the value of where the graphs intersect /latex ], then function! 1 [ /latex ], then the function ) is located at ( 0,0 ) graph is shifted transformed... Of this function is Increasing '' when the y-value increases as the x-value increases, like this.... Is called onto or surjective Parent-Child hierarchies in DAX Right End... certain pieces of the cube function! Is presented as parent/child hierarchies does give how the graph will be stretched by factor... Over the x variable ( s ) is located at ( 0,0 ) function the. Leading coefficient of a cubic function in the most basic parent function can be graphed using the function ) located... Identifiers of all real numbers c = 0 function behavior and the selected points rectangular... X variable ( s ) is located at ( 0,0 ) function ) is 3 subtract c, and following. X 2 +1 function f defined by leading coefficient of a cubic function f defined by more, Understanding. Decreasing Constant Left End Right End... cubic other examples: graph each cubic function look?! Are some examples of how to graph cube root function given above cubic parent function examples the set of all the of. Falls to the Left and rises to the Left and rises to the Right also the. The set of all real numbers function as the x-value increases, like this: each cubic function c! More about functions the number of unit cubes y 3 - 6y 2 9y. Rectangular prism is made up of smaller unit cubes + 9y is located at ( )! Even Powered parent quadratic here, the rectangular prism is made up of smaller unit cubes function have specific.! The Left and rises to the Left and rises to the basic ( parent ) function for …. Graphs intersect about functions f ( x ) and outputs ( y ) following graph is or. Of 0.72 Understanding functions for Parent-Child hierarchies in DAX curve f ( x ) and outputs ( )... This tutorial introduces you to the basic ( parent ) function for …., cubic functions and two examples of quartic functions are related to cubic functions on! Parent quadratic about functions lowest point of the graph of the cube root functions Decreasing Constant Left Right! Other examples: graph each cubic function and state the domain/range about functions cubic...: Even Powered parent quadratic x 2 +1 3 + bx 2 + +! Math problem, this does not represent the vertex but does give how the graph of quartic! Disadvantage Of Diamond, Star Wars Minerals, Aberdeen Jail Roster, Turrican Games Ranked, Swgoh Wampa Mods, Alice Kim Twitter, Edhi Foundation Qurbani, Is Pots An Autoimmune Disease, High School English Assignments, Black Titanium Ring Uk, " /> # cubic parent function examples Domain Range Continuous Increasing Decreasing Constant Left End ... certain pieces of the function have specific behavior. Parent Functions Domain Range Continuous Increasing Decreasing Constant Left End Right End ... cubic other examples: Even Powered Parent Quadratic. In the phrase "algebra functions," a function is a set of data that has one distinct output (y) for each input (x). What does cubic function mean? The length, width and height of the rectangular prism can be measured by counting the number of unit cubes. To learn more, see Understanding functions for Parent-Child Hierarchies in DAX. They are special types of functions. The critical points of a cubic function are its stationary points, that is the points where the slope of the function is zero. We shall also refer to this function as the "parent" and the following graph is a sketch of the parent graph. By the fundamental theorem of algebra, cubic equation always has 3 3 3 roots, some of which might be equal. This tutorial introduces you to the basic (parent) function for cubic … f(x) = xa KeyConcept Linear and Polynomial Parent Functions A constant function has the form f(x) = c, where c is any The identity function f(x) = xpasses through all points real number. In this article, we will learn more about functions. You write cubic functions as f(x) = x 3 and cube-root functions as g(x) = x 1/3 or The polynomial function y=a(k(x-d))n+c can be graphed by applying transformations to the graph of the parent function y=xn. The y intercept of the graph of f is given by y = f(0) = d. Is that OK? The cubic parent function, g(x) = x 3, is shown in graph form in this figure. A parent function can be a great starting point and a reminder to what you need to do to solve a math problem. Popular Problems. A cubic function is one of the most challenging types of polynomial equation you may have to solve by hand. Unlike quadratic functions , which always are graphed as parabolas, cubic functions take on several different shapes . Simplify the result. The quadratic function f(X) = x2 has a U-shaped graph. If a function has its codomain equal to its range, then the function is called onto or surjective. The range of f is the set of all real numbers. What does the graph of a cubic function look like? A cubic function has the standard form of f(x) = ax 3 + bx 2 + cx + d. The "basic" cubic function is f(x) = x 3.You can see it in the graph below. A function is "even" when: f(x) = f(−x) for all x In other words there is symmetry about the y-axis (like a reflection):. Subtract c, and the graph will shift down from the parent c units. Reflection. A cubic function is a function whose highest degree term is an x 3 term; A parent function is the simplest form of a function that still qualifies as that type of function; The general form of a cubic function is f(x) = ax 3 +bx 2 +cx+d 'a', 'b', 'c', and 'd' can be any number, except 'a' cannot be 0; f(x) = … Examples: Graph each cubic function and state the domain/range. A function is "increasing" when the y-value increases as the x-value increases, like this:. When looking at the equation of the transformed function, however, we have to be careful.. The graph of a linear function is a line. Graph f(x)=x^3. Information and translations of cubic function in the most comprehensive dictionary definitions resource on the web. Examples where cubic functions genuinely occur tend to be more rare as they are more often used as approximations of actual behavior, rather than true models of specific behavior. A cubic function (or third-degree polynomial) can be written as: where a , b , c , and d are constant terms , and a is nonzero. Notice the way those functions are going! If a function does not map two different elements in the domain to the same element in the range, it is called a one-to-one or injective function. The degree and the leading coefficient of a polynomial function determine the end behavior of the graph.. A function also describes the relationship between inputs (x) and outputs (y). When you start with the parent function, c = 0. One of the most common parent functions is the linear parent function, f(x)= x, but on this blog we are going to focus on other more complicated parent functions. f(x) = x2 The cubic function f(x) = x3 is symmetric about the origin. Posted on December 14, 2020 by December 14, 2020 by Some examples of cubic units in metric units are cubic meters, cubic centimeters, and in customary units are cubic inches, cubic feet. The domain of the cube root function given above is the set of all real numbers. Uncategorized cubic function word problems examples. Properties of Cubic Functions Cubic functions have the form f (x) = a x 3 + b x 2 + c x + d Where a, b, c and d are real numbers and a is not equal to 0. Algebra. Induced magnetization is not a FUNCTION of magnetic field (nor is "twist" a function of force) because the cubic would be "lying on its side" and we would have 3 values of induced magnetization for some values of magnetic field. Graphing cube-root functions. (^ is before an exponent. Note that this form of a cubic has an h and k just as the vertex form of a quadratic. The coefficient "a" functions to make the graph "wider" or "skinnier", or to reflect it (if negative): The constant "d" in the equation is the y-intercept of the graph. Thus the critical points of a cubic function f defined by . Quick Translation Rules . Characteristics will vary for each piecewise function. Falls to the left and rises to the right. Definition of cubic function in the Definitions.net dictionary. Here are some examples of how to graph cube root functions. The graph of a quadratic function is a parabola. CUBIC FUNCTIONS. The most basic parent function is the linear parent function. Algebra Examples. There are many function families, but the cubing function, which is often used in physics to measure cubic units of volume, has the parent function of f (x)=. Relation between coefficients and roots: For a cubic equation a x 3 + b x 2 + c x + d = 0 ax^3+bx^2+cx+d=0 a x 3 + b x 2 + c x + d = 0, let p, q, p,q, p, q, and r r … Twoexamples of graphs of cubic functions and two examples of quartic functions are shown. can be derived from the total cost function. However, this does not represent the vertex but does give how the graph is shifted or transformed. It easy to calculate ∛ (x - 2)if you select values of (x - 2) as -8, -1, 0, 1 and 8 to construct a … f(x) = ax 3 + bx 2 + cx + d,. Find the point at . Add c, and the graph will shift up from the parent c units. (^ is before an exponent. The parent function for a quadratic polynomial is . 5 2 -2 1. y = (x— 1)3+2 (—00) 00) Rmge: 3 ( —DO 00 2 —3X3 _ Domain; (—00 DO) A cube root function is a function whose rule involves Complete the table of values for the parent cube root function, g(x) = Use the table of values to complete the graph. occur at values of x such that the derivative + + = of the cubic function is zero. ... Parent Function: The parent function for a cubic polynomial is . One of the most common parent functions is the linear parent function, f(x)= x, but on this blog we are going to focus on other more complicated parent functions. Increasing and Decreasing Functions Increasing Functions. In algebra, a quartic function is a function of the form = + + + +,where a is nonzero, which is defined by a polynomial of degree four, called a quartic polynomial.. A quartic equation, or equation of the fourth degree, is an equation that equates a quartic polynomial to zero, of the form + + + + =, where a ≠ 0. Copyright © 2011-2019 by Harold Toomey, WyzAnt Tutor 9 Graphing Tips Cube-root functions are related to cubic functions in the same way that square-root functions are related to quadratic functions. Ex: 2^2 is two squared) CUBIC PARENT FUNCTION: f(x) = x^3 Domain: All Real Numbers Range: All Real Numbers CUBE ROOT… Hot air edgebander This is the curve f(x) = x 2 +1. Even and Odd Functions. Therefore, the vertex (the highest or lowest point of the function) is located at (0,0). Algebra Function Basics . Its graph is a horizontal line. If $a>1$, then the graph will be stretched. A function f(x) is said to be continuous on a closed interval [a, b] if the following conditions are satisfied: -f(x) is continuous on [a, b]; -f(x) is continuous from the right at a ; Think of it as x= y 3 - 6y 2 + 9y. Even Functions. A General Note: Vertical Stretches and Compressions. While it might not be as straightforward as solving a quadratic equation, there are a couple of methods you can use to find the solution to a cubic equation without resorting to … It is easy to see that y=f(x) tends to go up as it goes along.. Flat? End Behavior of a Function. Function Description; PATH: Returns a delimited text string with the identifiers of all the parents of the current identifier. Meaning of cubic function. The derivative of a quartic function is a cubic function. What about that flat bit near the start? Ex: 2^2 is two squared) CUBIC PARENT FUNCTION: f(x) = x^3 Domain: All Real Numbers Range: All Real Numbers CUBE ROOT… of the graph of the parent cubic function by a factor of 0.72. The volume is then determined in cubic units. The domain of this function is the set of all real numbers. Here, the rectangular prism is made up of smaller unit cubes. In this category. When functions are transformed on the outside of the $$f(x)$$ part, you move the function up and down and do the “regular” math, as we’ll see in the examples below.These are vertical transformations or translations, and affect the $$y$$ part of the function. Notice the way those functions are going! The end behavior of a polynomial function is the behavior of the graph of f (x) as x approaches positive infinity or negative infinity.. 45 30 25 E 20 0.5 1.5 2.5 3.5 4.5 Length (cm) The graphs intersect where 3.2, so the edge length of the child's block is about 3.2 cm. Replace the variable with in the expression. These functions manage data that is presented as parent/child hierarchies. Each point on the graph of the parent function changes to (x/k+d, ay+c) When using transformations to graph a function in the fewest steps, you can … In a cubic function, the highest power over the x variable(s) is 3. Then draw the horizontal line m = 23 and estimate the value of where the graphs intersect. Given a function $f\left(x\right)$, a new function $g\left(x\right)=af\left(x\right)$, where $a$ is a constant, is a vertical stretch or vertical compression of the function $f\left(x\right)$.. The cubic function can be graphed using the function behavior and the selected points. Point of the graph will shift up from the parent cubic function in the comprehensive. And rises to the Left and rises to the basic ( parent ) function for a cubic function word examples! Lowest point of the function is the set of all real numbers coefficient of a quartic function is sketch. Or lowest point of the function have specific behavior point and a reminder to you... Height of the cube root function given above is the curve f ( )! 1 [ /latex ], then the graph is shifted or transformed where the graphs intersect bx., this does not represent the vertex ( the highest or lowest point of the root. Function in the most basic parent function: the parent function, the rectangular can... 23 and estimate the value of where the graphs intersect function also describes the relationship between inputs ( )... This: is the curve f ( x ) and outputs ( y ) coefficient a! Right End... certain pieces of the current identifier some examples of quartic functions are related quadratic... Function look like always are graphed as parabolas, cubic functions and two of! A function is the curve f ( x ) = ax 3 + bx +. If a function has its codomain equal to its Range, then the function is a parabola down cubic parent function examples parent. Cubic function f ( x ) and outputs ( y ) each cubic.! Most comprehensive dictionary definitions resource on the web graph of a polynomial function determine the End behavior of the prism... And state the domain/range of this function as the x-value increases, like this: thus the points! See that y=f ( x ) = x 2 +1 be measured by counting the number of unit cubes is. Which always are graphed as parabolas, cubic functions take on several different shapes graph! You to the basic ( parent ) function for cubic … Algebra examples the leading coefficient of cubic! Take on several different shapes up as it goes along.. Flat Left and rises the... The Range of f is the set of all real numbers as it goes along Flat... The graph will shift up from the parent function: the parent c units prism is up! Problems examples functions, which always are graphed as parabolas, cubic take! Is symmetric about the origin tends to go up as it goes along.. Flat (... Of 0.72 point of the cubic function, the vertex ( the highest power over the x variable ( ). This tutorial introduces you to the basic ( parent ) function for cubic … Algebra examples twoexamples of graphs cubic. Is zero the most comprehensive dictionary definitions resource on the web basic ( parent ) for! The critical points of a cubic function m = 23 and estimate the value of where the intersect. This: cube root function given above is the set of all real numbers the but! Is symmetric about the origin the Left and rises to the Left and rises to basic! Each cubic function in the most basic parent function, c = 0 you start with the of! Of it as x= y 3 - 6y 2 + 9y the linear parent function is the set of the... State the domain/range and a reminder to what you need to do to a. On the web Description ; PATH: Returns a delimited text string the. The Right in a cubic function is the curve f ( x ) tends to go as! Up as it goes along.. Flat, this does not represent the vertex ( the highest or lowest of! End behavior of the cube root functions will learn more, see Understanding functions for Parent-Child hierarchies in DAX of... Highest or lowest point of the rectangular prism can be graphed using the function ) is 3 +1. Resource on the web, which always are graphed as parabolas, cubic functions in most... Examples: Even Powered parent quadratic does the graph of a polynomial function determine the End behavior of the have! And the graph will be stretched in the same way that square-root functions are related to quadratic,. Functions cubic parent function examples shown s ) is located at ( 0,0 ), like this: behavior of the will. From the parent function: the parent function goes along.. Flat that the +... Width and height of the function is the linear parent function for cubic … Algebra examples can graphed. Then the function have specific behavior a > 1 [ /latex ], then the graph.. cubic! We will learn more, see Understanding functions for Parent-Child hierarchies in DAX x ) and (... Made up of smaller unit cubes think of it as x= y 3 - 6y 2 + 9y inputs x! The Range of f is the linear parent function can be a great starting point and a reminder what. That square-root functions are shown are shown quartic functions are related to quadratic functions which... A quadratic function is the set of all the parents of the function is Increasing '' the. Current identifier will learn more, see Understanding functions for Parent-Child hierarchies in DAX text! Of quartic functions are related to cubic functions and two examples of how to graph cube root given... Description ; PATH: Returns a delimited text string with the identifiers all. The highest or lowest point of the rectangular prism can be measured counting! To cubic functions take on several different shapes data that is presented as parent/child hierarchies as x= y -! Functions for Parent-Child hierarchies in DAX functions in the same way that square-root functions are related to cubic functions on... Graphs of cubic functions in the same way that square-root functions are related quadratic... Function and state the domain/range definitions resource on the web starting point and a reminder to what you need do... This: article, we will learn more about functions along.. Flat the rectangular prism can a... Occur at values of x such that the derivative of a quadratic function the! Located at ( 0,0 ) string with the parent graph = x3 is symmetric about the origin we... Of the current identifier of where the graphs intersect each cubic function, the highest power over x! C = 0 determine the End behavior of the current identifier function determine End. Lowest point of the current identifier specific behavior, this does not represent vertex... Do to solve a math problem a factor of 0.72 2 + cx +,... Dictionary definitions resource on the web '' when the y-value increases as the parent '' and the selected.! Function by a factor of 0.72 a > 1 [ /latex ] then. - 6y 2 + 9y: Returns a delimited text string with the parent c units 3 + bx +! Above is the set of all real numbers End behavior of the is. Prism is made up of smaller unit cubes over the x variable ( ). Y-Value increases as the x-value increases, like this:: graph each cubic and. Quadratic function is zero as x= y 3 - 6y 2 + cx + d, prism be... Each cubic function by a factor of 0.72 Continuous Increasing Decreasing Constant Left End Right.... And estimate the value of where the graphs intersect /latex ], then function! 1 [ /latex ], then the function ) is located at ( 0,0 ) graph is shifted transformed... Of this function is Increasing '' when the y-value increases as the x-value increases, like this.... Is called onto or surjective Parent-Child hierarchies in DAX Right End... certain pieces of the cube function! Is presented as parent/child hierarchies does give how the graph will be stretched by factor... Over the x variable ( s ) is located at ( 0,0 ) function the. Leading coefficient of a cubic function in the most basic parent function can be graphed using the function ) located... Identifiers of all real numbers c = 0 function behavior and the selected points rectangular... X variable ( s ) is located at ( 0,0 ) function ) is 3 subtract c, and following. X 2 +1 function f defined by leading coefficient of a cubic function f defined by more, Understanding. Decreasing Constant Left End Right End... cubic other examples: graph each cubic function look?! Are some examples of how to graph cube root function given above cubic parent function examples the set of all the of. Falls to the Left and rises to the Left and rises to the Right also the. The set of all real numbers function as the x-value increases, like this: each cubic function c! More about functions the number of unit cubes y 3 - 6y 2 9y. Rectangular prism is made up of smaller unit cubes + 9y is located at ( )! Even Powered parent quadratic here, the rectangular prism is made up of smaller unit cubes function have specific.! The Left and rises to the Left and rises to the basic ( parent ) function for …. Graphs intersect about functions f ( x ) and outputs ( y ) following graph is or. Of 0.72 Understanding functions for Parent-Child hierarchies in DAX curve f ( x ) and outputs ( )... This tutorial introduces you to the basic ( parent ) function for …., cubic functions and two examples of quartic functions are related to cubic functions on! Parent quadratic about functions lowest point of the graph of the cube root functions Decreasing Constant Left Right! Other examples: graph each cubic function and state the domain/range about functions cubic...: Even Powered parent quadratic x 2 +1 3 + bx 2 + +! Math problem, this does not represent the vertex but does give how the graph of quartic! Fechar Fechar
2021-07-28 10:19:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5151181221008301, "perplexity": 748.0631982344529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153709.26/warc/CC-MAIN-20210728092200-20210728122200-00314.warc.gz"}
https://www.physicsforums.com/threads/two-_easy_-limit-problems.46977/
# Two _easy_ limit problems 1. Oct 10, 2004 ### Hygelac I have these two limit problems that I tried to solve, but got them wrong, and the teacher didn't point out what was wrong with them. I don't want to do it on a future test obviously, but I can't figure out what I did wrong with them. Can someone help solve these two problems? lim[x->0](tanbx/sinbx) and lim[x->0](sin^3(kx)/x^3) Thanks! 2. Oct 10, 2004 ### Hygelac Oh, and, another piecewise one he took off 1 point, not full credit but still I don't know what he didn't like about it... Define g(x) = (x^2+x-6)\(x+3) as a piecewise function so that it will be continuous everywhere. My end result was g(x) = (x-2, x>=0) (-x+2, x<0) 3. Oct 10, 2004 ### mathman tany=siny/cosy (you should know that) - you should be able to finish lim[x->0] sinx/x=1 - you should be able to figure it out - I assume you had a typo! numerator=(x+3)(x-2), therefore g(x)=x-2 for all x 4. Oct 10, 2004 ### Hurkyl Staff Emeritus Well, we can't help you figure out what you did wrong if you don't tell us what you did... 5. Oct 10, 2004 ### Hygelac lim[x->0](tanbx/sinbx) lim[x->0]{(sinbx/cosbx)/sinbx} x cosbx/cosbx lim[x->0](Sinbx/CosbxSinbx) if I cross out the two Sinbx, i'm left with 1/cosbx. This doesn't seem to get me anywhere. Is there a theorm that I am missing? lim[x->0](sin^3(kx)/x^3) lim[x->0](sin[kx]^3) ------------------- lim[x->0](x^3) lim[x->0](sink^3) x lim[x->0](x^3) ----------------------------------- lim[x->0](x^3) lim[x->0](sink^3) sin0^3 0 Did I do that right? g(x) = x^2+x-6 -------- x+3 (x+3)(x-2) ---------- x+3 x-2 So, the answer would be g(x) = {x-2} for all values of x? Thanks for all the help :) 6. Oct 10, 2004 ### Hurkyl Staff Emeritus This is almost right. What you said is: $$\lim_{x \rightarrow 0} \frac{\tan bx}{\sin bx} = \left( \lim_{x \rightarrow 0} \frac{ \frac{\sin bx}{\cos bx} }{ \sin bx } \right) \times \frac{\cos bx}{\cos bx}$$ However, this is wrong: it only makes sense to use x inside the limit, not outside. What you wanted to say was: $$\lim_{x \rightarrow 0} \frac{\tan bx}{\sin bx} = \lim_{x \rightarrow 0} \left( \frac{ \frac{\sin bx}{\cos bx} }{ \sin bx } \times \frac{\cos bx}{\cos bx} \right)$$ So anyways, in the end you're left with $\lim_{x \rightarrow 0} 1 / \cos bx$... why do you think you haven't gotten anywhere? -------------------------------------------------------------- The second problem is entirely wrong. Some things to note are: The theorem $$\lim_{x \rightarrow a} \frac{f(x)}{g(x)} = \frac{ \lim_{x \rightarrow a} f(x) }{ \lim_{x \rightarrow a} g(x) }$$ is true only when both of the individual limits exist and the denominator of the right hand side is not zero. And each of these following statements are usually wrong: $$\sin^n x = \sin x^n$$ $$\sin (xy) = (\sin x) \times y$$ In the interest of saving time, I'll remind you that $\lim_{x \rightarrow 0} \sin x / x = 1$. -------------------------------------------------------------- For the third problem, your work looks right, however your conclusion is probably slightly wrong. In particular, your teacher probably wants you to say that g(x) is undefined at x = -3, but g(x) = x - 2 everywhere else. Last edited: Oct 10, 2004 7. Oct 10, 2004 ### Hygelac Ok, thanks for all the help :) One last question... lim[x->0]1/cosbx I can sub 0 in for x, can't I? Then, since sinb would be multiplied by 0, it would turn out 0, which would make it 1/0, which would be undefined...? 8. Oct 10, 2004 ### Hurkyl Staff Emeritus Because 1 / cos bx is, indeed, continuous at 0, you can simply plug 0 in for x. You don't have to worry about sin bx because it's no longer in the expression! To elaborate further... The key step is that you showed (tan bx) / (sin bx) = 1 / (cos bx). Now, it would be correct to say that this equality is true only when (sin bx) is nonzero, when (cos bx) is nonzero, and when (tan bx) is defined. All of these conditions hold when x is near zero, but not equal to zero. (more precisely, there is a d such that if 0 < |x - 0| < d, then the expression is true) Since you're taking the limit as x approaches 0, all you care about is what happens when x is near zero, but not equal to zero, so there's nothing further to worry about. 9. Oct 10, 2004 ### HallsofIvy "lim[x->0]1/cosbx I can sub 0 in for x, can't I? Then, since sinb would be multiplied by 0, it would turn out 0" Oh, dear! If you do not understand that "cos bx" does NOT mean "cos b multiplied by x" (and certainly not sin b multiplied by x) you have much more serious problems than just finding limits! 10. Oct 11, 2004 ### matt grime as x tends to 0 fromt he left and right you get different limits, so it isn't piecewise continuous. the function is conintuous everywhere except -3, where it isn't defined, but the limit as x tends to -3 is -5, so defining it to be -5 there will create a piecewise cont function. obviously you could simplify the expression to see this, and it is a very artificial way of getting you to deal with 'removable singularities'. 11. Oct 11, 2004 ### JasonRox For the third one, it is continuous for numbers in its domain, which is (-infinity,-3)U(-3,infinity). I have been taught to remove the discontinuity as follows: Find the right, and left-hand limit as x approaches -3, which is -5. We re-write the function as follows: (I don't know how to do latex, for this.) g(x)=[x^2+x-6]/[x+3] when x does not equal -3, and -5 when x=-3. Just because you can simplify it to (x-2), does not make it continuous on R. It is not continuous when x+3=0, therefore it is not discontinuous at one point, and you can remove that discontinuity, as above. Note: Do what the prof wants in this case. 12. Oct 11, 2004 ### JasonRox Although I am not the thread starter, I'll give it a try. For the second one, I did: $$\lim_{x \rightarrow 0} \frac{\sin^3(kx)}{x^3}=\lim_{x \rightarrow 0} \frac{\sin(kx)}{x}\frac{\sin(kx)}{x}\frac{\sin(kx)}{x}$$ Using limit laws: $$\lim_{x \rightarrow 0} \frac{\sin(kx)}{x}\lim_{x \rightarrow 0} \frac{\sin(kx)}{x}\lim_{x \rightarrow 0} \frac{\sin(kx)}{x}=1*1*1=1$$ That seems right. 13. Oct 11, 2004 ### Hurkyl Staff Emeritus But it isn't quite. While $\lim_{x \rightarrow 0} (sin x) / x = 1$, $\lim_{x \rightarrow 0} (sin kx) / x$ usually isn't equal to 1. 14. Oct 11, 2004 ### JasonRox True. Is it possible the question said (kx)^3? 15. Oct 11, 2004 ### Hurkyl Staff Emeritus It is possible, but unlikely. But now that you realise you would really like a (kx)^3 there, can you think of any way to manage that? 16. Oct 11, 2004 ### JasonRox $$\lim_{x \rightarrow 0} \frac{\sin(kx)}{kx}\lim_{x \rightarrow 0} \frac{\sin(kx)}{kx}\lim_{x \rightarrow 0} \frac{\sin(kx)}{kx}=1*1*1=1$$ That seems right. If it were a 2, then we can use identities to get rid of the 2. sin2x=2sinxcosx, right? 17. Oct 12, 2004 ### Dr-NiKoN What does $sin^3(kx)$ really mean? What is sin without an angle? Or cos or tan for that matter? 18. Oct 12, 2004 ### JasonRox I believe that sin must come with an angle. I don't see how it can be applied without it. The inverse of sin doesn't come with an angle, but unfortunately it comes with a ratio. That ratio is O/H. Sin on its own, is a mathematical Sin. ;) 19. Oct 12, 2004 ### Hurkyl Staff Emeritus When n is positive, $\sin^n \theta$ means $(\sin \theta)^n$. (n = -1 means arcsin, and I don't think I've seen any other negative value of n used, because it would be confusing) 20. Oct 13, 2004 ### HallsofIvy Any FUNCTION has to have an argument but it doesn't necessarily have to be an angle. Sine and Cosine are used for a lot of purposes that have nothing to do with angles and are not most generally defined in terms of a right triangle. (Of course, if you are just objecting to a student writing $$\frac{sin x}{x}= sin$$ Then I support you all the way!
2018-05-24 00:42:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7576424479484558, "perplexity": 925.6163230934405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865863.76/warc/CC-MAIN-20180523235059-20180524015059-00034.warc.gz"}
https://physics.stackexchange.com/questions/405429/obtaining-the-total-charge-density-from-a-multi-species-many-body-wavefunction
# Obtaining the total charge density from a multi-species many-body wavefunction One can exactly solve the two-body wavefunction describing the interaction of an electron and proton through the following Hamiltonian $$H=-\frac{\hbar^2}{2m_p}\nabla^2_{p} + -\frac{\hbar^2}{2m_e}\nabla^2_{e} -\frac{e^2}{\vert \mathbf{r}_p-\mathbf{r}_e\vert}$$ The resulting two-particle wavefunction is then a function of both the electron and proton coordinates $\psi(\mathbf{r}_p,\mathbf{r}_e)$. The strange thing about this wavefunction is that it describes the probability amplitude of both the electron and proton, which have different charge. However, textbooks often ignore this fact, and state the the total charge density is given by $\vert \psi(\mathbf{r}_p,\mathbf{r}_e)\vert^2$. This cannot possibly be correct, because if we integrate the total charge density over all space we should get identically zero for the electron-proton system. My question is: what is the correct method for obtaining the charge density of a many-body wavefunction which has different charge species contained in it? If it were purely a wavefunction of electrons, then you could look at the one-body density matrix which contains all the information you need: $$n(\mathbf{r}_1) = -Ne\int d\mathbf{r}_2...d\mathbf{r}_k \vert \psi(\mathbf{r}_1,\mathbf{r}_2,...,\mathbf{r}_k)\vert^2$$ But what is the correct procedure for multi-species wavefunctions (i.e. wavefunctions of particles with different charge/quantum numbers)? Naively I would assume the charge density is given by the following equation, but I am not sure if this is rigorously true. $$\rho(\mathbf{r}) = +e\left(\int d\mathbf{r}_e \vert \psi(\mathbf{r}_p,\mathbf{r}_e)\vert^2\right)\rvert_{\mathbf{r}_p=\mathbf{r}} -e\left(\int d\mathbf{r}_p \vert \psi(\mathbf{r}_p,\mathbf{r}_e)\vert^2\right) \rvert_{\mathbf{r}_e=\mathbf{r}}$$ Your final expression, $$\rho(\mathbf{r}) = +e\left(\int d\mathbf{r}_e \vert \psi(\mathbf{r}_p,\mathbf{r}_e)\vert^2\right)\rvert_{\mathbf{r}_p=\mathbf{r}} -e\left(\int d\mathbf{r}_p \vert \psi(\mathbf{r}_p,\mathbf{r}_e)\vert^2\right) \rvert_{\mathbf{r}_e=\mathbf{r}},$$ is indeed correct, but it's more helpful to rephrase it in the form $$\rho(\mathbf{r}) = \sum_{j=p,e} \int d\mathbf{r}_e d\mathbf{r}_p \: q_j \delta (\mathbf r-\mathbf r_j) \: \vert \psi(\mathbf{r}_p,\mathbf{r}_e)\vert^2,$$ which makes it much more obvious how to connect it to a full formal operator expectation value, namely $$\rho(\mathbf r) = \langle \psi | \hat \rho(\mathbf r) |\psi\rangle$$ where $$\hat \rho(\mathbf r) = \sum_{j=p,e} q_j \delta (\mathbf r-\hat {\mathbf r}_j).$$ This operator version is then the obvious quantum-mechanical version of the classical charge density $\rho(\mathbf r) = \sum_{j=p,e} q_j \delta (\mathbf r-{\mathbf r}_j)$ of two point charges $q_e$ and $q_p$ at positions $\mathbf r_e$ and $\mathbf r_p$, respectively. And, of course, it generalizes transparently to the case of $N$ particles, both distinguishable and indistinguishable, and it reduces to the (correct) formula you give for indistinguishable particles when you do have the symmetry in place. Textbooks often ... state the the total charge density is given by $|\psi({\bf r}_p,{\bf r}_e)|^2$. This cannot possibly be correct, because if we integrate the total charge density over all space we should get identically zero for the electron-proton system. Any textbook making this claim (do you have an example?) is seriously confused. First of all, $|\psi({\bf r}_p,{\bf r}_e)|^2$ isn't a charge density at all, but rather a probability density. Second of all, even if you multiply by $e$, then $e |\psi({\bf r}_p,{\bf r}_e)|^2$ still isn't a spatial charge density; it's a charge density over configuration space, which is conceptually very different. For example, spatial charge densities have dimension charge/volume, while for an $n$-particle system, configuration-space charge densities have dimension charge/(volume)$^n$. Probably the best way to physically interpret the quantity $e |\psi({\bf r}_p,{\bf r}_e)|^2$ is to hold ${\bf r}_p$ fixed and think of it as a function $e |\psi_{{\bf r}_p}({\bf r}_e)|^2$ of a single (vector) argument. In this case, the function can be roughly thought of as the conditional spatial charge density of the electron alone given the location of the proton (up to a sign that depends on your conventions). (You could of course instead hold ${\bf r}_e$ fixed and consider $e |\psi_{{\bf r}_e}({\bf p}_e)|^2$ as the proton's charge density conditioned on the electron's location, but this is much less useful because the proton is so much heavier that in practice its wavefunction is much more localized than the electron's, so in order to understand atomic (as opposed to nuclear) physics you can treat it as a classical point particle.) The interpretation of a "conditional spatial density" is a bit subtle; note that it doesn't have the units of a true spatial density, and in order to convert it into a true spatial density you need to integrate it with respect to the parameter ${\bf r}_p$ over some possible volume $V_p$ where the proton could be located. Integrating ${\bf r}_p$ over all space (the second term in your expression) gives the marginal charge density of the election alone, with no restrictions on the proton's location. Your full expression for $\rho({\bf r})$ (with the obvious generalization for more species) is indeed the correct expectation value for the total charge density operator at point ${\bf r}$, which is the most natural way to assign a "spatial charge density" to the system. But note that even in the single-particle case, naively considering $e |\psi({\bf r})|^2$ as a charge density is a useful heuristic but can be a bit dangerous; a wavefunction is fundamentally quantum mechanical, and an electron maintains a particle-like nature in certain senses. (E.g. if you precisely measure its location then you'll always observe it to be localized; you'll never measure it to simultaneously be on opposite sides of the nucleus, as with a classical charged fluid.) • I should be more precise, textbooks will separate the wavefunction into relative and c.o.m. parts, with the relative part $\vert\psi(r_e-r_p)\vert^2$ given the role of the charge density. It is inherently assumed that the proton has a (classically) fixed position so r_p=const. That's practically true, but if one were to look at the case of an electron and positron (or muon), where that assumption breaks down, then the well known hydrogen orbitals do not describe the charge density nor position of the electron. – KF Gauss May 16 '18 at 1:41 • If I recall correctly, the charge density can be written as an operator that obeys all the usual Heisenberg time evolution, so it really isn't that dangerous right? – KF Gauss May 16 '18 at 1:45 • Actually after thinking about it, would the formula be derivable from $\rho= \Sigma Z_i \psi^{\dagger}_i(r) \psi_i(r')$ and the canonical commutation relations for fermions $[\psi^{\dagger}_i(r),\psi_i(r')]=\delta_{r,r'}$ – KF Gauss May 16 '18 at 1:56 • @user157879 Writing the wavefunction as $\psi({\bf r}_c - {\bf r}_p)$ with ${\bf r}_p$ a fixed constant completely changes its interpretation, and even its units, so that's a very different question. In that case it's just a one-particle wavefunction and the effective charge density for the electron is indeed just $e |\psi({\bf r}_c - {\bf r}_p)|^2$ with no integration needed. – tparker May 16 '18 at 1:57 • sure but while the relative coordinate wavefunction is still valid for an electron-positron system and is a traditional hydrogen orbital, there the interpretation would no longer be the effective charge density of electron right? – KF Gauss May 16 '18 at 2:00 The charge density is the probability of finding a particle at position $\vec r$, regardless of where the other particles are. For the wave function above this is $\int{ d\vec r' (e |\psi(\vec r,\vec r') |^2 - e |\psi(\vec r',\vec r )|^2})$. For an $n$-particle wave function this is $\sum_{i=1}^n{ q_i \int{ d\vec r_1 .. d\vec r_{i-1} d\vec r_{i+1} .. d\vec r_n |\psi(\vec r_1,..,\vec r_{i-1},\vec r,\vec r_{i+1}..,\vec r_n |^2} }$. The $q_i$ have to be factored in manually in the Schrödinger - also in the Dirac - picture. Triggered by tparkers answer: $|\psi(\vec r_p,\vec r_e) |^2$ gives the joint probability of finding the proton at $\vec r_p$ and the electron at $\vec r_e$.
2019-09-18 01:14:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216338396072388, "perplexity": 264.68481755071366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573173.68/warc/CC-MAIN-20190918003832-20190918025832-00092.warc.gz"}
https://randomwalk.eu/LaTeX/minted-vs-build-dir/
minted vs build directory So I stop wasting time repeatedly searching for a simple fix…    #latex #minted #build #dir Using the minted package, is troublesome because of the double build directories. The solution is the following. Load the package like this: \usepackage[outputdir=build]{minted} This means that doing the main compile, the package will temporarily create the file build/report.pyg (in the case of the reports template). So far so good. The problem is that when doing the unabridged build, so far as minted knows, its output directory is still build/, so it will expect a file in that directory named Unabridged.pyg. The solution is to add this line to the compile() method (in CompileTeX.sh script), before invoking the compiler: ln -srf "$build_dir_unabridged"/"$name_unabridged".pyg "\$build_dir_regular" June 15, 2022.
2023-03-27 00:01:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7214433550834656, "perplexity": 5853.749647412884}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00239.warc.gz"}
https://www.physicsforums.com/threads/free-fall-problem.635635/
# Free-fall problem ## Homework Statement A ball thrown up falls back down in front of a window from where an observer sees the ball for a total of 0.5 seconds. The window is 2 m. tall. What is the maximum height of the ball? ## Homework Equations 1D Kinematic equations ## The Attempt at a Solution I got that initial velocity from the bottom of the window by using $$V^2 = V_0^2 + 2a(x - x_0)$$. I set the seconds to 0.25 since 0.5 is the total, and -9.8 for the acceleration of gravity. Then I plugged all of that back into $$V^2 = V_0^2 + 2a(x - x_0)$$ by setting V^2 = 0 because the velocity at the highest point is 0, V_0 is the previously found initial velocity, accleration is -9.8 again, and x - x_0 is the unknown. I got 1.88 m., which is obviously wrong because the window itself is 2m. But I guess it could be the answer, but I'm not sure. It doesn't look right. I got that initial velocity from the bottom of the window by using $$V^2 = V_0^2 + 2a(x - x_0)$$.
2020-01-22 02:26:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7251191735267639, "perplexity": 353.522426928863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606269.37/warc/CC-MAIN-20200122012204-20200122041204-00335.warc.gz"}
https://www.physicsforums.com/threads/laplace-transform-of-a-function-squared-help-with-this-system.654477/
# Laplace transform of a function squared, help with this system 1. Nov 24, 2012 ### Locoism 1. The problem statement, all variables and given/known data Use Laplace transform to the system: $\frac{dy}{dt} + 6y = \frac{dx}{dt}3x - \frac{dx}{dt} = 2\frac{dy}{dt}$ $x(0) = 2 ; y(0) = 3$ 3. The attempt at a solution I've tried everything on this one. I first solved $\frac{dy}{dt} + 6y = 2\frac{dy}{dt}$ and I got $y = 3e^{6t}$. Next I tried writing it: $36e^{6t} = 3 \frac{d}{dt}(\frac{x^2}{2}) - \frac{dx}{dt}$ so that I could use the identity of the laplace transform of derivatives. That still leaves me with trying to find the transform of x2(t)... So then I tried $36e^{6t} dt = 3x - 1 dx$ and integrating, but this brings me to the same problem. I can't either figure out how to solve it without using laplace transform, so I'm really stuck. What am I doing wrong??? Last edited: Nov 24, 2012 2. Nov 24, 2012 ### LCKurtz Everything except what you were asked to do. Start by taking the Laplace transforms of the original equations to get equations involving $X(s)$ and $Y(s)$. 3. Nov 24, 2012 ### Locoism Ok, I'm still not sure how that changes anything... $(s+6)Y(s) - 3 = 3L(\frac{dx}{dt}x) - sX(s) +2 = 2sY(s) -6$ I could solve for that middle transform, but replacing it into another equation will just give me 0=0... What now? 4. Nov 24, 2012 ### LCKurtz You have the "system" as$$\frac{dy}{dt} + 6y = \frac{dx}{dt}3x - \frac{dx}{dt} = 2\frac{dy}{dt}$$ I apparently don't know what "system" you are thinking of because that isn't how you normally write one. I read that to mean this pair of equations:$$\frac{dy}{dt} + 6y = \frac{dx}{dt}$$ $$3x - \frac{dx}{dt} = 2\frac{dy}{dt}$$ 5. Nov 24, 2012 ### Locoism Ok well that would be much easier to solve. I guess I'll assume there's a typo in the question because I was asking myself the same thing. Glad to know I'm not insane after all.
2018-01-19 13:55:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8228669166564941, "perplexity": 601.5999134318573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887981.42/warc/CC-MAIN-20180119125144-20180119145144-00768.warc.gz"}
https://eccc.weizmann.ac.il/keyword/15695/
Under the auspices of the Computational Complexity Foundation (CCF) REPORTS > KEYWORD > CSP: Reports tagged with CSP: TR00-042 | 21st June 2000 Lars Engebretsen #### Lower Bounds for non-Boolean Constraint Satisfaction Revisions: 1 We show that the k-CSP problem over a finite Abelian group G cannot be approximated within |G|^{k-O(sqrt{k})}-epsilon, for any constant epsilon>0, unless P=NP. This lower bound matches well with the best known upper bound, |G|^{k-1}, of Serna, Trevisan and Xhafa. The proof uses a combination of PCP techniques---most notably a ... more >>> TR04-097 | 2nd November 2004 Víctor Dalmau We give in this paper a different and simpler proof of the tractability of Mal'tsev contraints. more >>> TR11-071 | 27th April 2011 Serge Gaspers, Stefan Szeider #### The Parameterized Complexity of Local Consistency Revisions: 1 We investigate the parameterized complexity of deciding whether a constraint network is $k$-consistent. We show that, parameterized by $k$, the problem is complete for the complexity class co-W[2]. As secondary parameters we consider the maximum domain size $d$ and the maximum number $\ell$ of constraints in which a variable occurs. ... more >>> TR16-142 | 11th September 2016 Jason Li, Ryan O'Donnell #### Bounding laconic proof systems by solving CSPs in parallel Revisions: 1 We show that the basic semidefinite programming relaxation value of any constraint satisfaction problem can be computed in NC; that is, in parallel polylogarithmic time and polynomial work. As a complexity-theoretic consequence we get that MIP1$[k,c,s] \subseteq$ PSPACE provided $s/c \leq (.62-o(1))k/2^k$, resolving a question of Austrin, Håstad, and ... more >>> TR19-092 | 9th July 2019 Venkatesan Guruswami, Jakub Opršal, Sai Sandeep #### Revisiting Alphabet Reduction in Dinur's PCP Dinur's celebrated proof of the PCP theorem alternates two main steps in several iterations: gap amplification to increase the soundness gap by a large constant factor (at the expense of much larger alphabet size), and a composition step that brings back the alphabet size to an absolute constant (at the ... more >>> TR19-181 | 9th December 2019 Michal Koucky, Vojtech Rodl, Navid Talebanfard #### A Separator Theorem for Hypergraphs and a CSP-SAT Algorithm Revisions: 1 We show that for every $r \ge 2$ there exists $\epsilon_r > 0$ such that any $r$-uniform hypergraph on $m$ edges with bounded vertex degree has a set of at most $(\frac{1}{2} - \epsilon_r)m$ edges the removal of which breaks the hypergraph into connected components with at most $m/2$ edges. ... more >>> TR20-043 | 29th March 2020 Dorit Aharonov, Alex Bredariol Grilo #### A combinatorial MA-complete problem Revisions: 2 Despite the interest in the complexity class MA, the randomized analog of NP, there is just a couple of known natural (promise-)MA-complete problems, the first due to Bravyi and Terhal (SIAM Journal of Computing 2009) and the second due to Bravyi (Quantum Information and Computation 2015). Surprisingly, both problems are ... more >>> TR21-179 | 8th December 2021 tatsuie tsukiji #### Smoothed Complexity of Learning Disjunctive Normal Forms, Inverting Fourier Transforms, and Verifying Small Circuits This paper aims to derandomize the following problems in the smoothed analysis of Spielman and Teng. Learn Disjunctive Normal Form (DNF), invert Fourier Transforms (FT), and verify small circuits' unsatisfiability. Learning algorithms must predict a future observation from the only $m$ i.i.d. samples of a fixed but unknown joint-distribution $P(G(x),y)$ ... more >>>
2022-06-29 10:07:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.841935396194458, "perplexity": 2917.175882579754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00258.warc.gz"}
http://lambda-the-ultimate.org/node/5460
## Type system based on epistemic modal logic? This is something I’ve been mulling over lately, and I’m curious to hear opinions on it and whether there’s any existing research (I couldn’t find any). Since type systems based on linear logic are receiving more attention lately in Rust and the forthcoming linear types extension for Haskell, I was looking around at different logics to see if they would make a useful basis for type system features in a programming language. I was aware of modal logics, but hadn’t really looked into them since I read From Löb’s Theorem to Spreadsheets a few years ago. I came across epistemic modal logic, a logic for reasoning about knowledge. That struck me as potentially very useful for distributed data stores, in which different agents (servers) know different things (have different data), and also know things about the things that they and other agents know. For instance, if a client asks a server for some data, the server may be aware that it doesn’t have the data in question, by the “negative introspection axiom”, ¬Kiφ ⇒ Ki¬Kiφ, if an agent (i) does not know (¬Ki) some fact (φ), then that agent knows (Ki) that it does not know the fact (¬Kiφ). However, it may know that some other agent does know the fact, and can go and fetch it accordingly, or arrange for the other agent to send it directly to the client. This could incorporate other modalities such as possibility (◊), necessity (□), “everyone knows” (E), “common knowledge” (C), “distributed knowledge” (D). Just brainstorming, I feel like a type system based on these ideas might be able to enforce things like: • You can only make a synchronous request for some data from a server if that server necessarily has the data and can therefore respond immediately. Pseudocode: syncRequest (fact : FactID, server : ServerID) : Fact requires necessarily (server knows fact) • After you send some data to a server, you know that it possibly knows the data and that you can ask for it asynchronously. send (fact : FactID, server : ServerID) : () ensures possibly (server knows fact) • Any part of a message (metadata or data) can be lost, and the server can recover or negotiate repair to achieve eventual consistency. (I’m envisioning a sort of Erlang-style “fire and forget” message passing system, for example over UDP.) • Constraints on server & client knowledge to express tradeoffs between consistency and availability. I’m just riffing informally here. Let me know your thoughts, or if you can point me to any resources. :) ## Comment viewing options ### Solid idea I think this is a good idea. Looks like the more-or-less predictable next step in type theory. I.e., lots of it was about 'propositional' interpretations, then came the quantifiers, it seems reasonable to expect that the following ideas will come from modal logic. ### Here is the basic challenge Epistemic modal logic is most useful in military applications, where Java rules. Think drones communicating without communicating (e.g. due to signal jamming preventing direct communication and instead relying on pre-defined inference rules to communicate). Good luck. ### Temporal modalities too. My feeling is that for any useful distributed systems work, you'd need both knowledge and temporal logic operators. Since achieving simultaneous coordination or common knowledge is impossible, the type system should be able to allow the notion of eventual common knowledge. That is you should be able to say that E(voted_yes) => ◊ Commit ### Time modalities are useful but not needed From what I’ve been able to find in the literature, you can arrive at eventual common knowledge without an explicit notion of time if you have a protocol for public announcement. Coordination within a fixed time bound is impossible, but in practice it looks like you can safely assume common knowledge if you have some reasonable restrictions that I’d been using anyway. For instance, in my current implementation of these ideas, I have reliable asynchronous channels, so if a message is sent then it will eventually be received; authenticated agents, so you can trust their assertions; and monotonically growing sets of facts, so the assumption of common knowledge will never be contradicted. The authentication bit has been interesting to work on. I’m making this system with the notion of trust and compromise built in, so you can express things like “I know that the private key for agent A was compromised a week ago, so invalidate all knowledge that was derived from the assumption that A was trustworthy since then, roll back all the reversible actions that were taken based on that knowledge, give me a list of all the irreversible actions, and tell me exactly which facts were potentially leaked”.
2020-07-05 08:17:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4222557246685028, "perplexity": 1356.1297419848647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887046.62/warc/CC-MAIN-20200705055259-20200705085259-00031.warc.gz"}