An oligopoly (from Greek ὀλίγος, ''oligos'' "few" and πωλεῖν, ''polein'' "to sell") is a market structure in which a market or industry is dominated by a small number of large sellers or producers. Oligopolies often result from the desire to maximize profits, which can lead to collusion between companies. This reduces competition, increases prices for consumers, and lowers wages for employees. Many industries have been cited as oligopolistic, including civil aviation, electricity providers, the telecommunications sector, rail freight markets, food processing, funeral services, sugar refining, and pulp and paper making.

Most countries have laws outlawing anti-competitive behavior. EU competition law prohibits anti-competitive practices such as price-fixing and manipulating market supply and trade among competitors. In the US, the Department of Justice Antitrust Division and the Federal Trade Commission are tasked with stopping collusion. However, corporations can evade legal consequences through tacit collusion, as collusion can only be proven through actual and direct communication between companies.

It is possible for oligopolies to develop without collusion and in the presence of fierce competition among market participants. This situation resembles perfect competition, except that the oligopolists have their own market structure: each company in the oligopoly holds a large share of the industry and plays a pivotal, unique role. Oligopolies may be particularly pronounced in post-socialist economies. For example, in Armenia, where business elites enjoy oligopoly, 19% of the whole economy is monopolized (BEEPS 2009 database), making it the most monopolized country in the region.
# Types of oligopoly

Commodities in an oligopolistic market fall into two categories:
1. Homogeneous commodities: In the oligopolistic market of a primary industry, such as agriculture or mining, the commodities produced by oligopolistic enterprises are strongly homogeneous.
2. Differentiated commodities: In the manufacturing and service industries, goods are clearly differentiated. For example, different clothing companies appeal to different demographics, and different mobile phone brands offer different functions and appearances.

With few sellers, each oligopolist is likely to be aware of the actions of its competitors. According to game theory, the decisions of one firm influence and are influenced by the decisions of other firms. Strategic planning by oligopolists therefore needs to take into account the likely responses of the other market participants.

Entry barriers include high investment requirements, strong consumer loyalty for existing brands, and economies of scale. These barriers facilitate the formation and sustainability of collusion by stifling competition. Oligopolies can form when several companies expand into large business groups, raising capital to buy smaller companies in the same markets, which increases the profit margins of the business.

The sustainability of collusion is fundamentally tied to the threat of retaliation against deviation. In a market with low entry barriers, price collusion between established sellers leaves them vulnerable to undercutting by new entrants. Recognizing this vulnerability, established sellers may reach a tacit understanding to raise entry barriers and keep new companies out. Even if this requires cutting prices, all incumbent companies benefit, because it reduces the risk of loss created by new competition. Conversely, when more firms join the market, each firm loses less by deviating and thus has more incentive to undercut the collusive price for a short-term gain. The rate at which firms interact also affects the incentive to undercut: where interaction is frequent, the short-term rewards of undercutting are short-lived, because punishment by other firms can be expected swiftly; where interaction is infrequent, those rewards last longer.Ivaldi, M., Jullien, B., Rey, P., Seabright, P., & Tirole, J. (2003).
The economics of tacit collusion. Accordingly, greater market transparency (in this case, the knowledge firms have of the prices and sales quantities of their rivals) decreases deviation from collusion, as oligopolistic companies expect retaliation sooner when changes in their prices and sales quantities are clearly visible to their rivals.

In developed economies, oligopolies dominate, as the perfectly competitive model is of negligible importance for consumers. In particular, oligopolists may fix prices, a practice that lessens buyer choice and raises prices, in order to dominate these markets; most new oligopoly cases prosecuted in the US in 2013 were based on price-fixing. As a quantitative description of oligopoly, the four-firm concentration ratio is often utilized, and it is the most common measure for analyzing market concentration. It expresses, as a percentage, the combined market share of the four largest firms in a particular industry. For example, as of the fourth quarter of 2008, Verizon Wireless, AT&T, Sprint, and T-Mobile together controlled 97% of the US cellular telephone market. These four firms became the top tier of US carriers, protected by government intervention against other firms entering the market. Oligopolistic competition can give rise to a wide range of outcomes.
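As a concrete illustration, the four-firm concentration ratio can be computed directly from a list of market shares. The carrier shares below are invented for the sketch; only their 97% total matches the figure cited above.

```python
# Four-firm concentration ratio (CR4): the combined market share of the
# four largest firms in an industry, expressed as a percentage.

def concentration_ratio(shares, n=4):
    """Sum of the n largest market shares (shares given as percentages)."""
    return sum(sorted(shares, reverse=True)[:n])

# Illustrative shares (percent): the split among the four large carriers is
# an assumption for the example; only the 97% total matches the cited figure.
shares = [31.0, 27.0, 21.0, 18.0, 2.0, 1.0]
print(concentration_ratio(shares))  # 97.0
```

The same function with a different `n` gives other common concentration ratios, such as CR8.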
In some situations, particular companies may employ restrictive trade practices (collusion, market sharing, etc.) in order to inflate prices and restrict production in much the same way that a monopoly does. Whenever there is a formal agreement for such collusion between companies that usually compete with one another, the practice is known as a cartel. A prime example is OPEC, whose member countries manipulate the worldwide oil supply and thereby exert a profound influence on the international price of oil. There are legal restrictions on such collusion in most countries, and regulations against cartels (anti-competitive behaviour) have been enforced since the late 1990s. For example, EU competition law prohibits unreasonable anti-competitive practices, such as directly or indirectly fixing selling prices, manipulating market supply, or controlling trade among competitors, whether by formal contract or oral agreement. In the US, the Antitrust Division of the Justice Department and the Federal Trade Commission were created to fight collusion among cartels.

However, a formal agreement is not a requirement for collusion to take place: tacit collusion can be achieved through mutual understanding among firms, while for collusion to be prosecuted as a crime there must be actual and direct communication between companies. For example, in some industries there may be an acknowledged market leader that informally sets prices to which other producers respond, a practice known as price leadership. Tacit collusion is becoming an increasingly prominent topic in the development of anti-trust law in most countries.

In other situations, competition between sellers in an oligopoly can be fierce, with relatively low prices and high production. Hypothetically, this could lead to an efficient outcome approaching perfect competition. Competition in an oligopoly tends to be greater when there are more competitors in the industry. Theoretically, cartels (anti-competitive behaviors) are harder to sustain in an industry with a larger number of firms, because collusion yields less profit for each firm.Choi, J. P., & Gerlach, H. Forthcoming. Cartels and Collusion: Economic Theory and Experimental Economics. ''Oxford Handbook on International Antitrust Economics'' (Oxford University Press, Oxford, England). Consequently, existing firms may have more incentive to deviate.
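The claim that cartels are harder to sustain with more firms can be made concrete with a standard textbook sketch, not taken from this article: under grim-trigger punishment, $n$ symmetric firms sharing a collusive profit $\Pi$ can sustain collusion only if $(\Pi/n)/(1-\delta) \geq \Pi$, giving a critical discount factor $\delta^* = 1 - 1/n$.

```python
# Critical discount factor for sustaining collusion under grim-trigger
# punishment with n symmetric firms. Collusion condition:
#   (pi/n) / (1 - delta) >= pi   (deviate once, take the whole collusive
#   profit, then earn zero forever), which solves to delta >= 1 - 1/n.
# This is a textbook sketch, not a result stated in the article.

def critical_discount_factor(n):
    """Minimum discount factor at which n-firm collusion is sustainable."""
    return 1.0 - 1.0 / n

for n in (2, 3, 5, 10):
    print(n, critical_discount_factor(n))
# The threshold rises toward 1 as n grows: with more firms, each firm's
# share of the collusive profit shrinks, so firms must be more patient
# for collusion to survive.
```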
However, empirical evidence on this relationship is more ambiguous and mixed. The welfare analysis of oligopolies is therefore sensitive to the parameter values used to define the market's structure; in particular, the level of deadweight loss is hard to measure. The study of product differentiation indicates that oligopolies might also create excessive levels of differentiation in order to stifle competition, as they can gain a degree of market power by offering somewhat differentiated products.

Oligopoly theory makes heavy use of game theory to model the behavior of oligopolies:
* Stackelberg's duopoly. In this model, the firms move sequentially (see Stackelberg competition).
* Cournot's duopoly. In this model, the firms simultaneously choose quantities (see Cournot competition).
* Bertrand's oligopoly. In this model, the firms simultaneously choose prices (see Bertrand competition).
Comparing the Cournot and Bertrand models shows that price competition is the more aggressive and competitive of the two, and also that collusion is easier to sustain under price competition.

# Characteristics

Characteristics of oligopolies include:
; Profit maximization : An oligopoly will maximize its profits.
; Price setting : Oligopolies are price setters rather than price takers.Perloff, J. ''Microeconomics Theory & Applications with Calculus''. page 445. Pearson 2008.
; High barriers to entry and exitHirschey, M. ''Managerial Economics''. Rev. Ed, page 451. Dryden 2000. : The most important barriers are government licenses, economies of scale, patents, access to expensive and complex technology, and strategic actions by incumbent firms designed to discourage or destroy nascent firms.
Additional sources of barriers to entry often result from government regulation favoring existing firms, making it difficult for new firms to enter the market.Negbennebor, A: Microeconomics, The Freedom to Choose CAT 2001
; Few firms : There are so few firms that the actions of one firm can influence the actions of the other firms.
; Abnormal long-run profits : Oligopolies retain abnormal long-run profits. High barriers to entry prevent sideline firms from entering the market to capture excess profits.
; Product differentiation : Products can be homogeneous (steel) or differentiated (automobiles).
; Perfect knowledge : Assumptions about perfect knowledge vary, but the knowledge of various economic factors can be generally described as selective. Oligopolies have perfect knowledge of their own cost and demand functions, but their inter-firm information may be incomplete. Buyers have only imperfect knowledge as to price, cost, and product quality.

## Interdependence

The distinctive feature of an oligopoly is interdependence. Oligopolies are typically composed of a few large firms. Each firm is so large that its actions affect market conditions. Therefore, the competing firms will be aware of a firm's market actions and will respond appropriately. This means that in contemplating a market action, a firm must take into consideration the possible reactions of all competing firms and the firms' countermoves.Colander, David C. Microeconomics 7th ed. Page 288 McGraw-Hill 2008. It is very much like a game of chess, in which a player must anticipate a whole sequence of moves and countermoves in order to determine how to achieve his or her objectives; this is known as game theory. For example, an oligopolist considering a price reduction may wish to estimate the likelihood that competing firms would lower their prices in retaliation, possibly triggering a ruinous price war. If the firm is considering a price increase, it may want to know whether other firms will also increase prices or hold existing prices constant. This anticipation leads to price rigidity, as firms will be willing to adjust their prices and quantity of output only in accordance with a "price leader" in the market. As an example of this interdependence, Texaco must consider whether its own price cut will trigger Shell to match it, in which case any advantage gained by the lower price would be eliminated.
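The Texaco example can be framed as a simple expected-profit calculation: before cutting price, a firm weighs the chance that its rival matches the cut. The profit numbers below are illustrative assumptions, not figures from the text.

```python
# Should a firm cut its price? An illustrative expected-profit sketch.
# All numbers are assumptions for the example.
profit_now = 10.0        # profit if the firm holds its current price
profit_if_matched = 5.0  # rival matches the cut: a price war erodes profit
profit_if_not = 14.0     # rival holds: the cutter gains market share

def expected_profit_of_cut(p_match):
    """Expected profit of cutting, given the probability the rival matches."""
    return p_match * profit_if_matched + (1 - p_match) * profit_if_not

# The cut pays off only if the rival is unlikely to retaliate:
for p in (0.2, 0.5, 0.8):
    print(p, expected_profit_of_cut(p), expected_profit_of_cut(p) > profit_now)
```

With these numbers the break-even probability is 4/9, so a firm expecting retaliation more often than that prefers to hold its price, which is the interdependence-driven rigidity described above.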
This high degree of interdependence, and the need to be aware of what other firms are doing or might do, stands in contrast with the lack of interdependence in other market structures. Simply put, in markets with strong commodity homogeneity every oligopolistic company is reluctant to raise or lower prices. If company A raises its price and B does not, A loses its market almost instantly; if A lowers its price, B will inevitably lower its price too, leading to a price war that ultimately hurts both parties. Since raising or lowering the price does neither firm any good, the best strategy is to keep prices unchanged. The price rigidity caused by this mutual game between oligopolistic enterprises is what interdependence refers to. In a perfectly competitive (PC) market there is zero interdependence, because no firm is large enough to affect the market price: all firms in a ''PC'' market are price takers, and the current market selling price can be followed to maximize short-term profits. In a monopoly, there are no competitors to be concerned about. In a monopolistically competitive market, each firm's effect on market conditions is so negligible that competitors can safely ignore it.

## Non-price competition

Generally speaking, the oligopolistic enterprise with the largest scale and the lowest cost becomes the price setter in the market, setting a price that maximizes its own interests while ensuring that other, smaller enterprises also benefit. Oligopolies tend to compete on terms other than price. Loyalty schemes, advertising, and product differentiation are all examples of non-price competition, which is perceived as less risky and less potentially disastrous for business. In other words, oligopolists can extract more rents (charging prices above the normal competitive level without losing many consumers) by offering differentiated products or promotion efforts. However, collusion among oligopolists is harder to sustain along such non-price dimensions as differentiation, marketing, and product design.

To fight collusion and cartels in oligopoly markets, competition authorities have adopted measures to discover, prosecute, and penalize them effectively.Harrington, J. E. (2006). Behavioral screening and the detection of cartels. ''European competition law annual'', 51-68. Leniency programs and economic analysis (screening) are currently the two most popular mechanisms.

## Leniency program

Competition authorities have a prominent role in prosecuting and penalizing existing cartels and deterring new ones. To this end, they have created an effective tool called the leniency program, which encourages firms to confess their collusive behavior proactively in exchange for immunity from fines, or for plea bargaining where a full reduction is not granted. Leniency programs have been implemented in several countries, including the US, Japan, and Canada. However, abuse of leniency programs can backfire on competition authorities: many cartels remain in society while the expected sanctions for colluding firms drop sharply.
As a result, the total effect of the leniency program is ambiguous, and an optimal leniency program design remains an open question.

## Economic analysis

Two screening methods are currently available to competition authorities: structural and behavioral. Structural screening refers to identifying industry traits or characteristics, such as homogeneous goods, stable demand, and few existing participants, that are prone to cartel formation. Behavioral screening is mainly applied once a cartel has formed or an agreement has been reached: authorities examine firms' data to determine whether price variance is low, or whether there has been a significant price increase or decrease.

# Oligopolies in countries with competition laws

Oligopolies become "mature" when competing entities realize they can maximize profits through joint efforts designed to maximize price control by minimizing the influence of competition. As a result of operating in countries with enforced antitrust laws, oligopolists operate under tacit collusion: collusion through a mutual understanding among the competitors of a market, without any direct communication or contact, that by collectively raising prices each participating competitor can achieve economic profits comparable to those of a monopolist while avoiding any explicit breach of market regulations. Hence, the kinked demand curve for a joint profit-maximizing oligopoly industry can model the pricing decisions of oligopolists other than the price leader (the price leader being the entity that all other entities follow in terms of pricing decisions). If an entity unilaterally raises the price of its good or service and competing entities do not follow, the entity that raised its price loses significant market share, as it faces the elastic upper segment of the demand curve. Because the joint profit-maximizing effort achieves greater economic profits for all participating entities, there is an incentive for an individual entity to "cheat" by expanding output to gain greater market share and profit. When the incumbent entities discover this breach of collusion, competitors in the market retaliate by matching or dropping prices below the original drop, so the market share originally gained by the cheater is minimized or eliminated. This is why, in the kinked demand curve model, the lower segment of the demand curve is inelastic, and why price rigidity prevails in such markets.

# Oligopoly in international trade

International trade increased from $5 trillion USD in 1994 to $24 trillion USD in 2014, and following current trends this number will only grow as an increasing number of firms compete internationally. Unlike domestic oligopolies, international oligopolies have to consider import and export tariffs, as countries have different trade policies. This is described as "strategic trade policy", which uses both the Bertrand and Cournot models as examples of interdependence; game theory is used in theorizing about international trade. The added features are "that oligopolistic firms would treat markets in each country as segmented rather than integrated and the second, that countries had a motive to raise domestic welfare by shifting rents from foreign firms to the domestic economy in the form of higher domestic profits, increased government revenue or above-normal wages" (Head & Spencer, 2017).

# Modeling

There is no single model describing the operation of an oligopolistic market.
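The kinked demand curve described in the previous section is one such model, and its price-rigidity mechanism can be sketched numerically. All numbers below are illustrative assumptions: demand is elastic above the prevailing price and inelastic below it, so marginal revenue has a vertical gap at the kink.

```python
# Kinked demand sketch. Inverse demand branches, continuous at the kink
# (Q0 = 40, P0 = 10); all numbers are illustrative assumptions:
#   elastic branch   (q <= 40): P = 14 - 0.1*q  (flat: raising price loses many buyers)
#   inelastic branch (q > 40):  P = 30 - 0.5*q  (steep: cutting price gains few buyers)
# For a linear branch P = a - b*q, marginal revenue is MR = a - 2*b*q.

Q0 = 40.0

def marginal_revenue(q):
    if q <= Q0:
        return 14.0 - 2 * 0.1 * q   # MR on the elastic branch
    return 30.0 - 2 * 0.5 * q       # MR on the inelastic branch

print(marginal_revenue(Q0), marginal_revenue(Q0 + 1e-9))  # MR jumps down at the kink

def optimal_quantity(mc):
    """Profit-maximizing output: expand while MR still exceeds marginal cost."""
    q = 0.0
    while marginal_revenue(q + 0.01) >= mc:
        q += 0.01
    return round(q, 2)

# Any marginal cost inside the MR gap intersects it at the same quantity,
# so moderate cost shocks leave output and price unchanged (price rigidity):
print(optimal_quantity(5.0), optimal_quantity(2.0))
```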
The variety and complexity of the models exist because two to ten firms can compete on the basis of price, quantity, technological innovations, marketing, and reputation. However, there are a series of simplified models that attempt to describe market behavior under certain circumstances. Some of the better-known models are the dominant firm model, the Cournot–Nash model, the Bertrand model, and the kinked demand model. As different industries have different characteristics, it is important to know which oligopoly model is most applicable to each. One main difference between industries in practice is capacity constraints. Although both the Cournot and Bertrand models can be cast as two-stage games, the Cournot model is more suitable for firms in industries that face capacity constraints, where firms set their quantity of production first and then set their prices; the Bertrand model is more applicable to industries with low capacity constraints, such as banking and insurance.

## Game theory

To find the Nash equilibria of a game, one can either check, for every outcome, whether at least one player could benefit from deviating unilaterally (if no player could, a Nash equilibrium has been found), or derive best-response (reaction) functions, the best action of each player for every feasible action of its rivals, and look for outcomes where the players' actions are best responses to each other, i.e. where the best-response functions intersect (Gerlach, 2022). In the prisoner's dilemma, for example, the best response to "confess" is "confess" and the best response to "not confess" is also "confess", so the unique Nash equilibrium is (confess, confess). Games can also have multiple Nash equilibria, which can be compared by Pareto dominance: one Nash equilibrium Pareto dominates another if at least one player is better off in it and no player is worse off. The classic example is the Battle of the Sexes (Gerlach, 2022).
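The outcome-checking recipe just quoted can be implemented directly as a brute-force search. The payoffs below are an assumed variant of the Battle of the Sexes, chosen so that one equilibrium Pareto dominates the other, matching the quoted example.

```python
# Brute-force pure-strategy Nash equilibrium search: an outcome is an
# equilibrium if no player can benefit from deviating unilaterally.
# Payoffs (row player, column player) are assumed for the sketch.
actions = ("opera", "football")
payoffs = {
    ("opera", "opera"):       (3, 2),
    ("opera", "football"):    (0, 0),
    ("football", "opera"):    (0, 0),
    ("football", "football"): (2, 1),
}

def nash_equilibria():
    eqs = []
    for a, b in payoffs:
        ua, ub = payoffs[(a, b)]
        row_ok = all(payoffs[(a2, b)][0] <= ua for a2 in actions)  # row can't gain
        col_ok = all(payoffs[(a, b2)][1] <= ub for b2 in actions)  # column can't gain
        if row_ok and col_ok:
            eqs.append((a, b))
    return eqs

eqs = nash_equilibria()
print(eqs)  # [('opera', 'opera'), ('football', 'football')]

def pareto_dominates(x, y):
    """x dominates y if no player is worse off and at least one is better off."""
    (xa, xb), (ya, yb) = payoffs[x], payoffs[y]
    return xa >= ya and xb >= yb and (xa > ya or xb > yb)

# With these assumed payoffs, (opera, opera) Pareto dominates:
print(pareto_dominates(("opera", "opera"), ("football", "football")))  # True
```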
“- for both players: best response to opera is opera, best response to football is football - Nash equilibria: (football, football), (opera, opera) - (opera, opera) Pareto dominates” (Gerlach, 2022). ## Cournot–Nash model The Nash model is the simplest oligopoly model. The model assumes that there are two "equally positioned firms"; the firms compete on the basis of quantity rather than price and each firm makes an "output of decision assuming that the other firm's behavior is fixed." The market demand curve is assumed to be linear and marginal costs are constant. To find the Nash equilibrium In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., John Nash, is the most common way to define the solution concept, solution of a non-cooperative game involving two or more players. In a Nash equilibrium, ea ... one determines how each firm reacts to a change in the output of the other firm. The path to equilibrium is a series of actions and reactions. The pattern continues until a point is reached where neither firm desires "to change what it is doing, given how it believes the other firm will react to any change." The equilibrium is the intersection of the two firm's reaction functions. The reaction function shows how one firm reacts to the quantity choice of the other firm. For example, assume that the firm $1$'s demand function is $P = \left(M - Q_2\right) - Q_1$ where $Q_2$ is the quantity produced by the other firm and $Q_1$ is the amount produced by firm $1$, and $M=60$ is the market. Assume that marginal cost is $C_M=12$. Firm $1$ wants to know its maximizing quantity and price. Firm $1$ begins the process by following the profit maximization rule of equating marginal revenue to marginal costs. Firm $1$'s total revenue function is $R_T = Q_1 P = Q_1 \left(M - Q_2 - Q_1\right) = MQ_1 - Q_1 Q_2 - Q_1^2$. The marginal revenue function is $R_M = \frac = M - Q_2 - 2 Q_1$.$R_M = M - Q_2 - 2Q_1$. 
This can be restated as $R_M = \left(M - Q_2\right) - 2Q_1$. Setting marginal revenue equal to marginal cost gives

$R_M = C_M$

$M - Q_2 - 2Q_1 = C_M$

$2Q_1 = \left(M - C_M\right) - Q_2$

$Q_1 = \frac{M - C_M}{2} - \frac{Q_2}{2} = 24 - 0.5 Q_2$ (1.1)

By symmetry, firm $2$'s profit-maximization problem yields

$Q_2 = \frac{M - C_M}{2} - \frac{Q_1}{2} = 24 - 0.5 Q_1$ (1.2)

Equation 1.1 is the reaction function for firm $1$; equation 1.2 is the reaction function for firm $2$. To determine the Nash equilibrium, the two equations are solved simultaneously, which here gives $Q_1 = Q_2 = 16$. The equilibrium quantities can also be determined graphically: the equilibrium solution is at the intersection of the two reaction functions, where the axes of the graph represent quantities. The reaction functions are not necessarily symmetric; the firms may face differing cost functions, in which case the reaction functions, and hence the equilibrium quantities, would not be identical.

## Bertrand model

The Bertrand model is essentially the Cournot–Nash model, except the strategic variable is price rather than quantity.Samuelson, W. & Marks, S. ''Managerial Economics''. 4th ed. page 415. Wiley 2003. The Bertrand model can be used to explain oligopolistic competition by treating the market as two competing firms, for instance firm one against its competitors, with the rest of the market modeled as a single rival firm. The model assumes that the firms sell homogeneous products and therefore have the same marginal production costs, and that they compete on prices simultaneously. The idea is that, after competing on prices for a while, the firms eventually reach an equilibrium in which the price both charge equals their marginal cost of production. The mechanism behind this model is that, by undercutting its rival's price even by a small increment, a firm can capture the entire market. The temptation is therefore very strong, and firms have powerful incentives to undercut their competitors' prices to capture the whole market's profits.
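The undercutting mechanism just described can be sketched in a few lines of code. This is a hypothetical illustration rather than part of the formal model: two firms selling a homogeneous product alternately undercut each other by a fixed step (an assumed discretization) until a further cut would price below marginal cost, which is set to 12 to match the marginal cost used in the Cournot example.

```python
# Hypothetical sketch of Bertrand undercutting dynamics. The starting
# prices and the undercutting step are illustrative assumptions.

MC = 12.0      # common marginal cost
STEP = 0.5     # size of each undercut

def bertrand_undercutting(p_a, p_b, mc=MC, step=STEP):
    """Alternate undercutting moves until neither firm can profitably undercut."""
    prices = [(p_a, p_b)]
    mover_is_a = True
    while min(p_a, p_b) - step >= mc:
        rival = p_b if mover_is_a else p_a
        new_price = rival - step          # undercut the rival to take the market
        if mover_is_a:
            p_a = new_price
        else:
            p_b = new_price
        prices.append((p_a, p_b))
        mover_is_a = not mover_is_a
    return prices

path = bertrand_undercutting(30.0, 28.0)
final_a, final_b = path[-1]
# Prices are driven down until a further undercut would go below MC,
# so both end within one step of marginal cost.
print(final_a, final_b)
```

With a vanishing step size this process converges to the textbook result that both prices equal marginal cost.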
Even though empirical studies suggest that firms can easily earn much higher profits by agreeing to charge a price above marginal cost, highly rational, self-interested firms would still be unable to sustain a price above marginal cost. Bertrand price competition is nevertheless a useful abstraction of markets in many settings. Amongst many different prediction approaches, the Nash equilibrium approach has been recognised by some studies as a relatively efficient analytic tool; however, because it does not capture human behavioural patterns, the approach has been criticised as inaccurate in predicting prices.

The model assumptions are:
* There are two firms in the market
* They produce a homogeneous product
* They produce at a constant marginal cost
* Firms choose prices $P_A$ and $P_B$ simultaneously
* Firms' outputs are perfect substitutes
* Sales are split evenly if $P_A = P_B$

The only Nash equilibrium is $P_A = P_B = MC$. Neither firm has any reason to change strategy: if a firm raises its price, it loses all its customers; if a firm lowers its price below marginal cost ($P < MC$), it loses money on every unit sold. The Bertrand equilibrium is the same as the competitive result: each firm produces where $P = MC$ and earns zero profit. A generalization of the Bertrand model is the Bertrand–Edgeworth model, which allows for capacity constraints and a more general cost function.

## Cournot-Bertrand model

The Cournot and Bertrand models are the most well-known models in oligopoly theory and have been studied and reviewed by numerous economists. The Cournot-Bertrand model is a hybrid of these two models and was first developed by Bylka and Komar in 1976.
This model allows the market to be split into two groups of firms: the first group aims to optimally adjust its output to maximise profits, while the second aims to optimally adjust its prices. The model is not accepted by some economists, who argue that firms in the same industry cannot compete with different strategic variables; however, it has been applied and observed in both real-world examples and theoretical contexts. In the Cournot and Bertrand models it is assumed that all firms compete with the same choice variable, either output or price, but this does not always hold in real-world contexts. If each firm can choose its own strategic variable, there are four possible modes of competition, so the possibility of firms competing with different strategic variables is important to consider when assessing all potential market outcomes. Research by the economists Kreps and Scheinkman demonstrates that differing economic environments are required for firms in the same industry to compete using different strategic variables.

An example of the Cournot-Bertrand model in real life can be seen in the market for alcoholic beverages, where production times differ greatly and so create different economic environments within the market. The fermentation of distilled spirits takes a significant amount of time; therefore, output is set by producers, leaving market conditions to determine price. By contrast, the production of brandy requires minimal time to age, so the price is set by the producers and the supply is determined by the quantity demanded at that price.

## Oligopolistic market: Kinked demand curve model

According to this model, each firm faces a demand curve kinked at the existing price.Pindyck, R. & Rubinfeld, D. ''Microeconomics'' 5th ed. page 446. Prentice-Hall 2001.
The conjectural assumptions of the model are: first, if the firm raises its price above the current existing price, competitors will not follow, and the acting firm will lose market share; second, if a firm lowers its price below the existing price, its competitors will follow to retain their market share, and the firm's output will increase only marginally. In other words, the oligopolist's pricing logic is that competitors will match and respond to any price cut, retaliating to protect their market share, while they will stick with the current price when a competitor raises its price. If the assumptions hold, then:

* The firm's marginal revenue curve is discontinuous (or rather, not differentiable) and has a gap at the kink
* For prices above the prevailing price the curve is relatively elasticNegbennebor, A. ''Microeconomics: The Freedom to Choose''. page 299. CAT 2001
* For prices below the kink the curve is relatively inelastic

The gap in the marginal revenue curve means that marginal costs can fluctuate without changing the equilibrium price and quantity; thus, prices tend to be rigid.

# Examples

Many industries have been cited as oligopolistic, including civil aviation,Adriana Gama, Review of Regulating the Polluters: Markets and Strategies for Protecting the Global Environment by Alexander Ovodenko, ''Global Environmental Politics'', MIT Press, Vol. 19, No. 3, August 2019, pp. 143-145. agricultural pesticides, electricity,Woohyung Lee, Tohru Naito & Ki-Dong Lee, Effects of Mixed Oligopoly and Emission Taxes on the Market and Environment, ''Korean Economic Review'', Vol. 33, No. 2, Winter 2017, pp. 267-294: "we have witnessed mixed oligopolistic markets in a broad range of industries, such as oil, electricity, telecommunications, and power plants that emit pollutants during their respective production processes." and platinum group metal mining.Magnus Ericsson & Andreas Tegen, Brief Report: Global PGM mining during 40 years—a stable corporate landscape of oligopolistic control, ''Mineral Economics'', Vol. 29, pp. 29–36 (2016). In most countries, the telecommunications sector is characterized by an oligopolistic market structure. Rail freight markets in the European Union have an oligopolistic structure. In the United States, industries that have been identified as oligopolistic include food processing,Rigoberto A. Lopez, Xi He & Azzeddine Azzam, Stochastic Frontier Estimation of Market Power in the Food Industries, ''Journal of Agricultural Economics'', Vol. 69, Issue 1 (Feb. 2018), pp. 3-17. funeral services, sugar refining, and pulp and paper making.

Market power and market concentration can be estimated or quantified using several different tools and measurements, including the Lerner index, stochastic frontier analysis, and New Empirical Industrial Organization (NEIO) modeling, as well as the Herfindahl-Hirschman index.
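Of the concentration measures just mentioned, the Herfindahl-Hirschman index is straightforward to compute from market-share data. Below is a minimal sketch with made-up shares for a hypothetical five-firm industry, alongside a four-firm concentration ratio for comparison.

```python
# Illustrative calculation of the Herfindahl-Hirschman index (HHI) and the
# four-firm concentration ratio (CR4). The shares are hypothetical and
# expressed in percent.

def hhi(shares_pct):
    """HHI = sum of squared market shares; 10,000 for a pure monopoly."""
    return sum(s ** 2 for s in shares_pct)

def cr4(shares_pct):
    """Combined market share of the four largest firms."""
    return sum(sorted(shares_pct, reverse=True)[:4])

shares = [30, 25, 20, 15, 10]   # hypothetical five-firm industry
print(hhi(shares))   # 30^2 + 25^2 + 20^2 + 15^2 + 10^2 = 2250
print(cr4(shares))   # 30 + 25 + 20 + 15 = 90
```

A higher HHI indicates a more concentrated market; the index rises both when firms are fewer and when shares are more unequal.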
# Possible outcomes of Oligopoly

One possible outcome of oligopoly is the maintenance of a steady price, as predicted by the kinked demand curve model. Firms in this situation concentrate their efforts on non-price competition. The kinked demand curve model suggests that prices will be relatively stable and that firms will have little motivation to adjust them in the near future; as a result, firms compete using strategies other than price competition. The firms participating in this market system are motivated by the desire to maximize their profits, and profit is maximized where $MR = MC$. A firm could gain a significant rise in market share by reducing its price, but rivals are unlikely to accept this: other firms follow suit and reduce their prices as well, so demand grows only by a marginal amount and is therefore inelastic with respect to a price reduction. By contrast, a firm that raises its price is likely to lose a significant portion of the market, since it becomes uncompetitive compared with other firms, so demand is very elastic in response to price increases. Rather than assuming price rigidity, kinked demand strategies can also serve as a mechanism for enforcing compliance with a collusive price leadership strategy.

Another possible outcome of oligopoly is the price war.
However, despite suggestions that price wars might be unproductive for the businesses involved, Schendel and Balestra contend that at least some players in a price war can profit from their participation. Oligopolies can nevertheless see fierce price competition among their members, especially when firms want to expand their market share, competing with one another to reduce costs and win customers. A common aspect of oligopolies is the ability to engage in price competition selectively: when it comes to bread and special offers, supermarkets often fight on price, but for products such as yogurt they charge a premium.

# Demand curve

In an oligopoly, firms operate under imperfect competition. With the fierce price competitiveness created by this sticky-upward demand curve, firms use non-price competition in order to accrue greater revenue and market share.
"Kinked" demand curves are similar to traditional demand curves in that they are downward-sloping. They are distinguished by a hypothesized convex bend with a discontinuity at the bend, the "kink". Thus the first derivative at that point is undefined, which leads to a jump discontinuity in the marginal revenue curve. Classical economic theory assumes that a profit-maximizing producer with some market power (whether due to oligopoly or monopolistic competition) will set marginal costs equal to marginal revenue. This idea can be envisioned graphically as the intersection of an upward-sloping marginal cost curve and a downward-sloping marginal revenue curve (downward-sloping because the more one sells, the lower the price must be, so the less a producer earns per unit). In classical theory, any change in the marginal cost structure (how much it costs to make each additional unit) or the marginal revenue structure (how much people will pay for each additional unit) is immediately reflected in a new price and/or quantity sold of the item. This result does not occur if a "kink" exists.
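The role of the kink can be illustrated numerically. The sketch below uses hypothetical demand parameters: an upper (relatively elastic) segment P = 80 - Q and a lower (relatively inelastic) segment P = 100 - 2Q, kinked at Q = 20, P = 60. Marginal revenue then jumps from 40 to 20 at the kink, so any marginal cost falling inside that gap leaves the profit-maximizing price and quantity unchanged.

```python
# Numerical sketch of a kinked demand curve with made-up parameters.
# Upper segment: P = 80 - Q (Q <= 20); lower segment: P = 100 - 2Q (Q > 20).
# The marginal revenue gap at the kink is (20, 40).

def price(q):
    return 80 - q if q <= 20 else 100 - 2 * q

def profit(q, mc):
    return (price(q) - mc) * q

def best_quantity(mc, grid=None):
    # Brute-force grid search over Q in (0, 45]
    grid = grid or [q / 10 for q in range(1, 451)]
    return max(grid, key=lambda q: profit(q, mc))

# Marginal cost can move anywhere inside the MR gap without shifting the
# profit-maximizing quantity away from the kink at Q = 20.
for mc in (22, 30, 38):
    q_star = best_quantity(mc)
    print(mc, q_star, price(q_star))   # quantity stays at 20, price at 60
```

This reproduces the price-rigidity result: within the marginal revenue gap, cost shocks leave the equilibrium at the kink.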
Because of this jump discontinuity in the marginal revenue curve, marginal costs could change without necessarily changing the price or quantity. The motivation behind the kink is the idea that in an oligopolistic or monopolistically competitive market, firms will not raise their prices, because even a small price increase will lose many customers: competitors will generally ignore price increases, in the hope of gaining a larger market share as a result of now having comparatively lower prices (price rigidity). However, even a large price decrease will gain only a few customers, because such an action will begin a price war with other firms. The curve is therefore more price-elastic for price increases and less so for price decreases. The theory predicts that firms will enter the industry in the long run, since the market price for oligopolists is more stable or 'focal' in the long run under this kinked demand curve situation.

# See also

* Big business
* Conjectural variation
* Market failure
* Monopoly
* Monopsony
* Oligopolistic reaction
* Oligopsony
* Perfect competition
* Planned obsolescence
* Prisoner's dilemma
* Simulations and games in economics education
* Swing producer
* Unfair competition

# References

Gerlach, H. (2022). Section 2 - Basic Concepts. Lecture, Brisbane: University of Queensland.

# Further reading

* Bayer, R. C. (2010). Intertemporal price discrimination and competition. ''Journal of Economic Behavior & Organization'', 73(2), 273–293.
* Harrington, J. (2006). Corporate leniency programs and the role of the antitrust authority in detecting collusion. ''Competition Policy Research Center Discussion Paper, CPDP-18-E''.
* Ivaldi, M., Jullien, B., Rey, P., Seabright, P., & Tirole, J. (2003). The economics of tacit collusion.
http://math.hawaii.edu/wordpress/graduate-application/
The typical requirement for admission to the graduate program is the completion of a standard undergraduate program in mathematics. The candidate will generally be expected to know linear algebra, the elements of abstract algebra, and elementary real analysis. A student whose degree has been awarded in some other field may be considered if they have had the appropriate background courses. Students should also have current GRE scores and are strongly encouraged to submit a personal statement describing their reasons for pursuing a graduate degree in math. Applicants for graduate study should have three letters of recommendation sent to the Department of Mathematics, 2565 McCarthy Mall, Honolulu, HI 96822; the letters may also be emailed to the Graduate Chair at gradchair@math.hawaii.edu. There is no recommendation form required besides the letters.

## The diagnostic examination

Incoming graduate students will be required to take a diagnostic examination in undergraduate mathematics. This exam, given by the Mathematics Department, consists of two written parts: linear and abstract algebra, and elementary real analysis (not including measure theory). The exam is used to help plan the student's graduate program, and is usually given over the summer.

Graduate Assistantships are available and provide a tuition waiver and a stipend ranging from approximately $17,500 to $19,000 for the academic year. At any given time, about three quarters of our graduate students are supported by teaching assistantships. Some faculty members also have grants with funds to support students through Research Assistantships. Most graduate assistants teach recitation sections for pre-calculus and calculus courses, though other options exist: tutoring, grading, teaching a class, or assisting a professor. Note that tuition is waived for graduate assistants, but they are not exempt from other fees listed in the Graduate Bulletin.
A separate application for graduate assistantships may be downloaded and should be returned to the Mathematics Graduate Chair before February 1 (for Fall admission) or August 1 (for Spring admission). The Graduate Division requires that Graduate Assistants carry a minimum of six credits per semester, which may include research and seminar courses.

### Tuition and fees for graduate students (subject to change without notice)

Tuition and fees are charged according to the number of credit hours carried by the student; auditors (those enrolled in a course for no credit) pay the same fees as students enrolled for credit. For tuition purposes only, a full-time student is any student enrolled for 8 or more credit hours. However, many mathematics students take only 6 credit hours per semester, to allow time for teaching and research.

### Housing

Almost all room assignments to on-campus residence halls go to Hawaii residents, who have priority. There are limited facilities on campus for married students.

### For Information

University of Hawaii
Department of Mathematics
2565 McCarthy Mall
Honolulu, HI 96822-2273
Phone: (808) 956-7951
Fax: (808) 956-9139
https://kevinkotze.github.io/ts-6-unit-roots/
Many economic and financial time series exhibit some form of trending behaviour. Typical examples in economics include measures of output and employment, while in finance, examples consist of asset prices, dividends and financial market indices. The presence of deterministic or stochastic trends may induce nonstationary behaviour in a variable, which has important consequences for the construction and estimation of time series models. For example, a simple plot of the FTSE/JSE stock market index would suggest that the time series is not stationary, as the mean would appear to depend on time. If either the data or the models are not conditioned to account for this phenomenon, standard classical regression techniques (such as the use of ordinary least squares) would be inappropriate. Many early approaches to containing the effects of trends in macroeconomic data would augment the specification of the model with a deterministic time trend. However, Nelson and Plosser (1982) show that this strategy could represent a misspecification of the dynamics of the model, and argue that accounting for the stochastic trend in many macroeconomic variables would be more appropriate. As we will see, it is important to distinguish between those variables that may contain a deterministic or stochastic trend, as the transformations that are required to induce stationarity differ from one another. The identification of a unit root process has also attracted much interest in the empirical literature, as a random walk process may be regarded as a prototype for various economic and financial hypotheses (i.e. it can be used to test the efficient market hypothesis or exchange rate overshooting). In addition, the use of Bayesian estimation techniques has also allowed for many interesting estimation strategies that could be used to describe the long-run behaviour of variables.
# 1 An intuitive example

Early studies that consider the effects of incorporating variables that contain a stochastic trend in a regression model include the work of Yule (1926), which considers the possible relationship between mortality and marriage. The data for this study is sampled at an annual frequency for England & Wales over the period 1866 to 1911 and is displayed in Figure 1. The trends in these variables would suggest that both mortality rates and the number of marriages have decreased over time.

After regressing the data for marriages on mortality, we are able to produce the results in Table 1, where we note that the regressors have extremely large $$t$$-values. In addition, the joint explanatory power of the regression is relatively high, as the coefficient of determination is 0.9. This would suggest that there is a strong relationship between these two variables. However, after taking the first difference of the variables, which may be used to describe the change in the rate of mortality and in the total number of marriages, the results from the regression would suggest that there is no relationship between these variables. Table 2 includes the results of this regression, where we note that the measures for the significance of the regressor and the coefficient of determination are extremely small.

Figure 1: Mortality and Marriage

Dependent Variable: Mortality

|          | Coefficient | Std. Error | t-value | prob  |
|----------|-------------|------------|---------|-------|
| constant | -13.88      | 1.57       | -8.82   | 0.000 |
| marriage | 0.04        | 0.00       | 20.51   | 0.000 |

R2 = 0.91

Table 1: Regression results for mortality and marriage - Levels

Dependent Variable: $$\Delta$$ Mortality

|                     | Coefficient | Std. Error | t-value | prob  |
|---------------------|-------------|------------|---------|-------|
| constant            | -0.133      | 0.210      | -0.63   | 0.531 |
| $$\Delta$$ marriage | 0.011       | 0.043      | 0.27    | 0.788 |

R2 = 0.001

Table 2: Regression results for mortality and marriage - First difference

In what follows we describe the differences that may exist in variables that have either a deterministic or a stochastic trend.
Thereafter, we consider some of the consequences for time series analysis if these features of the data are ignored or misinterpreted. We also consider the use of autocorrelation functions that are used to describe the degree of persistence, before we consider more formal tests for the presence of a unit root. The tests that are described in this chapter are by no means exhaustive, and we refer the reader to Perron (2006) and Haldrup and Jansen (2006) for surveys.

# 2 Deterministic trend

As was noted in the introduction, many time series variables contain a trend, which may be either deterministic or stochastic. Hence, if we ignore the effect of a seasonal component, the variable $$y_t$$ is comprised of the following dynamic components, $\begin{eqnarray} \nonumber y_t = \text{trend} + \text{stationary component} + \text{irregular} \end{eqnarray}$ When the trend is deterministic, we would know the value of this component at each and every point in time with absolute certainty. To remove the deterministic trend from such a variable, we would need to regress $$y_t$$ on time, where $$t = \{1, 2, 3, \dots, T \}$$. This type of regression model could be structured as, $\begin{eqnarray} \nonumber y_t = \alpha t + \varepsilon_t \end{eqnarray}$ Note that the residuals in this case, $$\varepsilon_t$$, would be free of the deterministic component and could be used for further analysis. To show that a variable with a deterministic trend is non-stationary, we note that a stationary univariate time series process could be written as the moving average, $\begin{eqnarray} \nonumber y_{t}=\varepsilon_{t}+\theta_{1}\varepsilon_{t-1}+\theta_{2} \varepsilon_{t-2}+\theta_{3}\varepsilon_{t-3}+ \ldots \end{eqnarray}$ where $$\varepsilon_{t} \sim \mathsf{i.i.d.} \mathcal{N}\left( 0,\sigma^{2}\right)$$ and $$t=\{1,2,\ldots,T\}$$. As noted previously, such a variable has a constant mean and variance, which do not depend on time.
After introducing a deterministic time trend, the variable $$y_t$$ may be expressed as $\begin{eqnarray} y_{t}=\alpha t+\theta(L)\varepsilon_{t} \tag{2.1} \end{eqnarray}$ where the lag polynomial takes the form, $$\theta(L)=1+\theta_{1}L$$ $$+\theta_{2}L^{2}+\theta_{3}L^{3}+ \ldots$$, and the deterministic trend is simply the time index, $$t$$, with a slope parameter, $$\alpha$$. Since the expected value of all the white noise errors is zero, the expected mean of the variable would be, $\begin{eqnarray} \nonumber \mathbb{E}\left[ y_{t}\right] =\alpha t \end{eqnarray}$ which would clearly depend on time. However, if we remove the expected time-varying mean from $$y_{t}$$, then we are able to show that deviations from the expected mean are stationary $\begin{eqnarray*} y_{t}-\mathbb{E}\left[ y_{t}\right] & =& \alpha t+\theta(L)\varepsilon_{t}-\left( \alpha t\right) \\ & =& \theta(L)\varepsilon_{t} \end{eqnarray*}$ Therefore, this time series variable includes a stationary component and a deterministic trend. We could show that this variable would return to a point on the deterministic trend after a stochastic shock (i.e. an irregular innovation). For this reason we call these variables trend-stationary (TS). In addition to linear trends, an economic process may include a nonlinear trend. For example, we may wish to include a quadratic polynomial for the trend to describe a variable that characterises increasing returns to scale. A model that takes various orders of nonlinear deterministic trends could then take the form, $\begin{eqnarray} \nonumber y_t = \mu + \alpha_1 t + \alpha_2 t^2 + \alpha_3 t^3+ \ldots + \alpha_n t^n + \varepsilon_t \end{eqnarray}$ To test for the inclusion of these nonlinear trends we would usually estimate a selection of different models and then compare the goodness-of-fit with the aid of various information criteria. 
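The detrending regression described above can be sketched in a few lines. Assuming a linear trend with slope $$\alpha = 0.5$$ (an illustrative value) and using NumPy for the least-squares step, the following simulates a trend-stationary series, regresses it on a constant and time, and recovers stationary residuals:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
t = np.arange(1.0, T + 1)

# Trend-stationary series: y_t = alpha * t + e_t with alpha = 0.5.
alpha = 0.5
y = alpha * t + rng.normal(size=T)

# Regress y on a constant and t; the residuals are free of the trend.
X = np.column_stack([np.ones(T), t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
slope = beta[1]   # close to the true alpha
```

The residuals fluctuate around zero with a constant variance, which is exactly what the trend-stationary (TS) label describes.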
# 3 Stochastic trend

The counterpart to a deterministic trend is a stochastic trend, and as we will see below, the time series properties of a variable that has a deterministic trend are very different to those of one that has a stochastic trend. There are many examples of economic variables that have stochastic trends, where the values of variables are permanently affected by a shock (or innovation).3

## 3.1 Random walk

The simplest model of a variable with a stochastic trend is the random walk, which depends on past values of itself and Gaussian white noise errors, $\begin{eqnarray} y_{t}=y_{t-1}+\varepsilon_{t} \;\;\; \text{where } \; \varepsilon_{t}\sim \mathsf{i.i.d.} \mathcal{N}\left(0,\sigma^{2}\right) \tag{3.1} \end{eqnarray}$ These random walk processes have a number of interesting features, where the best forecast of $$y_{t+1}$$ at time $$t$$ is given by $\begin{eqnarray}\nonumber \mathbb{E}\left[y_{t+1}|\;\;y_{t}\right] =y_{t} \end{eqnarray}$ The mean square error (MSE), which describes the forecast error variance of this process, would grow with the forecast horizon, $$h$$, $\begin{eqnarray}\nonumber \acute{\sigma}_{y}\left(h\right)=\mathsf{var}\left( y_{t+h}-\mathbb{E}\left[ y_{t+h}| y_{t}\right] \right) =\sigma^{2}h \end{eqnarray}$ Note that in this instance, the forecasting horizon may be used to denote the progression of time, where the difference between a two and a one step-ahead forecast is one period of time. With the aid of this expression, we would suggest that the variance depends on time, which would imply that the process is nonstationary. This may be confirmed with the aid of a recursive substitution exercise.
For the random walk model, $$y_{t}=y_{t-1}+\varepsilon_{t}$$, we may recursively substitute lagged values of $$y_t$$ to describe the evolution of the process, $\begin{eqnarray}\nonumber y_{t} & =& y_{t-1}+\varepsilon_{t}\\ \nonumber & =& y_{t-2}+\varepsilon_{t-1}+\varepsilon_{t}\\ \nonumber & =& y_{t-3}+\varepsilon_{t-2}+\varepsilon_{t-1}+\varepsilon_{t}\\ \nonumber & \vdots & \\ \nonumber y_{t} & =& \overset{t-1}{\underset{j=0}{\sum}}\varepsilon_{t-j}+y_{0} \end{eqnarray}$ Therefore each shock, $$\varepsilon_{t-j}$$, will influence subsequent values of $$y_{t}$$. This would imply that a shock to a random walk has a permanent effect on the time series variable. Alternatively, we may infer that these variables have infinite memory. If we assume that $$y_{0}$$ is equal to zero, without any loss of generality, we can write the random walk as, $\begin{eqnarray} \nonumber y_{t}=\overset{t-1}{\underset{j=0}{\sum}}\varepsilon_{t-j} \end{eqnarray}$ This would allow us to write the mean and variance of the random walk as $\begin{eqnarray}\nonumber \mathbb{E}\left[y_{t}\right]=0 \;\;\; \text{and } \;\; \mathsf{var}\left( y_{t}\right) =\sigma^{2}t \end{eqnarray}$ The covariance, $$\gamma_{t-j}$$, between $$y_t$$ and $$y_{t-j}$$ with $$y_0=0$$, would then be $\begin{eqnarray} \nonumber \mathbb{E}\big[(y_t - y_0)(y_{t-j}-y_0)\big] & = & \mathbb{E} \big[(\varepsilon_t + \varepsilon_{t-1} + \ldots + \varepsilon_1) (\varepsilon_{t-j} + \varepsilon_{t-j-1} + \ldots + \varepsilon_{1})\big]\\ \nonumber & = & \mathbb{E}\big[(\varepsilon_{t-j})^2 + (\varepsilon_{t-j-1})^2 + \ldots + (\varepsilon_1)^2\big]\\ \nonumber & = & (t-j)\sigma^2 \end{eqnarray}$ which also depends on time. Hence, since the variance and covariance of the process depend on time, the random walk is certainly nonstationary.
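The result that $$\mathsf{var}(y_t) = \sigma^2 t$$ can be checked by simulation. The sketch below (sample sizes and seeds are arbitrary choices) generates a large number of independent random walk paths and compares the cross-sectional variance at two dates with the theoretical values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, T = 20000, 200
sigma = 1.0

# Many independent random walk paths, each starting at y_0 = 0.
eps = rng.normal(scale=sigma, size=(n_paths, T))
y = np.cumsum(eps, axis=1)

# The cross-sectional variance of y_t should be close to sigma^2 * t.
var_50 = y[:, 49].var()      # theoretical value: 50
var_200 = y[:, 199].var()    # theoretical value: 200
```

The variance grows linearly in $$t$$, in line with the derivation above, which is precisely the sense in which the random walk is nonstationary.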
With such a process, the effect of a change in the error term in $$t-j$$ will continue to affect $$y_{t}$$, and the roots of the linear difference equation would contain a unitary element, which implies that such a process has a unit root. These processes are also termed difference-stationary (DS), as taking the first difference of the random walk would yield, $\begin{eqnarray}\nonumber y_{t} & =&y_{t-1} + \varepsilon_{t}\\ \nonumber \Delta y_{t} & =&\varepsilon_{t} \end{eqnarray}$ where $$\Delta y_{t}$$ is clearly stationary, as the expected mean and variance of the white noise error do not depend on time. If a variable, $$y_{t}$$, can be made stationary after differencing it once, it is integrated of order one. We use the notation $$I(1)$$ to describe such a process. Stationary random variables, such as $$\Delta y_{t}$$, are thus integrated of order zero (i.e. $$\Delta y_{t}$$ is $$I(0)$$). If it is necessary to take the second difference to achieve stationarity, such that $$\Delta^2 y_t$$ is $$I(0)$$, then the process is integrated of order two, where we would use the notation $$I(2)$$.

## 3.2 Random walk with drift

Adding a constant term to the random walk model in equation (3.1) results in a random walk with drift, which may be expressed as, $\begin{eqnarray}\nonumber y_{t}= \mu + y_{t-1}+\varepsilon_{t} \end{eqnarray}$ Using recursive substitution we can show that the random walk with drift can be written as a function of a deterministic trend and a stochastic term, $\begin{eqnarray} \nonumber y_{t} & =&\mu+y_{t-1}+\varepsilon_{t}\\ & =&\mu+(y_{t-2}+\mu+\varepsilon_{t-1})+\varepsilon_{t}\nonumber\\ & =&2\mu+(y_{t-3}+\mu+\varepsilon_{t-2})+\varepsilon_{t-1}+\varepsilon_{t}\nonumber\\ & \vdots & \nonumber\\ y_{t} & =&\mu \cdot t+\overset{t-1}{\underset{j=0}{\sum}}\varepsilon_{t-j} \tag{3.2} \end{eqnarray}$ where we again assume that the starting value, $$y_{0}$$, is equal to zero.
In contrast with the random walk model in equation (3.1), a random walk with drift now also contains a deterministic trend, which results from the inclusion of the constant term, $$\mu$$, that influences the slope of the deterministic trend. However, in contrast with the trend-stationary model, the deviations from the deterministic trend are not stationary. This would imply that each $$\varepsilon_{t-j}$$ will influence the value of $$y_{t}$$, even after removing the deterministic trend from the series.

# 4 Implications of nonstationarity

When a time series process is stationary, we noted that one is able to recover the infinite moving average form of an autoregressive process. In addition, a shock to such a process would only have a temporary effect on future values of the process, where the expected mean, variance and covariance do not depend on time. In contrast with these properties, a shock to a nonstationary time series process would have a permanent effect on the future values of the process. In addition, it was also noted that the expected variance, covariance and/or mean would depend on time. This finding may be substantiated with the results of the impulse response functions in Figure 2, where we have included the results from several autoregressive processes.

Figure 2: Impulse response functions for autoregressive processes

If a time series is trend-stationary, the expected mean value will depend on time, which would imply that it is nonstationary. We can simply remove the effects of this trend by regressing it on time, and as a result the residuals will be stationary. However, after removing a deterministic trend from a random walk with drift, we are left with a random walk process, which will continue to display nonstationary behaviour. Examples of all of these processes are contained in Figure 3, where we have also included the results from stationary AR(1) processes that have coefficients of $$\phi= 0.8$$ and $$\phi= 0.4$$.
Figure 3: Simulated time series processes

Time series variables that have a unit root can be transformed into stationary variables by taking the first difference of the data. At this point it is worth noting that when taking the first difference of a process that has only a deterministic trend, we could introduce a unit root in the moving average component. For example, consider the following trend-stationary process, $\begin{eqnarray} \nonumber y_t = \alpha t + \varepsilon_t \end{eqnarray}$ where the lag could be represented by, $$y_{t-1} = \alpha (t-1) + \varepsilon_{t-1}$$. The first difference of the above trend-stationary process could then take the form, $\begin{eqnarray}\nonumber \Delta y_t = \alpha + \varepsilon_t - \varepsilon_{t-1} \end{eqnarray}$ where the previous shock, $$\varepsilon_{t-1}$$, now enters the solution in full. Hence, we have introduced a unit root into the moving average component, which means that the differenced process is not invertible; such over-differencing creates problems of its own. It is worth noting that this result is very different to the one that would arise when the underlying time series has both a unit root and a deterministic trend. Consider, by way of example, the following process that has both deterministic and stochastic components: $\begin{eqnarray} \nonumber y_t = \alpha t + y_{t-1} + \varepsilon_t \end{eqnarray}$ To transform this process, we would first subtract $$y_{t-1}$$ from both sides, which would leave us with the following time series process, $\begin{eqnarray}\nonumber \Delta y_t = \alpha t + \varepsilon_t \end{eqnarray}$ To then remove the deterministic trend we could regress $$\Delta y_t$$ on a variable that has a deterministic trend (i.e. $$x = 1,2,3,\ldots$$), which would provide us with a stationary residual, which in this case would be white noise, as there was no other stationary component in the original $$y_t$$ process.
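The effect of over-differencing a trend-stationary process shows up in the first-order autocorrelation of $$\Delta y_t = \alpha + \varepsilon_t - \varepsilon_{t-1}$$, which is $$-\sigma^2 / 2\sigma^2 = -0.5$$. A brief simulation confirms this (the `acf1` helper and the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100_000
t = np.arange(1, T + 1)

# Trend-stationary process: deterministic trend plus white noise (no unit root).
y = 0.3 * t + rng.normal(size=T)

# Over-differencing: dy_t = 0.3 + e_t - e_{t-1}, an MA(1) with a unit MA root.
dy = np.diff(y)

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return (x[1:] * x[:-1]).sum() / (x * x).sum()

rho1 = acf1(dy)   # theoretical value: -0.5
```

A strongly negative first-order autocorrelation in a differenced series is therefore a useful warning sign that the series may have been over-differenced.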
We are then able to conclude that when a process has both a deterministic trend and a stochastic trend, it would be appropriate to take the first difference if we are looking to transform the variable into a stationary process. However, if such a process only has a deterministic trend (and not a unit root), then taking the first difference would introduce a unit root into the moving average term, and the resulting process would not be invertible.

## 4.1 The autocorrelation function

As has been noted previously, the autocorrelation function may be used to describe the persistence in a process. When we are modelling a stationary AR(1) process, the first correlation coefficient, $$\rho_1$$, is equivalent to the coefficient in the AR(1) model, $$\phi$$. Similarly, the second correlation coefficient, $$\rho_2$$, is equivalent to $$\phi^2$$. The subsequent values of the correlation coefficient, $$\rho_j$$, may be derived from the more general expression that considers the value of the covariance function, which is divided by the product of the standard deviation of $$y_t$$ and the standard deviation of $$y_{t-j}$$. Hence, for a random walk the standard deviation of $$y_t$$ may be derived from $$\sqrt{\mathsf{var}(y_t)} = \sqrt{ t\sigma^2}$$. In addition, the standard deviation of $$y_{t-j}$$ may be similarly derived from, $$\sqrt{\mathsf{var}(y_{t-j})} = \sqrt{(t-j)\sigma^2}$$. The autocorrelation coefficient may then be derived from, $\begin{eqnarray} \nonumber \rho_j & = & (t-j)\sigma^2 / \sqrt{(t-j)\sigma^2} \sqrt{(t)\sigma^2} \\ \nonumber & = & (t-j) / \sqrt{(t-j)t}\\ \nonumber & = & \sqrt{(t-j) / t} \;\;\;\; < 1 \end{eqnarray}$ When the sample size $$t$$ is large relative to the value of $$j$$, the ratio $$(t-j)/t$$ is approximately equal to unity. However, it will in all instances be less than 1.
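The slow decay implied by $$\sqrt{(t-j)/t}$$ can be illustrated with the sample autocorrelations of a simulated random walk. In the sketch below (the `sample_acf` helper is my own, not a standard library function), the autocorrelations at lags 1, 5 and 20 all remain close to one:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000
y = np.cumsum(rng.normal(size=T))   # simulated random walk

def sample_acf(x, j):
    """Sample autocorrelation of x at lag j."""
    x = x - x.mean()
    return (x[j:] * x[:-j]).sum() / (x * x).sum()

# The sample autocorrelations die out very slowly, in line with
# rho_j = sqrt((t - j) / t).
rho = [sample_acf(y, j) for j in (1, 5, 20)]
```

A highly persistent but stationary AR(1) process (say $$\phi = 0.95$$) would produce a visually similar pattern, which is exactly why the autocorrelation function alone cannot settle the question of a unit root.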
This is rather unfortunate, as it would imply that we are not able to use the autocorrelation function to distinguish between a process that has a unit root and an AR(1) process that is stationary but has a high degree of persistence. Such a slowly decaying autocorrelation function indicates that the process has a large characteristic root, where the process may possibly include a true unit root, a deterministic trend, or both of these features. In addition, such a slowly decaying autocorrelation function could also suggest that the process is stationary, but somewhat persistent. Furthermore, since the value of $$\rho_1$$ is equivalent to the $$\hat{\phi}$$ coefficient estimate in the AR(1) model, the parameter estimate is biased, as it generates a value that is less than unity.

Figure 4: Autocorrelation functions for simulated processes

Examples of these processes are included in Figure 4, where we note that it would be difficult to use the autocorrelation function to distinguish between the various processes. Formal tests would therefore be required to determine whether the series contains a deterministic or a stochastic trend, both of these features, or neither of them.

# 5 Tests for unit root

Several tests have been developed to test the order of integration of a time series. In what follows, these tests have been separated into three groups. The first group of tests investigate the null hypothesis of a unit root against the alternative of stationarity, where the alternative could be stationarity in levels or around a deterministic trend (trend-stationarity). The second group of tests also consider the null hypothesis that there is a unit root, but allow for structural breaks that may prevail at a given point in time, or where the existence of such a break is unknown.
The final group of test statistics investigate the null hypothesis that the process is stationary, against the alternative that the process has a unit root.

## 5.1 Dickey-Fuller & Augmented Dickey-Fuller test

The most widely used test for the presence of a unit root was originally proposed by Dickey and Fuller (1979), which tests the null hypothesis of whether a series is a random walk against the alternative that it is stationary. To perform this test, we assume that we have an AR(1) process, $\begin{eqnarray} \nonumber y_{t}=\phi y_{t-1}+\varepsilon_{t} \;\;\; \text{where } \; \varepsilon_{t}\sim\mathsf{i.i.d.} \mathcal{N}\left(0,\sigma^{2}\right) \end{eqnarray}$ With the use of this equation, we would want to determine whether $$|\phi|=1$$, against the alternative that $$|\phi|<1$$. If $$|\phi|=1$$ then the above model would represent a random walk process, while if $$|\phi|<1$$, the above process is stationary. As noted above, the value of the autocorrelation coefficient, $$\rho_1$$, and the estimated value of $$\hat{\phi}$$ would be biased towards a value that is less than one when the underlying data generating process contains a unit root.^[A simulation study that is used to illustrate this property of integrated data is provided in the appendix to this chapter.] Hence, when comparing the results of a near unit root with that of a true unit root, we are primarily interested in determining the degree of certainty with which this coefficient has been estimated. This information may be obtained from the $$t$$-statistic that is associated with the coefficient estimate. Dickey and Fuller (1979) make use of the following test regression that is derived from the AR(1) model, $\begin{eqnarray} \nonumber y_{t}&=&\phi y_{t-1}+\varepsilon_{t}\\ \nonumber y_{t} - y_{t-1} &=&\phi y_{t-1} - y_{t-1}+\varepsilon_{t}\\ \nonumber \Delta y_{t}&=& (\phi -1) y_{t-1}+\varepsilon_{t}\\ \Delta y_{t}&=&\pi y_{t-1}+\varepsilon_{t} \tag{5.1} \end{eqnarray}$ where $$\pi=\phi-1$$.
Thus, using equation (5.1), the test for a unit root would simply involve an investigation into the value of the $$\pi$$ parameter, where $\begin{eqnarray} \nonumber H_{0}\; :\pi=0 \end{eqnarray}$ If the null hypothesis is satisfied, this would imply that $$y_{t}$$ is integrated of order one, such that $$y_{t}\sim I(1)$$. The alternative hypothesis would then take the form, $\begin{eqnarray} \nonumber H_{1}\; :\pi<0 \end{eqnarray}$ which implies that $$y_{t}$$ is stationary, such that $$y_{t}\sim I(0)$$. This testing procedure would imply that we would be looking to derive the $$t$$-statistic that is associated with the $$\pi$$ parameter, which considers whether this parameter is significantly different from zero. Therefore, the test for the null hypothesis, $$H_{0}$$, may be expressed as, $\begin{eqnarray} \nonumber \hat{t}_{DF}=\frac{\hat {\pi}}{SE\left(\hat{\pi}\right)} =\frac{\hat{\phi}-1}{SE\left(\hat{\phi}\right)} \end{eqnarray}$ where $$SE$$ denotes the standard error that is associated with the coefficient estimate. The Dickey-Fuller test is a one-sided test, since the alternative to the null hypothesis is that $$y_{t}$$ is stationary (i.e. $$\phi < 1$$).4 Note however, that the asymptotic distribution for this $$t$$-statistic is non-Gaussian, owing to the possible inclusion of bias in the parameter estimate. This would imply that we cannot use the critical values from the standard $$t$$-distribution. The relevant critical values are included in the work of Dickey and Fuller (1979), Dickey and Fuller (1981) and MacKinnon (1991).5 The above test describes the procedure for investigating whether the null hypothesis assumes a unit root, while the alternative hypothesis is that of stationarity. This is appropriate for time series that do not drift systematically in any direction.
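The test regression in equation (5.1) can be computed directly with ordinary least squares. The sketch below (my own implementation, named `df_stat` for illustration) estimates $$\hat{\pi}$$ without deterministic terms and returns the $$t$$-statistic, which must be compared against Dickey-Fuller rather than Student-$$t$$ critical values (roughly $$-1.95$$ at the 5% level for this specification, under the tabulated asymptotic values):

```python
import numpy as np

def df_stat(y):
    """Dickey-Fuller t-statistic from the regression in (5.1), with no
    deterministic terms: Delta y_t = pi * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    pi_hat = (ylag @ dy) / (ylag @ ylag)         # OLS slope
    resid = dy - pi_hat * ylag
    s2 = (resid @ resid) / (len(dy) - 1)         # residual variance
    se = np.sqrt(s2 / (ylag @ ylag))             # standard error of pi_hat
    return pi_hat / se

rng = np.random.default_rng(4)
T = 500
t_rw = df_stat(np.cumsum(rng.normal(size=T)))    # unit root: do not reject

phi, e = 0.5, rng.normal(size=T)                 # stationary AR(1)
y_ar = np.empty(T)
y_ar[0] = e[0]
for i in range(1, T):
    y_ar[i] = phi * y_ar[i - 1] + e[i]
t_ar = df_stat(y_ar)                             # strongly negative: reject
```

For the stationary series the statistic is far below any plausible critical value, while for the simulated random walk it typically lies well inside the non-rejection region.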
However, if the time series is either increasing or decreasing over the sample, we would like to include a deterministic trend in the alternative hypothesis.6 This testing procedure would consider the use of the regression model, $\begin{eqnarray} \nonumber y_{t}=\beta_1 + \beta_2 t+\phi y_{t-1}+\varepsilon_{t} \end{eqnarray}$ which can be rewritten as, $\begin{eqnarray} \Delta y_{t}=\beta_1 + \beta_2 t+\pi y_{t-1}+\varepsilon_{t} \tag{5.2} \end{eqnarray}$ where $$\pi=\phi-1$$, once again. This test for a unit root would still consider whether or not $$\pi=0$$. However, in this case the implications are somewhat different, since $\begin{eqnarray} \nonumber H_{0}\; :\;\; \pi=0 \end{eqnarray}$ which implies that $$y_{t}\sim I(1)$$ with drift, against the alternative, $\begin{eqnarray} \nonumber H_{1}\; :\;\; \pi<0 \end{eqnarray}$ which implies that $$y_{t}\sim I(0)$$, but with a deterministic time trend (i.e. the process is trend-stationary). Note that the properties of the asymptotic distribution of the $$t$$-statistic will change if either a constant or a time trend is included in the estimated regression model. As such, the critical values would differ from those that are provided for the previous case. Since the alternative hypotheses in both of the above tests do not allow for any persistence in the underlying process, the residuals may be autocorrelated. This led to the development of the augmented Dickey-Fuller (ADF) test, which is described in Dickey and Fuller (1981). It controls for residual autocorrelation by including lagged values of $$\Delta y_{t}$$, which allows the underlying process to follow a higher order AR($$p$$) process.
To see how this works, consider an AR(2) representation, $\begin{eqnarray} \nonumber y_{t}=\beta_1 + \beta_2 t+\phi_{1}y_{t-1}+\phi_{2}y_{t-2}+\varepsilon_{t} \end{eqnarray}$ which is the same as, $\begin{eqnarray} \nonumber y_{t}=\beta_1 + \beta_2 t+(\phi_{1}+\phi_{2})y_{t-1}-\phi_{2}(y_{t-1}-y_{t-2})+\varepsilon_{t} \end{eqnarray}$ Subtracting $$y_{t-1}$$ from both sides gives $\begin{eqnarray} \nonumber \Delta y_{t}=\beta_1+\beta_2 t+\pi y_{t-1}+\gamma_{1}\Delta y_{t-1}+\varepsilon_{t} \end{eqnarray}$ where we have defined $$\pi=\phi_{1}+\phi_{2}-1$$ and $$\gamma_{1}=-\phi_{2}$$. Hence, if we allowed for $$p$$ lags in the autoregressive process, we would have $\begin{eqnarray}\nonumber \Delta y_{t}=\beta_1 +\beta_2 t+\pi y_{t-1}+\overset{p}{\underset{j=1}{\sum}}\gamma_{j}\Delta y_{t-j}+\varepsilon_{t} \end{eqnarray}$ where $$\pi=\sum_{j=1}^{p}\phi_{j}-1$$ and $$\gamma_{j}=-\sum_{k=j+1}^{p}\phi_{k}$$, for $$j=\{1,2,3,\ldots, p\}$$. The lag length $$p$$ can be selected using information criteria, such as the BIC or AIC. This allows us to isolate the persistence from other stationary components, and this particular test may also be used to isolate the effects of intercepts and linear time trends, where we essentially have three test equations, $\begin{eqnarray} \Delta y_t = \pi y_{t-1} + \sum_{i=2}^{p}\gamma_i \Delta y_{t-i+1} + \varepsilon_t \tag{5.3} \\ \Delta y_t = \beta_1 + \pi y_{t-1} + \sum_{i=2}^{p}\gamma_i \Delta y_{t-i+1} + \varepsilon_t \tag{5.4} \\ \Delta y_t = \beta_1 + \beta_2 t + \pi y_{t-1} + \sum_{i=2}^{p}\gamma_i \Delta y_{t-i+1} + \varepsilon_t \tag{5.5} \end{eqnarray}$ The differences between these regressions concern the inclusion of $$\beta_1$$ and $$\beta_2$$, where equation (5.3) refers to a pure random walk model, equation (5.4) includes an intercept or drift term, and equation (5.5) includes both a drift and a linear time trend.
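The three ADF test equations differ only in the deterministic terms that are included. A minimal sketch of the augmented regression, with the specification controlled by a `spec` argument (an illustrative implementation of my own, not a library routine), is:

```python
import numpy as np

def adf_stat(y, p=1, spec="ct"):
    """t-statistic on pi in the ADF regressions (5.3)-(5.5).

    spec selects the deterministic terms: "n" (none, eq. 5.3),
    "c" (constant, eq. 5.4) or "ct" (constant and trend, eq. 5.5).
    """
    dy = np.diff(y)
    rows = len(dy) - p
    Y = dy[p:]                                   # dependent variable
    cols = [y[p:-1]]                             # y_{t-1}
    cols += [dy[p - j:len(dy) - j] for j in range(1, p + 1)]  # lagged differences
    if spec in ("c", "ct"):
        cols.append(np.ones(rows))
    if spec == "ct":
        cols.append(np.arange(1.0, rows + 1))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    s2 = resid @ resid / (rows - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se

rng = np.random.default_rng(5)
T = 600
rw = np.cumsum(rng.normal(size=T))      # unit root process

e = rng.normal(size=T)                  # stationary AR(1) with phi = 0.5
ar = np.empty(T)
ar[0] = e[0]
for i in range(1, T):
    ar[i] = 0.5 * ar[i - 1] + e[i]

t_rw = adf_stat(rw, p=2, spec="c")      # compare with roughly -2.86 at 5%
t_ar = adf_stat(ar, p=2, spec="c")      # strongly negative: reject the unit root
```

As with the simple Dickey-Fuller regression, the resulting statistic must be compared against the tabulated critical values for the chosen specification, not against the standard $$t$$-distribution.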
In each case, the parameter of interest is $$\pi$$, where if $$\pi =0$$ then the process $$y_t$$ contains a unit root. Comparing the calculated $$t$$-statistic with the critical values from the Dickey-Fuller tables determines whether or not we should reject the null hypothesis, $$H_{0}: \; \pi =0$$. Although the method is the same regardless of which equation is used, the critical values of the $$t$$-statistics depend on whether an intercept or time trend is included, and these critical values will also depend on the sample size. Dickey and Fuller (1981) include three additional $$F$$-statistics, which we denote $$\varphi_1 , \varphi_2$$ and $$\varphi_3$$. These statistics are used to test joint hypotheses on the coefficients and may be used to determine whether (5.3), (5.4) or (5.5) is appropriate for the underlying data generating process. This is of importance as these test equations have different critical values. The null hypothesis for equation (5.4), where $$\pi = \beta_1 = 0$$, is tested using $$\varphi_1$$, to determine whether the process could possibly include a constant. If we are not able to reject this null hypothesis, then we should make use of equation (5.3). The null hypothesis for equation (5.5), where $$\pi = \beta_1 = \beta_2 = 0$$, is tested using $$\varphi_2$$, to determine whether the process could possibly include a constant and a time trend. If we are not able to reject this null hypothesis, then we should make use of equation (5.4). The joint hypothesis $$\pi = \beta_2 = 0$$ may also be tested with the aid of $$\varphi_3$$, which seeks to determine whether the process has a deterministic time trend.
The values for the $$\varphi_1 , \varphi_2$$ and $$\varphi_3$$ statistics are constructed as if they were $$F$$-tests, $\begin{eqnarray} \nonumber \varphi_i = \frac{[RSS(restricted) - RSS(unrestricted)] / r}{RSS(unrestricted) / (T-k)} \end{eqnarray}$ where $$RSS(restricted)$$ and $$RSS(unrestricted)$$ are the sums of the squared residuals for the two variants of the model, $$r$$ is the number of restrictions, $$T$$ is the number of usable observations, and $$k$$ is the number of estimated parameters in the unrestricted model. When comparing the calculated value of $$\varphi_i$$ to the values in the Dickey-Fuller tables, we need to determine the significance level at which the restriction is binding, to test the null hypothesis that the data is generated by the restricted model. In this case the alternative hypothesis is that the data is generated by the unrestricted model. If the restriction is not binding, $$RSS(restricted)$$ should be close to the value for $$RSS(unrestricted)$$, and $$\varphi_i$$ will be small. This would imply that large values of $$\varphi_i$$ suggest that the restriction is binding, which would result in a rejection of the null hypothesis.

## 5.2 Implementing an ADF test

When implementing the augmented Dickey-Fuller test, it has been suggested that one should employ a general-to-specific approach, where the first step is to make use of the test equation that includes a constant and time trend. If we find that we are unable to reject the null of a unit root, we would then need to consider the value of $$\varphi_3$$. If we are unable to reject the null that the process does not include a time trend, then we would need to estimate the subsequent test equation. The full details of this testing procedure are provided in Figure 5.

Figure 5: Augmented Dickey-Fuller: general-to-specific procedure

Note that if we suspect that the process is integrated of the second order, we would need to perform Dickey-Fuller tests on successive differences of $$y_t$$.
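The $$\varphi_i$$ statistics are ordinary $$F$$-type ratios of restricted and unrestricted residual sums of squares. As a sketch, $$\varphi_1$$ for equation (5.4) can be computed as follows (the restricted model under $$\pi = \beta_1 = 0$$ reduces to $$\Delta y_t = \varepsilon_t$$; the helper `rss` is my own). The resulting value must be compared with the Dickey-Fuller $$\varphi$$ tables rather than standard $$F$$ tables:

```python
import numpy as np

def rss(Y, X):
    """Residual sum of squares from an OLS regression of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

rng = np.random.default_rng(6)
T = 400
y = np.cumsum(rng.normal(size=T))       # pure random walk (null of phi_1 holds)

dy, ylag = np.diff(y), y[:-1]
n = len(dy)

# Unrestricted model (5.4): dy_t = beta_1 + pi * y_{t-1} + e_t
X_u = np.column_stack([np.ones(n), ylag])
rss_u = rss(dy, X_u)

# Restricted model under pi = beta_1 = 0: dy_t = e_t
rss_r = dy @ dy

r, k = 2, 2                              # restrictions; parameters in X_u
phi_1 = ((rss_r - rss_u) / r) / (rss_u / (n - k))
```

Since the simulated series actually is a pure random walk, $$\varphi_1$$ will typically be small here, so the restricted model (5.3) would not be rejected.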
For example, if we want to test whether $$y_t \sim I(2)$$, then we would estimate the equation, $\begin{eqnarray}\nonumber \Delta^2 y_t = \mu + \xi_1 \Delta y_{t-1} + \varepsilon_t \end{eqnarray}$ If we cannot reject the null that $$\xi_1 = 0$$, we would conclude that $$y_t$$ is $$I(2)$$.

## 5.3 Unit roots and structural breaks

In a much cited paper, Perron (1989) showed that the ADF test has little power to discriminate between a stochastic and deterministic trend when the data is subject to a structural break. This would imply that in the presence of structural breaks, the various ADF tests are biased towards the non-rejection of a unit root. For example, consider the moving average representation of an autoregressive model, $$y_t = S_t + 0.5 \sum \varepsilon_t$$. This time series has been simulated for 500 observations, where the level shift is described by $$S$$, where for the first half of the sample, $$S_{1-249} = 0$$, and for the second half, $$S_{250-500} = 10$$. This time series is depicted in Figure 6.

Figure 6: Stationary time series plus structural break

If we were to fit an AR(1) model to this process, the coefficient would be biased towards unity, since low values are followed by low values, and high values are followed by high values. Hence, the ADF tests of this misspecified model may suggest that this process follows a random walk plus drift, when it is clearly just a stationary time series with a structural break. Perron (1989) includes a formal procedure for testing for unit roots in the presence of a structural change. The parameter $$\tau$$ is used to denote the position of the structural break, which in the above example would occur at position 250.
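The bias that Perron (1989) highlights is easy to reproduce: fitting an AR(1) model to a stationary series with a level shift pushes the autoregressive coefficient towards unity. A sketch (using a break at observation 250, as in the example above; the simulated process here is simply white noise around a shifting mean):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 500

# Stationary noise around a mean that jumps from 0 to 10 at observation 250.
S = np.where(np.arange(T) < 250, 0.0, 10.0)
y = S + rng.normal(size=T)

# OLS fit of an AR(1) with intercept: the break pushes phi_hat towards one.
X = np.column_stack([np.ones(T - 1), y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
phi_hat = beta[1]
```

Within each regime the series has no persistence at all, yet the estimated autoregressive coefficient comes out close to one, which is why an ADF test on this series may fail to reject the unit root null.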
This test could take one of the following three forms. If we assume that the null considers a one-time jump (pulse) in the level of the unit root process, we could construct the hypothesis $\begin{eqnarray} \nonumber H_0 \; : \;\; y_t = \mu + y_{t-1} + \beta_1 D_P + \varepsilon_t \end{eqnarray}$ where $$D_P = 1$$ if $$t = \tau +1$$, and 0 otherwise. This specification would describe a random walk plus drift with the addition of a structural break. Note that as a unit root process has infinite memory, the effect of the structural break at $$\tau +1$$ will be present in the remainder of the time series. In addition, since we know that a random walk plus drift would usually trend upwards or downwards, an appropriate alternative hypothesis would be to consider a (level shift) structural break in the intercept of a stationary process that has a deterministic trend,7 $\begin{eqnarray} \nonumber H_1 \; : \;\; y_t = \mu + \alpha t + \beta_2 D_L + \varepsilon_t \end{eqnarray}$ where $$D_L = 1$$ if $$t > \tau$$, and 0 otherwise. To consider a permanent change in the drift of a unit root process, we could construct the null hypothesis, $\begin{eqnarray} \nonumber H_0\; : \;\; y_t = \mu + y_{t-1} + \beta_1 D_L + \varepsilon_t \end{eqnarray}$ where $$D_L = 1$$ if $$t > \tau$$, and 0 otherwise. In this case the infinite memory of the random walk plus drift would ensure that the inclusion of the level shift dummy would provide behaviour that may be characterised by an increase (or decrease) in the drift. As such, an appropriate alternative hypothesis would be to consider a trend-stationary process that has a dummy variable with such a change in slope, $\begin{eqnarray} \nonumber H_1 \; : \;\; y_t = \mu + \alpha t + \beta_3 D_T + \varepsilon_t \end{eqnarray}$ where $$D_T = t-\tau$$ if $$t > \tau$$, and 0 otherwise.
To consider a change in both the level and drift, we could construct the null hypothesis that makes use of the previous two specifications, $\begin{eqnarray}\nonumber H_0 \; : \;\; y_t = \mu + y_{t-1} + \beta_1 D_P + \beta_2 D_L + \varepsilon_t \end{eqnarray}$ for which the alternative would also make use of the previous two specifications, such that $\begin{eqnarray}\nonumber H_1 \; : \;\; y_t = \mu + \alpha t + \beta_2 D_L + \beta_1 D_T + \varepsilon_t \end{eqnarray}$ After making use of this procedure, Perron (1989) found that there was less evidence of unit roots in economic time series than had previously been reported in the literature. To implement this procedure, one could estimate the model for the alternative hypothesis, which may contain the effects of the constant, time trend and structural break. The residuals from this model would then exclude the effects of these terms and could be tested using a simple ADF specification, as provided in equation (5.3). Alternatively, if we are testing the null of a one-time jump in a unit root process (against the alternative of a level shift in a trend-stationary process), one could combine these steps by estimating the equation, $\begin{eqnarray}\nonumber y_t = \mu + \phi_1 y_{t-1} + \alpha t + \beta_2 D_L + \sum_{i=1}^{k} \gamma_i \Delta y_{t-i} + \varepsilon_t \end{eqnarray}$ Appropriate critical values for this hypothesis test are contained in Perron (1989). While this technique is highly intuitive, Christiano (1992) and a number of other researchers criticised the Perron approach on the basis that it required prior knowledge about the exact date of such a break point, which is not always available. This led to the development of a number of different methods that treat the break point as unknown (prior to testing). Examples of these procedures are contained in the work of Perron and Vogelsang (1992), Banerjee, Lumsdaine, and Stock (1992), Perron (1997), and Vogelsang and Perron (1998).
While most of these studies provide interesting insights, the technique that is described in Zivot and Andrews (2002) is the most popular procedure for identifying a unit root with an unknown endogenous structural break. This procedure makes use of an optimisation routine that identifies the date of the endogenous structural shift as that point which gives the least favourable result for the null hypothesis of a random walk with drift. Therefore, the test statistics are formulated as, $\begin{eqnarray} \nonumber \Delta y_t = \mu + \pi y_{t-1} + \alpha t + \beta_2 D_L (\hat{\lambda}) + \sum_{i=1}^{k} \gamma_i \Delta y_{t-i} + \varepsilon_t \\ \nonumber \Delta y_t = \mu + \pi y_{t-1} + \alpha t + \beta_3 D_T (\hat{\lambda}) + \sum_{i=1}^{k} \gamma_i \Delta y_{t-i} + \varepsilon_t \\ \nonumber \Delta y_t = \mu + \pi y_{t-1} + \alpha t + \beta_2 D_L (\hat{\lambda}) + \beta_3 D_T (\hat{\lambda}) + \sum_{i=1}^{k} \gamma_i \Delta y_{t-i} + \varepsilon_t \end{eqnarray}$ where $$\hat{\lambda}$$ is the estimated date of the structural break and we are essentially interested in the value of $$\pi=\phi-1$$. Critical values for this technique are provided in Zivot and Andrews (2002). # 6 Testing the assumption of stationarity An alternative testing procedure has been proposed by Kwiatkowski et al. (1992), who consider the null hypothesis that a series is stationary. In this case the alternative hypothesis is that the variable is nonstationary (i.e. $$I(1)$$). This procedure is usually referred to as the KPSS test. 
To consider the intuitive appeal of this procedure, assume that the data generating process has the form, $\begin{eqnarray} y_{t}=\mu+x_{t}+\upsilon_{t} \tag{6.1} \end{eqnarray}$ where $$\mu$$ is a constant, $$\upsilon_{t}$$ is a stationary component, and $$x_{t}$$ takes the form of a random walk, such that $\begin{eqnarray} x_{t}=x_{t-1}+\varepsilon_{t} \;\;\; \text{where }\; \varepsilon_{t}\sim \mathsf{i.i.d.} \mathcal{N} \left(0,\sigma^{2}\right) \tag{6.2} \end{eqnarray}$ It can then be shown that if the variance of $$\varepsilon$$ is zero, then $$x_{t}=x_{0}$$ for all $$t$$. That is, if there is no variation in the error term, $$\varepsilon_t$$, then $$x_t$$ must be constant. This would imply that $$y_{t}$$ would be stationary when $$\sigma^{2}=0$$, as it would only include constants and the stationary process, $$\upsilon_{t}$$. Therefore, the test statistic could be formulated with the null hypothesis that $$y_{t}$$ is stationary, where we specify, $\begin{eqnarray} \nonumber H_{0}\; :\sigma^{2}=0 \end{eqnarray}$ which implies that $$x_{t}$$ is a constant, against the alternative hypothesis, $\begin{eqnarray} \nonumber \ H_{1}\; :\sigma^{2}>0 \end{eqnarray}$ which implies that $$x_{t}$$ varies over time and $$y_{t}$$ will be nonstationary. To derive the test statistic we regress $$y_{t}$$ on a constant, $$\mu$$, to obtain the residuals, which we call $$\hat{\upsilon}_{t}$$. Thereafter, we calculate $$\hat{S}_{t}=\sum_{s=1}^{t}\hat{\upsilon}_{s}$$ and $$\hat{\sigma}_{\infty}^{2}$$, which relates to the long-run variance of the process. The KPSS test statistic could then be derived with the aid of the following calculation, $\begin{eqnarray} KPSS=\frac{1}{T^{2}}\frac{\sum_{t=1}^{T}\hat{S}_{t}^{2}}{\hat{\sigma}_{\infty}^{2}} \tag{6.3} \end{eqnarray}$ This test statistic may be augmented to allow for additional deterministic components, such as a deterministic trend. Note that any changes to the test equation would require a different set of critical values. 
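The calculation in equation (6.3) is short enough to sketch directly. The illustration below (not code from the text) demeans the series, forms the partial sums, and estimates the long-run variance $$\hat{\sigma}_{\infty}^{2}$$ with a Bartlett-kernel (Newey-West) estimator, which is one common choice:

```python
def kpss_stat(y, lags):
    """KPSS statistic for the level-stationary case: demean, take partial
    sums of the residuals, and scale by the long-run variance estimate."""
    T = len(y)
    mu = sum(y) / T
    v = [yi - mu for yi in y]       # residuals from the constant-only regression
    S, s = [], 0.0
    for vi in v:                     # partial sums S_t
        s += vi
        S.append(s)
    # Bartlett-kernel (Newey-West) long-run variance estimate
    gamma0 = sum(vi * vi for vi in v) / T
    lrv = gamma0
    for j in range(1, lags + 1):
        gj = sum(v[t] * v[t - j] for t in range(j, T)) / T
        lrv += 2.0 * (1.0 - j / (lags + 1.0)) * gj
    return sum(St * St for St in S) / (T * T * lrv)
```

A bounded (stationary-looking) series gives a small statistic, while a trending series gives a large one, in line with the null and alternative described above.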
# 7 Bayesian analysis and unit roots Up to this point we have adopted the classical statistical perspective, where we estimate the value of $$\phi$$ in an autoregressive model. When using these classical techniques, the Dickey-Fuller testing procedure suggested that if the uncertainty with which we estimate the coefficient value is relatively high, and that coefficient is relatively close to one, then we would be unable to reject the null of a unit root. When using Bayesian estimation techniques, all the parameters are treated as random variables, so we need to specify the moments for the prior distribution of the parameter. To derive the final posterior estimates we would then multiply the prior distribution by the likelihood function (which provides a summary of the parameter estimates, conditional on the observed values of the data). Note that in this case, if the distribution for the likelihood function is relatively flat, which would occur when the data suggests that there is a great deal of uncertainty about the parameter estimates, then the posterior would converge on the prior. Similarly, when the likelihood function is relatively narrow and there is a great deal of certainty relating to the estimated parameter values, then the posterior would converge on the likelihood function. Hence, if we suspect that the time series contains a unit root, then we would make use of a prior distribution that has a mean value of unity. If the data strongly suggests that this is not a unit root process, then the posterior would converge on the value that is provided by the likelihood function, to provide a parameter estimate that is less than one. Similarly, if the data suggests that there is a great deal of uncertainty about the possible value of the parameter, then the final parameter estimate would be unity (or close to unity). In this way the final parameter estimate is not biased. 
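The convergence behaviour described above can be made concrete. Under a normal prior and an (approximately) normal likelihood for the AR(1) coefficient, the posterior mean is a precision-weighted average of the prior mean and the point estimate; the sketch below (all numbers are illustrative assumptions, not values from the text) shows a sharp likelihood pulling the posterior away from a unit-root prior, while a flat likelihood leaves it at unity:

```python
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Normal prior x (approximately) normal likelihood: the posterior mean
    is a precision-weighted average of the two centres."""
    w_prior = 1.0 / prior_var
    w_data = 1.0 / estimate_var
    return (w_prior * prior_mean + w_data * estimate) / (w_prior + w_data)

# Unit-root prior centred on 1; a precise estimate of 0.8 dominates...
sharp = posterior_mean(1.0, 0.1, 0.8, 0.001)
# ...while the same estimate with a very flat likelihood leaves the prior intact
flat = posterior_mean(1.0, 0.001, 0.8, 10.0)
```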
For further use of Bayesian techniques in the presence of a unit root, see Sims (1988) and Sims and Uhlig (1991). # 8 Conclusion Standard regressions that are performed on nonstationary data may provide spurious results. This is important since many time series variables have deterministic or stochastic trends, which would imply that they are nonstationary. If a process returns to its (non-zero) trend value after a shock we say that it has a deterministic trend and is trend-stationary. These variables can be made stationary by removing the deterministic time trend. Time series variables that are integrated of order one, $$I(1)$$, can be made stationary by differencing. Such variables are often termed difference-stationary, or we say that they have one unit root. The most widely used unit root test is the Augmented Dickey-Fuller test, which should be employed within a general-to-specific procedure. The Perron test should be used in the presence of a known structural break, while the Zivot-Andrews test should be used for an unknown endogenous structural break. An alternative method that tests the null hypothesis of stationarity is the KPSS test. # 9 Appendix ## 9.1 Monte Carlo simulations for the bias in a unit root In the tutorial we constructed a number of simulation exercises, where we noted that after generating a random walk process 10,000 times, the estimated coefficients for $$\hat{\phi}$$ were biased to values below 1. The results of this simulation exercise are contained in Figure 7. Figure 7: Bias in unit root process when $$\phi=1$$ To make use of a Monte Carlo simulation for a data generating process (DGP) that may have been generated for a particular model, we need to specify information relating to the initial value, the parameter values, the number of observations, and the number of replications. Therefore, if we assume that the DGP is generated by an AR(1) model that does not have a constant, such as: $\begin{eqnarray} \nonumber y_t = \phi y_{t-1} + \epsilon_t, \;\;\; \text{for } t = 1, \ldots 
, T \;\;\; \text{and } \epsilon_t \sim \text{i.i.d.} \mathcal{N}(0, \sigma^2 ) \end{eqnarray}$ Then we would need to specify values for the following terms, where by way of example, $y_0 = 0, \phi = 1, \sigma = 1 \; \text{and } T = 100$. We would then be able to generate values for the variables with the aid of some form of simulation, where the number of simulations would need to take on a defined value, e.g. $$N = 10,000$$. Thereafter, we could estimate an AR(1) model for each of these simulated time series, which could be used to investigate the bias in the estimated value of $$\hat{\phi}$$. ## 9.2 Power studies The power of a test is the probability of rejecting the null hypothesis given that the null hypothesis is not true (that is, one minus the type II error). For example, consider the power of the Dickey-Fuller test, where we assume that you know the $$5\%$$ critical value of the one-sided $$t$$-test for $$\phi = 1$$, denoted by $$\tau_{0.05}$$. To ascertain the power of the Dickey-Fuller test for $$\phi \ne 1$$, we are interested in how often the test suggests that the series contains a unit root (when we know it doesn't). To obtain a sample of estimated $$t$$-statistics, we could simulate a large number of stationary series and apply the test to each. We could then consider different values of $$\phi$$ to investigate the relation between power and $$\phi$$, which may be used to draw a power function. These studies suggest that the power of the Dickey-Fuller test is relatively low. For example, when making use of a simulation exercise for a stationary time series process that has a long memory, where $$\phi = 0.95$$, we noted that the Dickey-Fuller test was only able to reject the null of a unit root 4.3% of the time (when using the critical values at the 95% level). # 10 References Banerjee, A., R. L. Lumsdaine, and J. H. Stock. 1992. “Recursive and Sequential Tests of the Unit-Root and Trend-Break Hypotheses: Theory and International Evidence.” Journal of Business and Economic Statistics 10(3): 271–87. Christiano, Lawrence J. 1992. 
“Searching for a Break in GNP.” Journal of Business and Economic Statistics 10(3): 237–50. Dickey, D. A., and W. A. Fuller. 1979. “Distribution of the Estimates for Autoregressive Time Series with a Unit Root.” Journal of the American Statistical Association 74(366): 427–31. ———. 1981. “Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root.” Econometrica 49: 1057–72. Haldrup, N., and W. Jansen. 2006. “Palgrave Handbook of Econometrics: Vol 1 Economic Theory.” In, edited by T. Mills and K. Patterson. Palgrave Macmillan. Kwiatkowski, D., P. C. B. Phillips, P. Schmidt, and Y. Shin. 1992. “Testing the Null Hypothesis of Stationarity Against the Alternative of a Unit Root: How Sure Are We That Economic Time Series Have a Unit Root?” Journal of Econometrics 54(1): 159–78. MacKinnon, J. 1991. “Long-Run Economic Relationships: Readings in Cointegration.” In, edited by R. F. Engle and C. W. J. Granger. Advanced Texts in Econometrics. Oxford: Oxford University Press. Nelson, C.R., and C.I. Plosser. 1982. “Trends and Random Walks in Macroeconomic Time Series: Some Evidence and Implications.” Journal of Monetary Economics 10: 139–62. Perron, Pierre. 1989. “The Great Crash, the Oil Price Shock, and the Unit Root Hypothesis.” Econometrica, 1361–1401. ———. 1997. “Further Evidence on Breaking Trend Functions in Macroeconomic Variables.” Journal of Econometrics 80: 355–85. ———. 2006. “Palgrave Handbook of Econometrics, Volume 1.” In, 278–352. Palgrave Macmillan. Perron, Pierre, and Timothy Vogelsang. 1992. “Nonstationarity and Level Shifts with an Application to Purchasing Power Parity.” Journal of Business and Economic Statistics 10: 301–20. Sims, Christopher A. 1988. “Bayesian Skepticism on Unit Root Econometrics.” Journal of Economic Dynamics and Control 12(2-3): 463–74. Sims, Christopher A., and Harald Uhlig. 1991. “Understanding Unit Rooters: A Helicopter Tour.” Econometrica 59(6): 1591–9. Vogelsang, Timothy, and Pierre Perron. 1998. 
“Additional Tests for a Unit Root Allowing for a Break in the Trend Function at an Unknown Time.” International Economic Review 39: 1073–1100. Yule, G. U. 1926. “Why Do We Sometimes Get Nonsense-Correlations Between Time Series?” Journal of the Royal Statistical Society 89: 1–64. Zivot, Eric, and Donald Andrews. 2002. “Further Evidence on the Great Crash, the Oil-Price Shock, and the Unit Root Hypothesis.” Journal of Business and Economic Statistics 20: 25–44. 1. Similar results were obtained after including a deterministic time trend in the model. 2. The earlier results for the regression in levels may have been due to the dramatic improvements to medicine and a change in preferences (to get married) that may have occurred over this period of time. 3. Such an example may allow for instances where a change in technology permanently affects the level of output. 4. This would imply that it is only when the calculated value of the test statistic is smaller (or more negative) that we are able to reject the null of a unit root. 5. The values of Dickey and Fuller (1979) have been included in the urca package. 6. For example, the level of economic output is usually increasing over time. 7. Figure 3 contains an example of a random walk plus drift.
https://testbook.com/question-answer/emf-equation-of-alternator-is-__________--630c95a26ed9f308af80d957
# EMF equation of alternator is __________. This question was previously asked in UPPCL Technician (Electrical) 28 Mar 2021 Official Paper (Shift 2). 1. 4.44 ØFT volts 2. 2.22 ØFT volts 3. 2.44 ØFT volts 4. 4.22 ØFT volts Option 1 : 4.44 ØFT volts ## Detailed Solution The correct answer is option 1. Concept: EMF Equation of Alternator: The emf induced by the alternator or synchronous generator is three-phase alternating in nature. Let us derive the mathematical equation of the emf induced in the alternator. Let, Z = number of conductors in series per phase, where Z = 2T and T is the number of coils or turns per phase (one turn has two coil sides, i.e. two conductors). P = number of poles. f = frequency of induced emf in hertz. Φ = flux per pole in webers. Kp = pitch factor, Kd = distribution factor. N = speed of the rotor in rpm (revolutions per minute), so N/60 = speed of the rotor in revolutions per second. Time taken by the rotor to complete one revolution: dt = 1/(N/60) = 60/N seconds. In one revolution of the rotor, the total flux cut by each conductor in the stator poles is $${\bf{d}}ϕ = ϕ {\bf{P}}\;{\bf{weber}}$$ By Faraday’s law of electromagnetic induction, the emf induced is proportional to the rate of change of flux. 
Average emf induced per conductor = $$\frac{{{\bf{d}}ϕ }}{{{\bf{dt}}}} = \frac{{ϕ {\bf{P}}}}{{\frac{{60}}{{\bf{N}}}}} = \frac{{ϕ {\bf{NP}}}}{{60}}$$ We know the frequency of induced emf $${\bf{f}} = \frac{{{\bf{PN}}}}{{120}}\;,\;{\bf{N}} = \frac{{120{\bf{f}}}}{{\bf{P}}}$$ Substituting the value of N in the induced emf equation, we get Average emf induced per conductor = $$\frac{{ϕ {\bf{P}}}}{{60}} \times \frac{{120{\bf{f}}}}{{\bf{P}}} = 2ϕ {\bf{f}}\;{\bf{volts}}$$ If there are Z conductors in series per phase, Average emf induced per phase = $$2ϕ {\bf{fZ}} = 4ϕ {\bf{fT}}\;{\bf{volts}}$$ RMS value of emf per phase = Form factor × Average value of induced emf = 1.11 × 4 Φ f T RMS value of emf per phase = 4.44 Φ f T volts The equation obtained above is the actual value of the induced emf for a full-pitched or concentrated coil. However, the voltage equation gets modified because of the winding factors. Actual induced emf per phase (E) = 4.44 Kp Kd Φ f T volts Conclusion: Hence the EMF induced in the alternator is independent of the type of alternator used.
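The final formula is easy to evaluate directly. In the sketch below the flux, frequency, and turns are made-up illustrative values (not from the question); with Kp = Kd = 1 the function reduces to the full-pitch, concentrated-winding value 4.44 Φ f T:

```python
def alternator_emf(flux_wb, freq_hz, turns_per_phase, k_p=1.0, k_d=1.0):
    """RMS emf per phase of an alternator: E = 4.44 * Kp * Kd * phi * f * T."""
    return 4.44 * k_p * k_d * flux_wb * freq_hz * turns_per_phase

# Illustrative machine: 0.05 Wb per pole, 50 Hz, 120 turns per phase
e_full_pitch = alternator_emf(0.05, 50, 120)   # 4.44 * 0.05 * 50 * 120 = 1332 V
```

With realistic winding factors (both below 1) the induced emf per phase is always somewhat lower than the full-pitch value.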
https://math.stackexchange.com/questions/2145275/projecting-the-line-y-x-onto-the-riemann-sphere
# Projecting the line y=x onto the Riemann Sphere I am trying to project the line "y=x" on the complex plane, including the point at infinity, to the Riemann Sphere. I know the projection is a circle, but I want to understand how to find the radius of the circle on the sphere. Any hints are appreciated. Thank you! • Considering the tag you attached, you want to find the image circle of (the inverse of) the stereographic projection of the line $y=x$, right? – cjackal Feb 15 '17 at 7:52 • Sorry for not being clear. Here is a better way to phrase the question. Describe the projection on the Riemann Sphere of the following set in the complex plane, the line y=x (including the point at infinity). – GentGjonbalaj Feb 15 '17 at 7:55 • The line $y=x$ becomes a great circle on the Riemann sphere, so its radius is the radius of the sphere. This is your question? – Emilio Novati Feb 15 '17 at 7:59 • Your question is ambiguous in at least two points: the meaning of the projection is not settled, and there are many (infinitely many) ways to endow the Riemann sphere with a metric structure. But I guess the second ambiguity can be removed once you make clear what the projection is. – cjackal Feb 15 '17 at 7:59 • @EmilioNovati Yes that was my question. I knew that was going to be the projection. But how can I arrive at the solution algebraically? – GentGjonbalaj Feb 15 '17 at 8:09 (The lines from the North pole to the line $y=x$ form a plane perpendicular to the complex plane.)
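One way to see this computationally: with the common convention of a unit sphere projected from the north pole (0, 0, 1) onto the equatorial plane (other texts scale differently, which is the ambiguity raised in the comments), the inverse stereographic projection sends x + iy to a point on the sphere. Every image of a point on y = x then lies in the vertical plane Y = X, which passes through both poles, so the image is a great circle whose radius equals the sphere's radius:

```python
import math

def inverse_stereographic(x, y):
    """Send the complex point x + i*y to the unit sphere, projecting from
    the north pole (0, 0, 1) onto the equatorial plane (one common convention)."""
    d = 1.0 + x * x + y * y
    return (2.0 * x / d, 2.0 * y / d, (x * x + y * y - 1.0) / d)
```

Sampling points on the line confirms that each image lies on the sphere and in the plane Y = X, and that the images approach the north pole (the point at infinity) as |z| grows.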
https://socratic.org/questions/how-do-you-simplify-sqrt-50-6-2
# How do you simplify sqrt(50*6^2)? Jul 2, 2016 $\sqrt{50 \cdot {6}^{2}} = 30 \sqrt{2}$ #### Explanation: $\sqrt{50 \cdot {6}^{2}}$ = $\sqrt{5 \times 5 \times 2 \times 6 \times 6}$ = $5 \times 6 \times \sqrt{2}$ = $30 \sqrt{2}$
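A quick numerical check of the simplification (an illustration added here, not part of the original answer):

```python
import math

# sqrt(50 * 6^2) = sqrt(5 * 5 * 2 * 6 * 6) = 5 * 6 * sqrt(2) = 30 * sqrt(2)
lhs = math.sqrt(50 * 6 ** 2)
rhs = 30 * math.sqrt(2)
```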
https://mathematica.stackexchange.com/questions/65732/creating-an-orrery
# Creating an orrery I am interested in creating an orrery, ideally with SystemModeler. I have been experimenting with the MultiBody library since that allows you to do animations. The most relevant example seems to be Modelica.Mechanics.MultiBody.Examples.Elementary.PointGravity, but the problem is that it has a single point of gravity. You can change the gravity from the earth's gravity to something else, but for my orrery I would think each body needs to have gravity/mass. The bodies do have a settable mass, though, so does this mean I can just create one body for each planet, set the mass correctly and it will work? So, in other words one way to set up the model would seem to be to make the "world" have the gravity of the sun and gravity type of PointGravity, then have the planets as bodies moving around it. However, I don't think this will work right because the "world" object's gravity is constant, but in the solar system the force of gravity is a function of the distance. Another idea I had was to make the "world" object have type NoGravity, but then how do I constrain the planets? If I add in the sun as a body, and give the planets and the sun appropriate masses and distances and velocities will the simulator compute the instantaneous gravity between them, ie, solve the N-body problem automatically? Afterthought: is there a way to customize the look of the "bodies"? In the example they are just blue spheres and you can change their color and size but that is it. • most folks here use Mathematica, and systemModeler is separate program from Mathematica. I suggest asking at community.wolfram.com they have special group for system modeler there. – Nasser Nov 14 '14 at 23:31 • There was a vote on meta and the majority seemed to think SM should be on topic. (meta.mathematica.stackexchange.com/questions/1336/…) – Tyler Durden Nov 14 '14 at 23:44 • @TylerDurden OK. I've forgotten that question. You're right. – Dr. 
belisarius Nov 15 '14 at 0:07 • Tyler, the question is still quite broad though. Did you already try any of the strategies you mention? Perhaps you need to tackle the problem incrementally (e.g. starting with two bodies). – Yves Klett Nov 15 '14 at 8:49 • It would be somewhat tedious, but otherwise not very difficult to write routines for an orrery. See for instance the animation at the bottom of this answer, tho I only did the terrestrial planets. – J. M. is away Aug 29 '15 at 11:43
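On the N-body question raised above: a simulator that applies mutual pairwise gravitation between the bodies does solve the N-body problem automatically, and the core loop is small. The sketch below is plain Python in normalized units (G = M = 1; nothing to do with SystemModeler's library API) and integrates one planet around a fixed sun with a leapfrog scheme; the orbit radius staying near 1 is a basic sanity check on the integrator:

```python
def orbit(steps=10000, dt=0.001):
    """Leapfrog integration of a test planet around a unit-mass sun at the
    origin (G = 1), started on a circular orbit with r = 1, v = 1.
    Returns the final orbital radius."""
    x, y = 1.0, 0.0
    vx, vy = 0.0, 1.0
    # half-step kick to start the leapfrog: a = -GM * r / |r|^3, |r| = 1 here
    vx += 0.5 * dt * (-x)
    vy += 0.5 * dt * (-y)
    for _ in range(steps):
        x += dt * vx                       # drift
        y += dt * vy
        r3 = (x * x + y * y) ** 1.5
        vx += dt * (-x / r3)               # kick
        vy += dt * (-y / r3)
    return (x * x + y * y) ** 0.5

r_final = orbit()
```

Extending this to a full orrery means summing the pairwise accelerations over all bodies inside the kick step; the sun then moves slightly as well, just as it does in the real solar system.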
http://slideplayer.com/slide/229312/
# Determining signs of Trig Functions (Pos/Neg) ## Presentation on theme: "Determining signs of Trig Functions (Pos/Neg)"— Presentation transcript: Determining signs of Trig Functions (Pos/Neg). Last class we found trig values using an x-y coordinate. Not all trig values are positive. We can determine the signs of trig values based on the quadrant that the terminal side of the angle falls in; the signs of the x and y coordinates factor into the signs of the trig values. Finding Trig Values from an x-y coordinate: Find the 6 trig functions for an angle which has terminal side passing through (-3, 4) and give them the proper signs. The sine comes from the y-coordinate, which is positive, over the hypotenuse, which is always positive; positive over positive means the sine is positive in the 2nd quadrant. The cosine comes from the x-coordinate, which is negative, over the (always positive) hypotenuse; negative over positive makes the cosine negative in the 2nd quadrant. The tangent is the y-coordinate (positive) over the x-coordinate (negative); positive over negative means tan is negative in the 2nd quadrant. csc has the same sign as sin, sec is the same as cos, cot is the same as 
tan. Determining signs of Trig Functions (Pos/Neg): add this into the original chart we started with. Quadrant I (x+, y+): sin +, cos +, tan +, csc +, sec +, cot +. Quadrant II (x-, y+): sin +, cos -, tan -, csc +, sec -, cot -. Quadrant III (x-, y-): sin -, cos -, tan +, csc -, sec -, cot +. Quadrant IV (x+, y-): sin -, cos +, tan -, csc -, sec +, cot -. Finding Trig Values from an x-y coordinate: Find the 6 trig functions for an angle which has terminal side passing through (2, -7) and give them the proper signs. Finding Trig Values from an x-y coordinate: Find the 6 trig functions for an angle which has terminal side passing through (-5, 3) and give them the proper signs. Homework – pg 697 #9, 13, 14, 17, 20
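The quadrant rules in the slides reduce to the signs of x and y, with the hypotenuse always positive. A small function capturing this (an illustration, not from the presentation; quadrantal angles with x = 0 or y = 0 are excluded):

```python
import math

def trig_signs(x, y):
    """Signs of the six trig functions for an angle whose terminal side
    passes through (x, y); the hypotenuse r is always positive."""
    r = math.hypot(x, y)
    def sign(v):
        return '+' if v > 0 else '-'
    return {'sin': sign(y / r), 'cos': sign(x / r), 'tan': sign(y / x),
            'csc': sign(r / y), 'sec': sign(r / x), 'cot': sign(x / y)}
```

Running it on the three example points reproduces the answers worked out on the slides.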
https://www.physicsforums.com/threads/request-for-equations.50349/
Request for equations Hi Does anyone know the projectile motion equations in which the acceleration is NOT constant? JasonRox Homework Helper Gold Member Learn Calculus. That's all you need. ahh..... Isn't there an equation(s) derived from the kinematic ones I can just apply? Integral Staff Emeritus Gold Member There is no way that anybody can guess what you are asking for. You will have to ask an understandable question. As for variable accelerations, just apply Newton's 2nd: F=ma Nothing said: Hi Does anyone know the projectile motion equations in which the acceleration is NOT constant? Isn't there an equation(s) derived from the kinematic ones I can just apply? I believe you probably already know the standard procedure here, Nothing. Usually, beginning with F = ma, just take the derivative of both sides of the equation since you are looking for the time rate of change of accel. dF/dt = m(dA/dt) However, the effectiveness of this equation goes beyond the original assumptions in Newton's law. In cases of rapid change of acceleration a modification of Newton's law is probable. :surprised Is that what you are getting at? Creator F=ma is a differential equation since $a =d^2 x /dt^2$ with x, a and F vectors. Given a certain force you can find the velocity or position as a function of time by integrating the force respectively one or two times. But you indeed need to know some calculus for that... Alkatran Homework Helper Is the change in acceleration constant? Doesn't: Position = Sum(x(i)*t^i/i!) where i goes from 0 to infinity, x(0) is initial position, x(1) is initial speed, x(2) is initial acceleration, etc.... ok you know that equation: y = v0 sin(theta) t - 0.5at^2 ? where v0 sin(theta) is the vertical component of the muzzle velocity is there a counterpart where a is not constant? arildno $$y(t)=v_{0}t\sin\theta-\int_{0}^{t}(\int_{0}^{\tau}a(s)ds)d\tau$$
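arildno's formula can be evaluated numerically for any acceleration profile. The sketch below (an illustration added here, not from the thread; the Riemann-sum step count is an arbitrary choice) computes y(t) = v₀ sin(θ) t − ∫₀ᵗ ∫₀^τ a(s) ds dτ:

```python
import math

def height(t, v0, theta, accel, n=10000):
    """y(t) = v0*sin(theta)*t minus the double time-integral of a(s),
    evaluated with a simple Riemann sum; accel is any function of time."""
    dt = t / n
    v_down = 0.0   # inner integral: speed accumulated from the acceleration
    drop = 0.0     # outer integral: displacement accumulated from that speed
    for i in range(n):
        v_down += accel(i * dt) * dt
        drop += v_down * dt
    return v0 * math.sin(theta) * t - drop
```

With constant a this recovers the familiar y = v₀ sin(θ) t − ½at², and any time-varying a(s) (drag-free but non-uniform gravity, a thrust schedule, etc.) can be plugged in the same way.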
https://mathematica.stackexchange.com/questions/236299/help-in-solve-the-matrix
# Help in solving the matrix I have written the following code. I have two main problems. The first is how to solve the matrix for different Omega, and the second is that if I put letters instead of numbers for k or m, the values will not add up (plus). Subscript[k, 1] = K; Subscript[k, 2] = K; Subscript[k, 3] = K; Subscript[m, 1] = M; Subscript[m, 2] = 2*M; Subscript[m, 3] = 2*M; n = 3; Format[m[n_]] := Subscript[m, n]; mv = Array[m, n]; (mm = (mv) IdentityMatrix[n]) // MatrixForm Format[k[n_]] := Subscript[k, n]; kv = Array[k, n]; (kk = (kv + Join[Rest[kv], {0}]) IdentityMatrix[n] + DiagonalMatrix[-Rest[kv], 1] + DiagonalMatrix[-Rest[kv], -1]) // MatrixForm (Omega = Solve[Det[kk - mm*\[Omega]] == 0, \[Omega]]) // N (Time = 2 Pi/Sqrt[Omega] // RootReduce) // N (mA = kk - mm*Omega[[i]] // RootReduce); mC = {0, 0}; mX = Array[\[Phi], 2]; eqn = mA.mX == mC; sol = Solve[eqn, mX] (sol = (SolveAlways[eqn, mX])) // N And how to get the answer in the form of a matrix as shown below n = 3 Table[Subscript[\[CapitalPhi], i, j], {i, n}, {j, n}]; MatrixForm[%] I thought maybe it would help: the top matrix is a modal matrix for structural modes. In the book it is mentioned that for each Omega there is a Phi vector, which is known as the characteristic vector (eigenvector); elsewhere, a unit value is assumed for the first component. Clear["Global`*"] k[1] = 4000; k[2] = 4000; k[3] = 5000; m[1] = 10; m[2] = 2; m[3] = 5; n = 3; mv = m /@ Range[n]; mm = (mv) IdentityMatrix[n]; kv = k /@ Range[n]; (kk = (kv + Join[Rest[kv], {0}]) IdentityMatrix[n] + DiagonalMatrix[-Rest[kv], 1] + DiagonalMatrix[-Rest[kv], -1]); (Omega = ω /. 
Solve[Det[kk - mm*ω] == 0, ω]) // N (* {177.181, 857.532, 5265.29} *) (Time = 2 Pi/Sqrt[Omega] // RootReduce) // N (* {0.472032, 0.214563, 0.0865902} *) (mA = Table[kk - mm*Omega[[i]], {i, n}] // RootReduce); mC = ConstantArray[0, n]; Format[ϕ[n_]] := Subscript[ϕ, n]; mX = Array[ϕ, n]; eqns = Table[mA[[i]].mX == mC, {i, n}]; (sol = Solve[#, mX] & /@ eqns // RootReduce // Quiet) /. x_Root :> N[x] The solution gives ϕ[2] and ϕ[3] in terms of ϕ[1]. If instead you want ϕ[1] and ϕ[2] in terms of ϕ[3], (sol2 = Solve[#, Most@mX, MaxExtraConditions -> All] & /@ eqns // RootReduce // Quiet) /. x_Root :> N[x] I do not understand the relation of the results to the pictures that you show. EDIT: Using symbolic values Clear["Global*"] k[1] = K; k[2] = K; k[3] = K; m[1] = M; m[2] = 2*M; m[3] = 2*M; n = 3; mv = Array[m, n]; (mm = (mv) IdentityMatrix[n]) // MatrixForm; kv = Array[k, n]; (kk = (kv + Join[Rest[kv], {0}]) IdentityMatrix[n] + DiagonalMatrix[-Rest[kv], 1] + DiagonalMatrix[-Rest[kv], -1]) // MatrixForm; (Omega = ω /. Solve[Det[kk - mm*ω] == 0, ω]) (* {K/M, (5 K - Sqrt[21] K)/(4 M), (5 K + Sqrt[21] K)/(4 M)} *) Time = 2 Pi/Sqrt[Omega]; mA = Table[kk - mm*Omega[[i]], {i, n}]; mC = ConstantArray[0, n]; Format[ϕ[m_, n_]] := Subscript[ϕ, m, n] mX = Array[ϕ, {n, n}]; eqn = Table[mA[[i]].mX[[i]] == mC, {i, n}]; sol = Table[Solve[eqn[[i]], mX[[i]]], {i, n}] // Quiet • Your comment is unclear. First, do not set k[2] = k Since k is being used as an indexed variable; do not also try to use it as a separate variable. At best it is confusing. Second, what "answer" are you talking about? In any event, anywhere k[1] + k[2] occurs it would be K + k since K and k are distinct. As I said before, if you are having a problem with your code show the actual code that demonstrates the problem. 
– Bob Hanlon Dec 11 '20 at 15:46
• First of all, thank you very much for providing the solution. I realized my mistake and for this reason I deleted the comment: I had put lowercase (k) letters instead of uppercase (K). The problem is now solved, but... – Scott Constantine Dec 11 '20 at 19:22
• To solve the problem, Phi must be like this: {{\[Phi][1, 1], \[Phi][1, 2], \[Phi][1, 3]}, {\[Phi][2, 1], \[Phi][2, 2], \[Phi][2, 3]}, {\[Phi][3, 1], \[Phi][3, 2], \[Phi][3, 3]}}, but in your code it is {Subscript[\[Phi], 1], Subscript[\[Phi], 2]}. – Scott Constantine Dec 11 '20 at 19:36
• The question in your comment makes no sense to me. The phi for a given omega have a set relationship; they cannot all be arbitrarily set, at most one of the three. To ask a question, show what you have tried and state where you are having a problem. – Bob Hanlon Dec 13 '20 at 16:40
• sol /. {\[Phi][1, 1] -> 1, \[Phi][2, 1] -> 1, \[Phi][3, 1] -> 1} Note that, as previously stated, only one of the phi for an omega can be assigned a value. – Bob Hanlon Dec 13 '20 at 18:00
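As a cross-check outside Mathematica, the condition det(kk − mm·ω) = 0 is the generalized eigenvalue problem K·ϕ = ω·M·ϕ, which can be solved numerically with NumPy (a sketch using the numeric k and m values from the answer above):

```python
import numpy as np

k = np.array([4000.0, 4000.0, 5000.0])
m = np.array([10.0, 2.0, 5.0])

# Tridiagonal stiffness matrix, mirroring the Mathematica construction:
# diagonal k_i + k_{i+1} (with k_4 = 0), off-diagonals -k_2, -k_3.
K = (np.diag(k + np.append(k[1:], 0.0))
     + np.diag(-k[1:], 1) + np.diag(-k[1:], -1))

# det(K - omega*M) = 0 is equivalent to the eigenproblem of M^{-1} K;
# since M is diagonal, M^{-1} K is just a row-wise division.
omega, phi = np.linalg.eig(K / m[:, None])
order = np.argsort(omega)
omega, phi = omega[order], phi[:, order]

# Mode shapes with the first component normalized to 1, matching the
# convention mentioned in the question.
modal = phi / phi[0, :]
```

The sorted eigenvalues reproduce the Mathematica result {177.181, 857.532, 5265.29}, and the columns of `modal` are the ϕ vectors with ϕ[1] set to unity.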
http://gmatclub.com/forum/sum-of-number-500-that-have-a-reminder-of-1-when-div-by-86377.html
# Sum of numbers < 500 that have a remainder of 1 when divided by 14

s3017789 wrote: Can someone explain the divisibility test for 14, if there is one?

The test seems to be that the number must be divisible by both 2 and 7, since 2*7 = 14. For example, 56 is divisible by 14 because 56/2 = 28 and 56/7 = 8.

Quote: I had a test that wanted to know the sum of numbers < 500 that had a remainder of 1 when divided by 14.

For this one, the only solution I could come up with is as follows. 36*14 = 504, which is > 500, so we are interested in multipliers of 35 or less (35*14 = 490). Any number divided by 14 can have remainder 0, 1, 2, 3, ..., 13:

499 = 35*14 + 9
...
492 = 35*14 + 2
491 = 35*14 + 1
490 = 35*14 + 0
489 = 34*14 + 13
...
1 = 0*14 + 1

So we are interested in values like 491, i.e. (multiple of 14) + 1. The sum looks like this:

(35*14) + 1 + (34*14) + 1 + (33*14) + 1 + ... + (1*14) + 1 + (0*14) + 1
= 36 + 14(35 + 34 + 33 + ... + 1)
= 36 + 14((35 + 1)(35)/2)
= 36 + (14*18*35)
= 36 + 8820
= 8856

I get the 36 by adding up all the 1's, and for the sum 35 + 34 + ... + 1 I'm just using the formula n(n+1)/2 (I think it's part of HH's notes). UPDATE: I plugged it into the computer and that's the answer I'm getting. I'd hate to get this on an exam, though.

A second poster replied: I don't know about a rule for 14, but here is my take. With these problems it always helps to find the first two values (to get a formula and confirm it is correct) and the last value. First value: 1, because 1/14 has remainder 1. Second value: 15. Formula: 14x + 1 < 500, starting at x = 0 (x = 0 giving the first value with remainder 1). To get the last value, solve 14x + 1 < 500 for x; the answer is x = 35 (the 36th number), and the 36th number is the value 491. The number of integers between 0 and 35 inclusive is 36. The median x is (35 + 0)/2 = 17.5, which must be plugged into the formula: 17.5*14 + 1 = 245 + 1 = 246. When numbers are evenly spaced the mean equals the median, and multiplying it by the number of terms gives the sum: 246*36 = 8856.

A Math Expert replied: drukpaGuy wrote: "I had a test that wanted to know the sum of numbers < 500 that had a remainder of 1 when divided by 14."
OK, first let's determine how many such numbers below 500 leave a remainder of 1 when divided by 14. The formula for these numbers is x = 14p + 1 (where p is an integer >= 0). 14p + 1 < 500 --> p < 35.6. Since p can be zero too, we have a total of 36 such numbers: 1, 15, 29, ..., 491 (the 36th number). Now, since the original question is not given in full, there can be two cases:

A. We are asked to determine the sum of these 36 numbers, 1 + 15 + 29 + ... + 491. As we have an arithmetic progression with first term 1 and common difference 14, the sum is Sn = n*(2*a1 + d*(n - 1))/2, where a1 is the first term (1 in our case), n is the number of terms (36), and d is the common difference (14). S = 36*(2 + 14*35)/2 = 8856.

B. If we are asked to determine just the sum of the counts 1 + 2 + 3 + ... + 36, then S = n*(1 + n)/2 = 36*37/2 = 666.
drukpaGuy wrote: Thanks guys. Sn = n*(2*a1 + d*(n - 1))/2, where a1 = first term, n = number of terms, d = difference between terms.

The Math Expert replied: We could use another formula for it as well: Sn = n*(a1 + an)/2, where an is the last term (491 in our case).
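Both derivations above are small enough to brute-force check in a few lines of Python:

```python
# Sum every number below 500 that leaves a remainder of 1 when divided by 14.
total = sum(x for x in range(500) if x % 14 == 1)
count = sum(1 for x in range(500) if x % 14 == 1)

# Cross-check with the arithmetic-progression formula Sn = n*(a1 + an)/2:
# the terms are 1, 15, ..., 491, i.e. 36 of them.
formula = 36 * (1 + 491) // 2

print(count, total, formula)  # 36 8856 8856
```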
https://www.campusgate.co.in/2019/05/equations-1-3.html
# Equations 1/3

17. A number exceeds 20% of itself by 40. The number is:
a. 50 b. 60 c. 80 d. 320

18. Three numbers are in the ratio 3:4:5. The sum of the largest and the smallest equals the sum of the third and 52. The smallest number is:
a. 20 b. 27 c. 39 d. 52 e. None of these

19. If 16% of 40% of a number is 8, then the number is:
a. 200 b. 225 c. 125 d. 320

20. If 3 is added to the denominator of a fraction, it becomes $\displaystyle\frac{1}{3}$, and if 4 is added to its numerator, it becomes $\displaystyle\frac{3}{4}$; the fraction is:
a. $\displaystyle\frac{4}{9}$ b. $\displaystyle\frac{3}{20}$ c. $\displaystyle\frac{7}{24}$ d. $\displaystyle\frac{5}{12}$

21. Of three numbers, the first is twice the second and half of the third. If the average of the three numbers is 56, then the smallest number is:
a. 24 b. 36 c. 40 d. 48

22. The difference of two numbers is 8 and $\displaystyle\frac{1}{12}$th of their sum is 1. The numbers are:
a. 10, 2 b. 18, 26 c. 10, 18 d. 26, 34

23. A number is 25 more than its $\displaystyle\frac{2}{5}$th. The number is:
a. 60 b. 80 c. $\displaystyle\frac{125}{3}$ d. $\displaystyle\frac{125}{7}$

24. The sum of three numbers is 68. If the ratio of the first to the second is 2:3 and that of the second to the third is 5:3, then the second number is:
a. 30 b. 20 c. 58 d. 48

25. The sum of two numbers is 100 and their difference is 37. The difference of their squares is:
a. 37 b. 100 c. 63 d. 3700
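The excerpt has no answer key, but several of these reduce to one-line equations that can be checked exactly with Python's fractions module (the solved values below are my own working, not from the source):

```python
from fractions import Fraction

# Q17: a number exceeds 20% of itself by 40 -> x = x/5 + 40.
x17 = Fraction(40) / (1 - Fraction(1, 5))

# Q19: 16% of 40% of a number is 8 -> (16/100)*(40/100)*x = 8.
x19 = Fraction(8) / (Fraction(16, 100) * Fraction(40, 100))

# Q25: sum 100, difference 37 -> a^2 - b^2 = (a + b)(a - b).
diff_sq = 100 * 37

print(x17, x19, diff_sq)  # 50 125 3700
```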
https://pymc3.readthedocs.io/en/latest/notebooks/model_comparison.html
# Model comparison

To demonstrate the use of model comparison criteria in PyMC3, we implement the 8 schools example from Section 5.5 of Gelman et al (2003), which attempts to infer the effects of coaching on SAT scores of students from 8 schools. Below, we fit a pooled model, which assumes a single fixed effect across all schools, and a hierarchical model that allows for a random effect that partially pools the data.

In [1]:
%matplotlib inline
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')

The data include the observed treatment effects and associated standard deviations in the 8 schools.

In [2]:
J = 8
y = np.array([28, 8, -3, 7, -1, 1, 18, 12])
sigma = np.array([15, 10, 16, 11, 9, 11, 10, 18])

## Pooled model

In [3]:
with pm.Model() as pooled:
    mu = pm.Normal('mu', 0, sd=1e6)
    obs = pm.Normal('obs', mu, sd=sigma, observed=y)
    trace_p = pm.sample(1000)

Auto-assigning NUTS sampler...
Initializing NUTS using ADVI...
Average Loss = 43.342: 4%|▍ | 7754/200000 [00:00<00:12, 14950.83it/s]
Convergence archived at 9400
Interrupted at 9,400 [4%]: Average Loss = 43.902
100%|██████████| 1500/1500 [00:00<00:00, 2870.72it/s]

In [4]:
pm.traceplot(trace_p);

## Hierarchical model

In [5]:
with pm.Model() as hierarchical:
    eta = pm.Normal('eta', 0, 1, shape=J)
    mu = pm.Normal('mu', 0, sd=1e6)
    tau = pm.HalfCauchy('tau', 5)
    theta = pm.Deterministic('theta', mu + tau*eta)
    obs = pm.Normal('obs', theta, sd=sigma, observed=y)
    trace_h = pm.sample(1000)

Auto-assigning NUTS sampler...
Initializing NUTS using ADVI...
Average Loss = 43.29: 11%|█▏ | 22935/200000 [00:02<00:14, 12516.51it/s]
Convergence archived at 23000
Interrupted at 23,000 [11%]: Average Loss = 44.107
94%|█████████▍| 1412/1500 [00:01<00:00, 1234.30it/s]/Users/fonnescj/Repos/pymc3/pymc3/step_methods/hmc/nuts.py:456: UserWarning: Chain 0 contains 1 diverging samples after tuning. If increasing target_accept does not help try to reparameterize.
% (self._chain_id, n_diverging))
100%|██████████| 1500/1500 [00:01<00:00, 1162.34it/s]

In [6]:
pm.traceplot(trace_h, varnames=['mu']);

In [7]:
pm.forestplot(trace_h, varnames=['theta']);

## Deviance Information Criterion (DIC)

DIC (Spiegelhalter et al. 2002) is an information-theoretic criterion for estimating predictive accuracy that is analogous to Akaike's Information Criterion (AIC). It is a more Bayesian approach that allows for the modeling of random effects, replacing the maximum likelihood estimate with the posterior mean and using the effective number of parameters to correct for bias.

In [8]:
pooled_dic = pm.dic(trace_p, pooled)
pooled_dic

Out[8]: 90.882480923881175

In [9]:
hierarchical_dic = pm.dic(trace_h, hierarchical)
hierarchical_dic

Out[9]: 124.2888830531669

## Widely-applicable Information Criterion (WAIC)

WAIC (Watanabe 2010) is a fully Bayesian criterion for estimating out-of-sample expectation, using the computed log pointwise posterior predictive density (LPPD) and correcting for the effective number of parameters to adjust for overfitting.

In [10]:
pooled_waic = pm.waic(trace_p, pooled)
pooled_waic.WAIC

Out[10]: 61.136455281870752

In [11]:
hierarchical_waic = pm.waic(trace_h, hierarchical)
hierarchical_waic.WAIC

Out[11]: 61.350518836043364

PyMC3 includes two convenience functions to help compare WAIC for different models. The first of these functions is compare, which computes WAIC (or LOO) from a set of traces and models and returns a DataFrame.

In [12]:
df_comp_WAIC = pm.compare((trace_h, trace_p), (hierarchical, pooled))
df_comp_WAIC

Out[12]:
   WAIC     pWAIC     dWAIC     weight    SE       dSE        warning
1  61.1365  0.676001  0         0.526732  2.20685  0          0
0  61.3505  0.985125  0.214064  0.473268  1.94631  0.0389335  0

The DataFrame has several columns, so let's check the meaning of each one:

1. The first column clearly contains the values of WAIC. The DataFrame is always sorted from lowest to highest WAIC. The index reflects the order in which the models are passed to this function.
2. The second column is the estimated effective number of parameters. In general, models with more parameters will be more flexible to fit data, and at the same time this flexibility could also lead to overfitting. Thus we can interpret pWAIC as a penalization term; intuitively, we can also interpret it as a measure of how flexible each model is in fitting the data.
3. The third column is the relative difference between the value of WAIC for the top-ranked model and the value of WAIC for each model. For this reason we will always get a value of 0 for the first model.
4. Sometimes when comparing models, we do not want to select the "best" model; instead we want to perform predictions by averaging over all the models (or at least several of them). Ideally we would like to perform a weighted average, giving more weight to the model that seems to explain/predict the data better. There are many approaches to this task; one of them is to use Akaike weights based on the values of WAIC for each model. These weights can be loosely interpreted as the probability of each model (among the compared models) given the data. One caveat of this approach is that the weights are based on point estimates of WAIC (i.e. the uncertainty is ignored).
5. The fifth column records the standard error of the WAIC computations. The standard error can be useful to assess the uncertainty of the WAIC estimates. Nevertheless, caution needs to be taken because the estimation of the standard error assumes normality and hence could be problematic when the sample size is low.
6. In the same way that we can compute the standard error for each value of WAIC, we can compute the standard error of the differences between two values of WAIC. Notice that both quantities are not necessarily the same; the reason is that the uncertainty about WAIC is correlated between models. This quantity is always 0 for the top-ranked model.
7. Finally we have the last column, named "warning".
A value of 1 indicates that the computation of WAIC may not be reliable; this warning is based on an empirically determined cutoff value and needs to be interpreted with caution. For more details you can read this paper.

The second convenience function takes the output of compare and produces a summary plot in the style of the one used in the book Statistical Rethinking by Richard McElreath (check also this port of the examples in the book to PyMC3).

In [14]:
pm.compareplot(df_comp_WAIC);

The empty circle represents the values of WAIC, and the black error bars associated with them are the values of the standard deviation of WAIC. The value of the lowest WAIC is also indicated with a vertical dashed grey line to ease comparison with other WAIC values. The filled black dots are the in-sample deviance of each model, which for WAIC is the corresponding WAIC value minus 2 pWAIC. For all models except the top-ranked one, we also get a triangle indicating the difference in WAIC between that model and the top model, and a grey error bar indicating the standard error of the differences between the top-ranked WAIC and the WAIC for each model.

## Leave-one-out Cross-validation (LOO)

LOO cross-validation is an estimate of the out-of-sample predictive fit. In cross-validation, the data are repeatedly partitioned into training and holdout sets, iteratively fitting the model with the former and evaluating the fit with the holdout data. Vehtari et al. (2016) introduced an efficient computation of LOO from MCMC samples, which is corrected using Pareto-smoothed importance sampling (PSIS) to provide an estimate of point-wise out-of-sample prediction accuracy.

In [15]:
pooled_loo = pm.loo(trace_p, pooled)
pooled_loo.LOO

/Users/fonnescj/Repos/pymc3/pymc3/stats.py:255: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.7 for one or more samples.
You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations.

Out[15]: 61.555247474354104

In [16]:
hierarchical_loo = pm.loo(trace_h, hierarchical)
hierarchical_loo.LOO

/Users/fonnescj/Repos/pymc3/pymc3/stats.py:255: UserWarning: Estimated shape parameter of Pareto distribution is greater than 0.7 for one or more samples. You should consider using a more robust model, this is because importance sampling is less likely to work well if the marginal posterior and LOO posterior are very different. This is more likely to happen with a non-robust model and highly influential observations.

Out[16]: 61.413990925656876

We can also use compare with LOO.

In [17]:
df_comp_LOO = pm.compare((trace_h, trace_p), (hierarchical, pooled), ic='LOO')
df_comp_LOO

Out[17]:
   LOO      pLOO      dLOO      weight   SE       dSE        warning
0  61.414   1.01686   0         0.51765  1.95466  0          1
1  61.5552  0.885397  0.141257  0.48235  2.1805   0.0347903  1

The columns hold the equivalent values for LOO; notice that in this example we get two warnings. Also notice that the order of the models is not the same as the one for WAIC. We can also plot the results:

In [18]:
pm.compareplot(df_comp_LOO);

## Interpretation

Though we might expect the hierarchical model to outperform the complete-pooling model, there is little to choose between the models in this case, given that both give very similar values of the information criteria. This is more clearly appreciated when we take into account the uncertainty (in terms of standard errors) of the WAIC and LOO estimates.
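The Akaike-style weights described in point 4 above can be reproduced by hand from the dWAIC column (a sketch of the standard formula w_i = exp(-dWAIC_i/2) / sum_j exp(-dWAIC_j/2), using the dWAIC values from the WAIC comparison table):

```python
import numpy as np

# dWAIC values from the WAIC comparison table above.
dwaic = np.array([0.0, 0.214064])

raw = np.exp(-0.5 * dwaic)
weights = raw / raw.sum()

print(weights)  # approximately [0.5267, 0.4733], matching the weight column
```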
https://physics.stackexchange.com/questions/300348/rocket-thrust-equation
# Rocket Thrust Equation

I am doing the math required to find the thrust of a rocket engine (more specifically, working backwards: I have the required thrust and I am designing the engine). Looking over the equations, I found this one on the NASA website: http://spaceflightsystems.grc.nasa.gov/education/rocket/rktthsum.html

$$F={\dot {m}}*Ve+(Pe-Po)*Ae$$

In this equation:

${\dot {m}}$ = mass flow rate
$Ve$ = exit velocity
$Pe$ = exit pressure
$Po$ = free-stream (ambient) pressure
$Ae$ = exit area

It seems to me that if the ambient pressure and the exit pressure are equal, the equation would reduce to $F={\dot {m}}*Ve$. Would this be correct, since $(Pe-Po)*Ae$ is then zero?

• Welcome on Physics SE :) Are you asking whether $a-b=0$ for $a=b$, or is your question really whether the equation you found is correct? For the latter, it might be of advantage to edit your post to (i) include a reference and (ii) explain the variables you are using. – Sanya Dec 22 '16 at 21:29

Yes, that is correct, with $V_e$ the velocity of the gases at the nozzle exit.
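A quick numeric illustration (hypothetical numbers, not from the NASA page): when the nozzle is perfectly expanded (Pe = Po), the pressure term drops out and only the momentum term is left.

```python
def thrust(mdot, ve, pe, p0, ae):
    """Rocket thrust: momentum term plus pressure-area term."""
    return mdot * ve + (pe - p0) * ae

# Perfectly expanded nozzle: exit pressure equals ambient pressure,
# so F = mdot * Ve exactly.
f_matched = thrust(mdot=100.0, ve=3000.0, pe=101325.0, p0=101325.0, ae=0.5)

# Under-expanded nozzle: exit pressure above ambient adds extra thrust.
f_under = thrust(mdot=100.0, ve=3000.0, pe=151325.0, p0=101325.0, ae=0.5)

print(f_matched, f_under)  # 300000.0 325000.0
```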
https://dash.harvard.edu/handle/1/25426537?show=full
dc.contributor.author Schlein, Benjamin
dc.contributor.author Yau, Horng-Tzer
dc.contributor.author Yin, Jun
dc.date.accessioned 2016-02-17T21:32:42Z
dc.date.issued 2012
dc.identifier.citation Erdos, László, Benjamin Schlein, Horng-Tzer Yau, and Jun Yin. 2012. “The Local Relaxation Flow Approach to Universality of the Local Statistics for Random Matrices.” Annales de l’Institut Henri Poincaré, Probabilités et Statistiques 48 (1) (February): 1–46. doi:10.1214/10-aihp388. en_US
dc.identifier.issn 0246-0203 en_US
dc.identifier.uri http://nrs.harvard.edu/urn-3:HUL.InstRepos:25426537
dc.description.abstract We present a generalization of the method of the local relaxation flow to establish the universality of local spectral statistics of a broad class of large random matrices. We show that the local distribution of the eigenvalues coincides with the local statistics of the corresponding Gaussian ensemble provided the distribution of the individual matrix element is smooth and the eigenvalues $\{x_j\}_{j=1}^{N}$ are close to their classical locations $\{\gamma_j\}_{j=1}^{N}$ determined by the limiting density of eigenvalues. Under the scaling where the typical distance between neighboring eigenvalues is of order $1/N$, the necessary a priori estimate on the location of eigenvalues requires only that $\mathbb{E}|x_j - \gamma_j|^2 \leq N^{-1-\epsilon}$ on average. This information can be obtained by well established methods for various matrix ensembles. We demonstrate the method by proving local spectral universality for Wishart matrices. en_US
dc.description.sponsorship Mathematics en_US
dc.language.iso en_US en_US
dc.publisher Institute of Mathematical Statistics en_US
dc.relation.isversionof doi://10.1214/10-AIHP388 en_US
dc.relation.hasversion http://arxiv.org/abs/0911.3687v5 en_US
dash.license OAP
dc.subject random matrix en_US
dc.subject sample covariance matrix en_US
dc.subject Wishart matrix en_US
dc.subject Wigner–Dyson statistics en_US
dc.title The local relaxation flow approach to universality of the local statistics for random matrices en_US
dc.type Journal Article en_US
dc.description.version Accepted Manuscript en_US
dc.relation.journal Ann. Inst. H. Poincaré Probab. Statist. en_US
dash.depositing.author Yau, Horng-Tzer
dc.date.available 2016-02-17T21:32:42Z
dc.identifier.doi 10.1214/10-AIHP388
dash.contributor.affiliated Yau, Horng-Tzer
https://www.ccagml.com/?p=230
A self-dividing number is a number that is divisible by every digit it contains. For example, 128 is a self-dividing number because 128 % 1 == 0, 128 % 2 == 0, and 128 % 8 == 0. Also, a self-dividing number is not allowed to contain the digit zero. Given a lower and upper number bound, output a list of every possible self-dividing number, including the bounds if possible.

Example 1:
Input: left = 1, right = 22
Output: [1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 15, 22]

Note:
• The boundaries of each input argument are 1 <= left <= right <= 10000.

Python

```python
class Solution(object):
    def selfDividingNumbers(self, left, right):
        """
        :type left: int
        :type right: int
        :rtype: List[int]
        """
        return [i for i in range(left, right + 1) if self.isDN(i)]

    def isDN(self, num):
        # A number containing the digit 0 can never be self-dividing.
        if '0' in str(num):
            return False
        # Every digit must divide the number evenly.
        return all(num % int(d) == 0 for d in str(num))
```
https://search.r-project.org/CRAN/refmans/DPpack/html/GaussianMechanism.html
GaussianMechanism {DPpack} — R Documentation

## Gaussian Mechanism

### Description

This function implements the Gaussian mechanism for differential privacy by adding noise to the true value(s) of a function according to specified values of epsilon, delta, and l2-global sensitivity(-ies). Global sensitivity calculated based either on bounded or unbounded differential privacy can be used (Kifer and Machanavajjhala 2011). If true.values is a vector, the provided epsilon and delta are divided such that (epsilon, delta)-level differential privacy is satisfied across all function values. In the case that each element of true.values comes from its own function with different corresponding sensitivities, a vector of sensitivities may be provided. In this case, if desired, the user can specify how to divide epsilon and delta among the function values using alloc.proportions.

### Usage

GaussianMechanism(
  true.values,
  eps,
  delta,
  sensitivities,
  alloc.proportions = NULL
)

### Arguments

true.values: Real number or numeric vector corresponding to the true value(s) of the desired function.

eps: Positive real number defining the epsilon privacy parameter.

delta: Positive real number defining the delta privacy parameter.

sensitivities: Real number or numeric vector corresponding to the l2-global sensitivity(-ies) of the function(s) generating true.values. This value must be of length 1 or of the same length as true.values. If it is of length 1 and true.values is a vector, this indicates that the given sensitivity applies simultaneously to all elements of true.values and that the privacy budget need not be allocated (alloc.proportions is unused in this case). If it is of the same length as true.values, this indicates that each element of true.values comes from its own function with different corresponding sensitivities. In this case, the l2-norm of the provided sensitivities is used to generate the Gaussian noise.

type.DP: String indicating the type of differential privacy desired for the Gaussian mechanism. Can be either 'pDP' for probabilistic DP (Liu 2019) or 'aDP' for approximate DP (Dwork et al. 2006). Note that if 'aDP' is chosen, epsilon must be strictly less than 1.

alloc.proportions: Optional numeric vector giving the allocation proportions of epsilon and delta to the function values in the case of vector-valued sensitivities. For example, if sensitivities is of length two and alloc.proportions = c(.75, .25), then 75% of the privacy budget eps (and 75% of delta) is allocated to the noise computation for the first element of true.values, and the remaining 25% is allocated to the noise computation for the second element of true.values. This ensures (eps, delta)-level privacy across all computations. Input does not need to be normalized, meaning alloc.proportions = c(3,1) produces the same result as the example above.

### Value

Sanitized function values based on the bounded and/or unbounded definitions of differential privacy, sanitized via the Gaussian mechanism.

### References

Dwork C, McSherry F, Nissim K, Smith A (2006). "Calibrating Noise to Sensitivity in Private Data Analysis." In Halevi S, Rabin T (eds.), Theory of Cryptography, 265–284. ISBN 978-3-540-32732-5, doi:10.1007/11681878_14.

Kifer D, Machanavajjhala A (2011). "No Free Lunch in Data Privacy." In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, SIGMOD '11, 193–204. ISBN 9781450306614, doi:10.1145/1989323.1989345.

Liu F (2019). "Generalized Gaussian Mechanism for Differential Privacy." IEEE Transactions on Knowledge and Data Engineering, 31(4), 747–756. doi:10.1109/TKDE.2018.2845388.

Dwork C, Kenthapadi K, McSherry F, Mironov I, Naor M (2006). "Our Data, Ourselves: Privacy Via Distributed Noise Generation." In Vaudenay S (ed.), Advances in Cryptology - EUROCRYPT 2006, 486–503. ISBN 978-3-540-34547-3, doi:10.1007/11761679_29.
### Examples

# Simulate dataset
n <- 100
c0 <- 5   # Lower bound
c1 <- 10  # Upper bound
D1 <- stats::runif(n, c0, c1)

# Privacy budget
epsilon <- 0.9 # eps must be in (0, 1) for approximate differential privacy
delta <- 0.01
sensitivity <- (c1 - c0)/n

# Approximate differential privacy
private.mean.approx <- GaussianMechanism(mean(D1), epsilon, delta, sensitivity)
private.mean.approx

# Probabilistic differential privacy
private.mean.prob <- GaussianMechanism(mean(D1), epsilon, delta, sensitivity,
                                       type.DP = 'pDP')
private.mean.prob

# Simulate second dataset
d0 <- 3  # Lower bound
d1 <- 6  # Upper bound
D2 <- stats::runif(n, d0, d1)
D <- matrix(c(D1, D2), ncol = 2)
sensitivities <- c((c1 - c0)/n, (d1 - d0)/n)
epsilon <- 0.9 # Total privacy budget for all means
delta <- 0.01

# Here, the l2-norm of the sensitivities is used to generate the Gaussian
# noise. This is essentially the same as allocating epsilon proportional to
# the corresponding sensitivity. The results satisfy (0.9, 0.01)-approximate
# differential privacy.
private.means <- GaussianMechanism(apply(D, 2, mean), epsilon, delta,
                                   sensitivities)
private.means

# Here, the privacy budget is explicitly split so that 75% is given to the
# first vector element and 25% is given to the second.
private.means <- GaussianMechanism(apply(D, 2, mean), epsilon, delta,
                                   sensitivities,
                                   alloc.proportions = c(0.75, 0.25))
private.means

[Package DPpack version 0.0.11 Index]
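For readers outside R, the noise calibration behind an (eps, delta) Gaussian mechanism can be sketched in a few lines of Python. This uses the classic textbook calibration sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps from Dwork and Roth, which requires eps < 1; it is an illustrative assumption of the standard analysis, not necessarily the exact noise scale DPpack computes internally:

```python
import math
import random

def gaussian_sigma(eps, delta, sensitivity):
    # Classic (eps, delta)-DP calibration (Dwork & Roth 2014):
    # sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps, valid for 0 < eps < 1.
    if not (0 < eps < 1):
        raise ValueError("classic Gaussian mechanism analysis requires 0 < eps < 1")
    return math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / eps

def gaussian_mechanism(true_value, eps, delta, sensitivity):
    # Release the true value plus calibrated Gaussian noise.
    return true_value + random.gauss(0, gaussian_sigma(eps, delta, sensitivity))
```

With eps = 0.9, delta = 0.01, and sensitivity (c1 - c0)/n = 0.05 as in the example above, the noise standard deviation works out to roughly 0.173.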
http://2019.edocconsultoria.com.br/assassin-s-lsodf/339f57-linear-pair-theorem-equation
The Hurwitz Matrix Equations Lemma 2.1. 3. , C.F. The goal is to solve this pair of equations for ∈ 1. and ∈ ⊥ as functions of . Exercise. = = = = = = = = M at h Com poser 1. When two linear equations having same variables in both the equation is said to be pair of linear equations in two variables. ; Use angle pair relationships to write and solve equations Apply the Linear Pair Postulate and the Vertical Angles Theorem. 17: ch. Linear Pair Theorem: If two angles are a linear pair (consecutive angles with a shared wall that create a straight line), then their measures will add to equal 180° Example: Given: Prove: ∠ + ∠ =180° Reasons ∠ & ∠ are a linear pair Given ∠ + ∠ =180° Linear Pair Theorem Example 1: Solve the pair of linear equation by using graph method x+3y=6 and 2x-3y=12. 1. com o 3x 90 To sketch the graph of pair of linear equations in two variables, we draw two lines representing the equations. Obtain a table of ordered pairs (x, y), which satisfy the given equation. De Moivre’s theorem. ; Complementary Angles Two angles are complementary angles if the sum of their measures is . The matrix can be considered as a function, a linear transformation , which maps an N-D vector in the domain of the function into an M-D vector in the codomain of the function. Let L(y) = 0 be a homogeneous linear second order differential equation and let y1 and y2 be two solutions. Also notice that the Jacobian of the right side with respect to , when evaluated at =0and ( )=(0 0),equalstheidentity and hence is invertible. Ratio of volume of octahedron to sphere; Sitting on the Fence 1. ?q�S��)���I��&� ���fg'��-�Bo �����I��6eɧ~�8�Kd��t�z,��O�L�C��&+6��/�Tl�K6�U��am�w���Ÿsqm�I�K����7��m2ؓB�Z��6�є��_߼qK�����A�����������S0��5�dX�ahtB�R=]5�D쫿J��&aW������}l����8�>���@=#d���P�x�3�ܽ+!1�.XM�K Maths solutions for class 10 chapter 4 linear equations in two variables. The pair of linear equations 8 x − 5 y = 7 and 5 x − 8 y = − 7, have: View solution. 
2) and the matrix linear unilateral equations + = , (1. Then c1y1 + c2y2 is also a solution for any pair or constants c1 and c2. In such a method, the condition for consistency of pair of linear equation in two variables must be checked, which are as follows: If $$\frac{a_1}{a_2}$$ ≠ $$\frac{b_1}{b_2}$$, then we get a unique solution and the pair of linear equations in two variables are consistent. s�ƒf؅� 7��yV�yh�0x��\�gE^���.�T���(H����ݫJZ[���z�b�v8�,���H��q��H�G&��c��j���L*����8������Cg�? 5 ht t p: / / www. Plot the graphs for the two equations on the graph paper. If (1) has an integral solution then it has an infinite number of integral solutions. Explain why the linear Diophantine equation $2x-101y=82$ is solvable or not solvable. 5 ht t p: / / www. Let a, b, and c ∈ Z and set d = gcd(a,b). 10 or linear recurrence relation sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence.The polynomial's linearity means that each of its terms has degree 0 or 1. Example-Problem Pair. Author: Kevin Tobe. Nature of the roots of a quadratic equations. Since we have two constants it makes sense, hopefully, that we will need two equations, or conditions, to find them. Cross-multiplication Method of finding solution of a pair of Linear Equations. Pair of Linear Equations in Two Variables Class 10 Extra Questions Very Short Answer Type. Inter maths solutions You can also see the solutions for senior inter. Linear Algebra (6) Linear Approximation (2) Linear Equations (3) Linear Functions (1) Linear Measure (1) Linear Pair Angles Theorem (2) Locus of Points (1) Logarithmic Differentiation (2) Logarithmic Equations (1) Logarithms (4) Maclaurin Series (1) Mass Percent Composition from Chemical Formulas (2) Math Puzzles (2) Math Tricks (6) Matrices (5) 3. A theorem corresponding to Theorem 4.8 is given as follows. 1. Use linear pair theorem to find the value of x. 
In such a method, the condition for consistency of pair of linear equation in two variables must be checked, which are as follows: If $$\frac{a_1}{a_2}$$ ≠ $$\frac{b_1}{b_2}$$, then we get a unique solution and the pair of linear equations in two variables are consistent. Problems on 2nd Order Linear Homogeneous Equations ... Use the Existence – uniqueness theorem to prove that if any pair of solutions, y1 and y2, to the DE (∗) vanish at the same point in the interval α < x < β , then they cannot form a fundamental set of solutions on this interval. We get 20 = 16 + 4 = 20, (1) is verified. 5 ht t p: / / www. Write this statement as a linear equation in two variables. If $$a$$ divides $$b$$, then the equation $$ax = b$$ has exactly one solution that is an integer. 2. Show all your steps. Theorem 2: Assume that the linear, mth-order di erential operator L is not singular on [a,b]. 1. Question 1. Let (1) be an oscillatory equation and let y 1,y 2 be a pair of linearly independent solutions normalized by the unit Wronskian |w(y 1,y 2)| = 1. Solving quadratic equations by completing square. com o 136 4x+12 M at h Com poser 1. 5 ht t p: / / www. The Definition of Linear Pair states that both ∠ABC and ∠CBD are equal to 180 degrees. 2. 2. Consider the differential equation. Apply multivariable calculus ideas to an important pair of nonlinear equations. where and are constants, is also a solution. In general, solution of the non-homogeneous linear Diophantine equation is equal to the integer solution of its associated homogeneous linear equation plus any particular integer solution of the non-homogeneous linear equation, what is given in the form of a theorem. com o 136 4x+12 M at h Com poser 1. Take the pair of linear equations in two variables of the form a 1 x + b 1 y + c 1 = 0 a 2 x + b 2 y + c 2 = 0 e.g. com 2x+5 65 o M at h Com poser 1. 3. Exercise 4.3. 3. 
a�s�^(-�la����fa��P�j���C�\��4h�],�P3�]�a�G Recall that for a first order linear differential equation $y' + p(t) y = g (t) \;\;\; y(t_0) = y_0 \nonumber$ if $$p(t)$$ and $$g(t)$$ are continuous on $$[a,b]$$, then there exists a unique solution on the interval $$[a,b]$$. 10 or linear recurrence relation sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence.The polynomial's linearity means that each of its terms has degree 0 or 1. Are all linear pairs supplementary angles? Pair of Linear Equations in Two Variables Class 10 Important Questions Short Answer-1 (2 Marks) Question 5. Axioms. The two lines AB and CD intersect at the point (20, 16), So, x = 20 and y = 16 is the required solution of the pair of linear equations i.e. We can ask the same questions of second order linear differential equations. ; Use angle pair relationships to write and solve equations Apply the Linear Pair Postulate and the Vertical Angles Theorem. 2 Systems of Linear Equations: Algebra. Linear Diophantine Equations Theorem 1. Use linear algebra to figure out the nature of equilibria. 4. This method is known as the Gaussian elimination method. This means that the sum of the angles of a linear pair is always 180 degrees. In general, solution of the non-homogeneous linear Diophantine equation is equal to the integer solution of its associated homogeneous linear equation plus any particular integer solution of the non-homogeneous linear equation, what is given in the form of a theorem. A linear pair of angles is always supplementary. If $$a$$ does not divide $$b$$, then the equation $$ax = b$$ has no solution that is an integer. a 2 x + b 2 y + c 2 =0, x and y can be calculated as. This method for solving a pair of simultaneous linear equations reduces one equation to one that has only a single variable. m at hcom poser. Proof. New Resources. 
3) where , , and are matrices of appropriate size over a certain field ℱ or over a ring ℛ, , are unknown matrices. com o 2x 50 M at h Com poser 1. com 7x-8 76 o M at h Com poser 1. com o 5x 75 M at h Com poser 1. Solution: Let the cost of a ball pen and fountain pen be x and y respectively. Notice that equation (9b) is satisfied by =0when ( )=(0 0). If possible find all solutions. 5 ht t p: / / www. Coordinates of every point onthis line are the solution. Apart from the stuff given in this section, if you need any other stuff in math, please use our google custom search here. m at hcom poser. x (t), y (t) of one independent variable . = = = = = = = = M at h Com poser 1. Let V be a nite-dimensional vector space over C. If there is a pair of invertible anti-commuting linear operators on V, then dimV is even. Example: Show graphically that the system of equations 2x + 3y = 10, 4x + 6y = 12 has no solution. Sum and product of the roots of a quadratic equations Algebraic identities So, you're equation should be (3x - 6) + (3x - 6) = 180. We write: As the ray OA lies on the line segment CD, angles ∠AOD and ∠AOC form a linear pair. Once this has been done, the solution is the same as that for when one line was vertical or parallel. Show all your steps. Does the linear equation $$-3x = 20$$ have a solution that is an integer? 1. The equation aX +bY = c (1) has an integral solution (X,Y) = (x,y) ∈ Z2 if and only if d|c. Find the value of c for which the pair of equations cx – y = 2 and 6x – 2y = 3 will have infinitely many solutions. … Theorem 4.10 The time invariant linear discrete system (4.2) is asymptoti-cally stable if and only if the pair à Ï­Ü®ßCá is observable, ÕâÔÚÕ Ð ã Ø, and the algebraic Lyapunov equation (4.30) has a unique positive definite solution. 1. The solution to a system of linear equations represents all of the points that satisfy all of the equations in the system simultaneously. 
In mathematics and in particular dynamical systems, a linear difference equation: ch. Ratio of volume of octahedron to sphere; Sitting on the Fence ; Trigonometric graphs from circular motion; Exploring quadratic forms #2; A more elegant form of representing Euler's equation; Discover Resources. x = (b 1 c 2 −b 2 c 1)/(a 1 b 2 −a 2 b 1) y = (c 1 a 2 −c 2 a 1)/(a 1 b 2 −a 2 b 1) Solving Linear Equations Equations reducible to a pair … Solve the linear congruence $5x\equiv 15 \pmod{35}$ by solving a linear Diophantine equation. The equation ax+ by = c has integer solutions if and only if gcd(a;b) divides. 5 ht t p: / / www. A linear pair creates a 180 degree angle. A linear pair is created using two adjacent, supplementary angles. The lines of two equations are coincident. In such a case, the pair of linear equations … Solving one step equations. �"��"#���C���&�[L��"�K;��&��X8����}��t2ċ&��C13��7�o�����xm�X|q��)�6 �4�,��}�+�]0)�+3�O���Fc1�\Y�O���DCSb. To learn more about this topic, review the accompanying lesson titled Linear Pair: Definition, Theorem & Example. 1. A pair of simultaneous first order homogeneous linear ordinary differential equations for two functions . com o 4x 120 M at h Com poser 1. Prove the following theorem: Theorem 8.18. �P�%$Qւ�쬏ey���& Example 2. If a = 0, then the equation is linear, not quadratic, as there is no ax² term. This method is known as the Gaussian elimination method. Definition: linear Diophantine equation in one variable If a and b are integers with a ≠ 0, then the equation ax = b is a linear Diophantine equation in one variable. The such equations are the matrix linear bilateral equations with one and two variables + = , (1. The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. Included with Brilliant Premium Linearization. Ratio – Fractions and Linear Equations; 5. 
m at hcom poser . 5 ht t p: / / www. This method for solving a pair of simultaneous linear equations reduces one equation to one that has only a single variable. we get 20 + 16 = 36 36 = 36, (2) is verified. Verifying the Superposition Principle. If (1) has an integral solution then it has an infinite number of integral solutions. Let $$a, b \in \mathbb{Z}$$ with $$a \ne 0$$. ... Pythagorean theorem. Let v(x) = y2 1 (x) + y 2 2(x) and suppose that lim x→∞ 1. Find at least three such pairs for each equation. 3. Solution: We will plot the graph of the lines individually and then try to find out the intersection point. = = = = = = = = M at h Com poser 1. Included with Brilliant Premium The Hartman-Grobman Theorem. Exercise. Chapter : Linear Equation In Two Variable Examples of Solutions of Pair of Equations Example: Show graphically that the system of equations x – 4y + 14 = 0 ; 3x + 2y – 14 = … Question 2. 5 ht t p: / / www. m at hcom poser. Solving quadratic equations by factoring. The next question that we can ask is how to find the constants $$c_{1}$$ and $$c_{2}$$. m at hcom poser . Putting x = 20 and y = 16 in (2). <> In mathematics and in particular dynamical systems, a linear difference equation: ch. Let a, b, and c ∈ Z and set d = gcd(a,b). So, if we now make the assumption that we are dealing with a linear, second order homogeneous differential equation, we now know that $$\eqref{eq:eq3}$$ will be its general solution. The following cases are possible: i) If both the lines intersect at a point, then there exists a unique solution to the pair of linear equations. Solution Sets; Linear Independence; Subspaces; Basis and Dimension; Bases as Coordinate Systems; The Rank Theorem; 4 Linear Transformations and Matrix Algebra. the Cauchy–Euler equation (q(x) = γ2/x2), we now present a theorem which characterizes the pair y 1,y 2 by a condition on v0: Theorem 1. m at hcom poser. Axiom 1: If a ray stands on a line then the adjacent angles form a linear pair of angles. 
fprintf(' \n Let (u0, v0) be a solution pair to the equation au+mv=gcd(a,m) \n%d u + %d v = %d ', a, m, gcd_of_a_and_m); fprintf( ' \n u0 = %d v0 = %d\n ' , u0, v0); % Multiplying the solution by c/gcd(a,m) because we need the solutions to ax + my = c Quadratic equations Exercise 3(a) Exercise 3(b) Exercise 3(c) 4. Example 2. Exercise. m at hcom poser. 4. In the figure above, all the line segments pass through the point O as shown. Reason The system of equations 3 x − 5 y = 9 and 6 x − 1 0 y = 8 has a unique solution. The linear pair theorem is widely used in geometry. 3. Solve the linear congruence$5x\equiv 15 \pmod{35}$by solving a linear Diophantine equation. 1. According to the question the following equation can be formed, x = y/2 − 5. or x = (y – 10)/2. The fundamental theorem of linear algebra concerns the following four subspaces associated with any matrix with rank (i.e., has independent columns and rows). m at hcom poser. The proof of this superposition principle theorem is left as an exercise. Linear Pair Theorem. 1. Alternative versions. Simultaneous Linear Equations The Elimination Method. feel free to create and share an alternate version that worked well for your class following the guidance here Using the terminology of linear algebra, we know that L is a linear transformation of the vector space of differentiable functions into itself. 1. or 2x = y – 10. or 2x – y + 10 = 0. 5 ht t p: / / www. 2 Linear Diophantine Equations Theorem 1 Let a;b;c be integers. m at hcom poser . Writing Equations From Ordered Pairs Analyzing Functions and Graphs Functions Study Guide Pythagorean Theorem Pythagorean Theorem Videos Simplifying Expressions Linear Equations Linear Equations Vocabulary Simplifying Expression with Distribution One and Two-Step Equations Multi-Step Equations A linear pair creates a line. 1) + = , (1. 5 ht t p: / / www. Stability Analysis for Non-linear Ordinary Differential Equations . 
Student Name: _____ Score: Free Math Worksheets @ http://www.mathworksheets4kids.com 17: ch. Systems of Linear Equations; Row reduction; Parametric Form; Matrix Equations; 3 Solution Sets and Subspaces. Learning Objectives Define complementary angles, supplementary angles, adjacent angles, linear pairs, and vertical angles. Since Land L0have nonzero Learning Objectives Define complementary angles, supplementary angles, adjacent angles, linear pairs, and vertical angles. Explain why the linear Diophantine equation$2x-101y=82$is solvable or not solvable. Superposition Principle. q1 is answered by what's called the superposition. 1. New Resources. may be re-written as a linked pair of first order homogeneous ordinary differential equations, by introducing a second dependent variable: dx y dt dy qx py dt and may also be represented in matrix form 2) and the matrix linear unilateral equations + = , (1. We write: In algebra, a quadratic equation (from the Latin quadratus for "square") is any equation that can be rearranged in standard form as ax²+bx+c=0 where x represents an unknown, and a, b, and c represent known numbers, where a ≠ 0. t, dx x ax by dt dy y cx dy dt = = + = = + may be represented by the matrix equation . The such equations are the matrix linear bilateral equations with one and two variables + = , (1. Downloadable version. 3. The equation aX +bY = c (1) has an integral solution (X,Y) = (x,y) ∈ Z2 if and only if d|c. If 2 pairs of imaginary roots are equal i.e. Exercise. We state this fact as the following theorem. (۹Z���|3�o�DI�_5���/��ϏP�hS]�]rʿ��[~���z6���.���T�s�����ū>-��_=�����I�_�|�G�#��IO}6�?�ڸ+��w�<=��lJ�'/B�L٤t��Ӽ>�ѿkͳW�΄Ϟo���ch��:4��+FM���3Z���t>����wi���9B~�Tp��1 �B�;PYE><5�X@����Pg\�?_��� m at hcom poser. Suppose L;L0: V !V are linear, invertible, and LL0= L0L. Hence, the given equations are consistent with infinitely many solutions. %�쏢 If possible find all solutions. I'll just quote to you. Solving quadratic equations by quadratic formula. 
Assertion If the system of equations 2 x + 3 y = 7 and 2 a x + (a + b) y = 2 8 has infinitely many solutions, then 2 a − b = 0. Expand using binomial theorem up to nth degree as (n+1)th derivative of is zero 3. Use linear pair theorem to find the value of x. 1. %PDF-1.4 12.Solve in the nonnegative integers the equation 2x 1 = xy. If and are solutions to a linear homogeneous differential equation, then the function. com 2x+5 65 o M at h Com poser 1. \angle 1 … Prove that \measuredangle ABC + \measuredangle ABD = 180^o . General form of linear equation in two variables is ax + by + c = 0. fprintf(' \n Let (u0, v0) be a solution pair to the equation au+mv=gcd(a,m) \n%d u + %d v = %d ', a, m, gcd_of_a_and_m); fprintf( ' \n u0 = %d v0 = %d\n ' , u0, v0); % Multiplying the solution by c/gcd(a,m) because we need the solutions to ax + my = c Class 10 NCERT Solutions - Chapter 3 Pair of Linear Equations in Two Variables - Exercise 3.3; Class 10 RD Sharma Solutions - Chapter 1 Real Numbers - Exercise 1.4; Class 10 NCERT Solutions - Chapter 2 Polynomials - Exercise 2.2; Class 10 NCERT Solutions- Chapter 13 Surface Areas And Volumes … Find out why linearization works so well by borrowing ideas from topology. \angle ABC \text{ and } \angle ABD are a linear pair. This lesson covers the following objectives: Understand what constitutes a linear pair 5 ht t p: / / www. Intelligent Practice. 3 Find whether the following pair of linear equations is consistent or inconsistent: (2015) 3x + 2y = 8 6x – 4y = 9 Solution: Therefore, given pair of linear equations is … Class 10 NCERT Solutions - Chapter 3 Pair of Linear Equations in Two Variables - Exercise 3.3; Class 10 RD Sharma Solutions - Chapter 1 Real Numbers - Exercise 1.4; Class 10 NCERT Solutions - Chapter 2 Polynomials - Exercise 2.2; Class 10 NCERT Solutions- Chapter 13 Surface Areas And Volumes … In the question, this tells you that m∠ABC and m∠CBD = (3x - 6). 5 ht t p: / / www. 1. 1. 
This is seen graphically as the intersecting or overlapping points on the graph and can be verified algebraically by confirming the coordinate point(s) satisfy the equations when they are substituted in. Linear Diophantine Equations Theorem 1. Moreover, if at least one of a … !��F ��[�E�3�5b�w�,���%DD�D�x��� ر ~~A|�. Use linear pair theorem to find the value of x. View solution. Simultaneous Linear Equations The Elimination Method. If the lines given by 3x + 2ky = 2 and 2x + 5y + 1 = 0 are parallel, then find value of k. Solution: Since the given lines are parallel. Linear Pair Theorem. Solving linear equations using cross multiplication method. m at hcom poser . Similarly, ∠QOD and ∠POD form a linear pair and so on. The solution of a linear homogeneous equation is a complementary function, denoted here … ... how to solve pair of linear equations by using elimination method. length of the garden is 20 m and width of the garden is 16 m. Verification: Putting x = 20 and y = 16 in (1). ; Complementary Angles Two angles are complementary angles if the sum of their measures is . Note: Observe the solutions and try them in your own methods. com o 45 5x+25 M at h Com poser 1. a 1 x + b 1 y + c 1 =0. This is called the linear pair theorem. Let's attack there for problem one first. A linear pair is made using three or more angles. com o 45 5x+25 M at h Com poser 1. stream d���{SIo{d[\�[���E��\�?_��E}z����NA30��/P�7����6ü*���+�E���)L}6�t�g�r��� ��6�0;��h GK�R/�D0^�_��x����N�.��,��OA���r�Y�����d�Fw�4��3��x&��]�Ɲ����)�|Z�I|�@�8������l� ��X�6䴍Pl2u���7߸%hsp�p�k����a��w�u����"0�Y�a�t�b=}3��K�W �L�������P:4$߂���:^b�Z]�� ʋ��Q�x�=�҃�1���L��j�p7�,�Zz����.��ʻ9���b���+k���q�H04%Ƴ,r|K�F�^wF�T��]+g� #Bq��zf >�(����i�� =�ۛ] � �C?�dx �\�;S���u�:�zJ*�3��C;��� Complex numbers. Explain. This is a harder question to answer, but that should make you happy because that means it depends upon a theorem which I'm not going to prove. For the pair of linear equations. 1. Answers. 
The Linear Pair Theorem is widely used in geometry. A linear pair of angles is formed when two adjacent angles are formed by two intersecting lines: if a ray stands on a line, the two adjacent angles are supplementary, so their measures sum to 180°. For instance, as the ray OA lies on the line through C and D, the angles ∠AOD and ∠AOC form a linear pair. If ∠ABC and ∠ABD form a linear pair and each measures (3x - 6)°, then (3x - 6) + (3x - 6) = 180; you would then solve to get 6x - 12 = 180, 6x = 192, and x = 32.

To solve a pair of linear equations in two variables graphically, plot at least three ordered pairs (x, y) satisfying each equation, draw the two lines on graph paper, and read off the intersection point; the coordinates of every point on a line satisfy that line's equation. Some pairs have no solution: x - 2y = 5 and 2x - 4y = 6 represent parallel lines, and one can show graphically that the system 2x + 3y = 10, 4x + 6y = 12 likewise has no solution. Once any vertical or parallel lines have been handled, the solution proceeds as in the general case, and elimination reduces one equation to one that has only a single variable.

For integer solutions there is a classical criterion. Theorem 1: let a, b, c ∈ ℤ and set d = gcd(a, b); the equation ax + by = c has integer solutions if and only if d divides c. The Euclidean algorithm gives us a way of solving equations of the form ax + by = c when it is possible; for instance, it decides whether 2x - 101y = 82 is solvable or not. Analogous questions arise for matrix linear unilateral equations, where the coefficients and right-hand side are matrices of appropriate size over a field ℱ or a ring ℛ and the unknowns are matrices. In the same vein, if L, L′ : V → V satisfy LL′ = -L′L, taking the determinant of both sides gives (det L)(det L′) = (-1)^{dim V}(det L′)(det L).

For differential equations, the superposition principle says exactly that: if y₁ and y₂ are solutions of a homogeneous linear differential equation, then c₁y₁ + c₂y₂ is also a solution for any pair of constants c₁, c₂. Since a second-order equation carries two constants, we need two equations, or conditions, to determine them. Pairs of simultaneous first-order homogeneous linear ordinary differential equations for two functions x(t), y(t) arise when studying the nature of equilibria, and one can find out why linearization works so well by borrowing ideas from topology.
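The gcd criterion above is constructive: the extended Euclidean algorithm produces an explicit solution whenever one exists. A minimal sketch (the helper names are our own, not from the original text):

```python
# Extended Euclidean algorithm: a*x + b*y = c has an integer solution
# iff gcd(a, b) divides c, and back-substitution produces one.
def egcd(a, b):
    """Return (g, x, y) with a*x + b*y = g, where g is a gcd of a and b."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    # g = b*x + (a % b)*y and a = (a // b)*b + (a % b), so:
    return (g, y, x - (a // b) * y)

def solve_linear(a, b, c):
    """Return one integer solution (x, y) of a*x + b*y = c, or None."""
    g, x, y = egcd(a, b)
    if c % g != 0:
        return None          # gcd does not divide c: no integer solution
    k = c // g
    return (x * k, y * k)    # scale the gcd identity up to c
```

For example, solve_linear(2, -101, 82) returns an integer pair because gcd(2, 101) = 1 divides 82, while solve_linear(4, 6, 7) returns None because gcd(4, 6) = 2 does not divide 7.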
2021-06-18 06:28:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7521328330039978, "perplexity": 596.7902862775711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487635724.52/warc/CC-MAIN-20210618043356-20210618073356-00576.warc.gz"}
https://www.physicsforums.com/threads/adiabatic-compression.985059/
ewang

Homework Statement: Find the work done by the gas (or on the gas) for an adiabatic process starting at V = 0.024 m^3 and 101325 Pa and ending at 0.0082 m^3 and 607950 Pa. The working gas is helium.
Relevant Equations: Work for adiabatic = area under pV diagram; p1·V1^γ = p2·V2^γ

This is a relatively simple problem, but I'm not getting the right answer. For adiabatic compression, work on gas is positive, since work on gas = ΔE_th and the adiabatic process moves from a lower isotherm to a higher one. Integrating for work gives:

p_i·V_i^γ · (V_f^(1 - γ) - V_i^(1 - γ))/(1 - γ)

I believe this is correct, but when I plug in the numbers, I'm getting a negative number:

101325 Pa · (0.024 m³)^1.67 · ((0.0082 m³)^(1 - 1.67) - (0.024 m³)^(1 - 1.67))/(1 - 1.67) = -3823.6 J

ewang

Nevermind, work is the negative of the integral, oops. I was staring at this for the longest time.
valentin bogatu

The standard thermodynamics sign convention is the Clausius convention: ΔU = Q - W, i.e. the variation of internal energy = heat added to the system - work done by the system. Thus when the gas expands we have positive work.

Lnewqban
Homework Helper
Gold Member

For adiabatic compression, work on gas is positive

Right. Work done BY the gas is negative. The 1st law is usually written ## dU = \delta Q - p dV ## in physics.
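As a cross-check on the arithmetic in the thread (a sketch in our own notation; γ = 1.67 for helium as used above), the work done by the gas along p·V^γ = const is W_by = p_i·V_i^γ·(V_f^(1-γ) - V_i^(1-γ))/(1-γ), which comes out negative for a compression:

```python
# Work done BY the gas in an adiabatic process p V^gamma = const,
# evaluated with the thread's numbers (helium, gamma = 1.67).
def adiabatic_work_by_gas(p_i, v_i, v_f, gamma):
    const = p_i * v_i**gamma  # p V^gamma is conserved along the process
    return const * (v_f**(1 - gamma) - v_i**(1 - gamma)) / (1 - gamma)

w_by = adiabatic_work_by_gas(101325.0, 0.024, 0.0082, 1.67)
# w_by is negative (roughly -3.8e3 J), so the work done ON the gas,
# -w_by, is positive, consistent with compression raising E_th.
```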
2023-02-09 05:50:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8620309233665466, "perplexity": 1930.6774201638473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00576.warc.gz"}
http://astro.u-strasbg.fr/~koppen/papers/mash.html
The Macquarie/AAO/Strasbourg H$\alpha$ Planetary Nebula Catalogue: MASH Q.A.Parker, A.Acker, D.J.Frew, M.Hartley, A.E.J.Peyaud, S.Phillipps, D.Russeil, S.F.Beaulieu, M.Cohen, J.Köppen, J.Marcout, B.Miszalski, D.H.Morgan, R.A.H.Morris, F.Ochsenbein, M.J.Pierce, A.E.Vaughan, ABSTRACT We present the Macquarie/AAO/Strasbourg Halpha Planetary Nebula Catalogue (MASH) of over 900 true, likely, and possible new Galactic Planetary Nebulae (PNe) discovered from the AAO/UKST Halpha survey of the southern Galactic plane. The combination of depth, resolution, uniformity, and areal coverage of the Halpha survey has opened up a hitherto unexplored region of parameter space, permitting the detection of this significant new PN sample. Away from the Galactic bulge the new PNe are typically more evolved, of larger angular extent, of lower surface brightness, and more obscured (i.e. extinguished) than those in most previous surveys. We have also doubled the number of PNe in the Galactic bulge itself, and although most are compact, we have also found more evolved examples. The MASH catalogue represents the culmination of a seven-year programme of identification and confirmatory spectroscopy. A key strength is that the entire sample has been derived from the same, uniform observational data. The 60 per cent increase in known Galactic PNe represents the largest ever incremental sample of such discoveries and will have a significant impact on many aspects of PN research. This is especially important for studies at the faint end of the PN luminosity function, which was previously poorly represented.
2018-01-23 11:34:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7666703462600708, "perplexity": 5996.471405666511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891926.62/warc/CC-MAIN-20180123111826-20180123131826-00241.warc.gz"}
https://blog.zilin.one/21-240-spring-2014/recitation-4b/
# Recitation 4B

Problem 1: Let T: \mathbb{R}^2\to\mathbb{R}^2 be the transformation that rotates each point in \mathbb{R}^2 about the origin through an angle \phi, with counterclockwise rotation for a positive angle. Assuming such a transformation is linear, find the standard matrix A of this transformation.

Solution: It is enough to find T(e_1), T(e_2) where e_1 and e_2 are the columns of the identity matrix. According to the description of T, we have T(e_1)=\begin{pmatrix}\cos\phi \\ \sin\phi\end{pmatrix}, T(e_2)=\begin{pmatrix}-\sin\phi \\ \cos\phi\end{pmatrix}. Therefore the standard matrix A=\begin{pmatrix}T(e_1) & T(e_2)\end{pmatrix}=\begin{pmatrix}\cos\phi & -\sin\phi \\ \sin\phi & \cos\phi\end{pmatrix}.

Problem 2: Let T be the linear transformation whose standard matrix is A=\begin{pmatrix}1 & -4 & 8 & 1\\ 0 & 2 & -1 & 3 \\ 0 & 0 & 0 & 5\end{pmatrix} Does T map \mathbb{R}^4 onto \mathbb{R}^3? Is T a one-to-one mapping?

Solution: Because the standard matrix is already in echelon form with each row containing a pivot position, its columns span \mathbb{R}^3. In other words, T maps \mathbb{R}^4 onto \mathbb{R}^3. On the other hand, since not every column is a pivot column, Tx=0 has a non-trivial solution. In other words, T is not one-to-one.

Problem 3: Let T(x_1, x_2)=(3x_1+x_2,5x_1+7x_2, x_1+3x_2). Show that T is a one-to-one linear transformation. Does T map \mathbb{R}^2 onto \mathbb{R}^3?

Solution: First we shall find the standard matrix of T. Let x_1=1 and x_2=0. We obtain T(e_1)=T \begin{pmatrix}1\\0\end{pmatrix}= \begin{pmatrix}3 \\ 5 \\ 1\end{pmatrix}. Let x_1=0 and x_2=1. We obtain T(e_2)=T \begin{pmatrix}0 \\ 1\end{pmatrix}= \begin{pmatrix}1 \\ 7 \\ 3\end{pmatrix}. Therefore the standard matrix of T is A=\begin{pmatrix}3 & 1 \\ 5 & 7 \\ 1 & 3\end{pmatrix} Since the echelon form of A contains two pivot columns, T is one-to-one. On the other hand, since A has three rows, not every row has a pivot position.
Therefore T does not map \mathbb{R}^2 onto \mathbb{R}^3. In general, a linear transformation never maps \mathbb{R}^m onto \mathbb{R}^n if m < n.
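Both criteria above are rank conditions: T is onto when the standard matrix has a pivot in every row, and one-to-one when every column is a pivot column. A small sketch checking the two matrices by row reduction (pure Python; the helper name is ours):

```python
def matrix_rank(rows):
    """Rank of a matrix via Gaussian elimination (floats, small matrices)."""
    m = [list(map(float, r)) for r in rows]
    rank = 0
    for col in range(len(m[0])):
        # find a pivot row at or below position `rank` with a nonzero entry
        pivot = next((r for r in range(rank, len(m)) if abs(m[r][col]) > 1e-9), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > 1e-9:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Problem 2: rank 3 = number of rows, so onto; 4 columns > 3 pivots, so not one-to-one.
A = [[1, -4, 8, 1], [0, 2, -1, 3], [0, 0, 0, 5]]
# Problem 3: rank 2 = number of columns, so one-to-one; 3 rows > 2 pivots, so not onto.
B = [[3, 1], [5, 7], [1, 3]]
```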
2023-03-21 11:16:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000009536743164, "perplexity": 753.9315829835274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00536.warc.gz"}
http://en.wikipedia.org/wiki/Relative_frequency
# Frequency (statistics)

In statistics the frequency (or absolute frequency) of an event $i$ is the number $n_i$ of times the event occurred in an experiment or study. These frequencies are often graphically represented in histograms. The relative frequency of an event is its absolute frequency normalized by the total number of events: $f_i = \frac{n_i}{N} = \frac{n_i}{\sum_i n_i}.$ The values of $f_i$ for all events $i$ can be plotted to produce a frequency distribution.
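As a small illustration (our own example, not from the article), absolute and relative frequencies of outcomes in a sample:

```python
from collections import Counter

data = ["a", "b", "a", "c", "a", "b"]
abs_freq = Counter(data)                            # n_i: times each event occurred
N = sum(abs_freq.values())                          # total number of observations
rel_freq = {k: n / N for k, n in abs_freq.items()}  # f_i = n_i / N
# The relative frequencies sum to 1 by construction.
```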
2013-05-20 00:26:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9280069470405579, "perplexity": 681.1553861382688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698150793/warc/CC-MAIN-20130516095550-00070-ip-10-60-113-184.ec2.internal.warc.gz"}
http://www2.macaulay2.com/Macaulay2/doc/Macaulay2-1.19/share/doc/Macaulay2/Binomials/html/___Return__P__Chars.html
# ReturnPChars -- return two partial characters

## Description

If cellularBinomialIsPrimary does not return true, it can either return 'false' or two associated primes. If this option is set, then two partial characters of distinct associated primes are returned. If ReturnPrimes is set too, then partial characters will be returned.

i1 : R = QQ[x]
o1 = R
o1 : PolynomialRing
i2 : I = ideal (x^2-1)
             2
o2 = ideal(x  - 1)
o2 : Ideal of R
i3 : cellularBinomialIsPrimary (I,ReturnPChars=>true)
The radical is not prime, as the character is not saturated
o3 = {{{x}, | 1 |, {1}}, {{x}, | 1 |, {-1}}}
o3 : List
2023-02-04 21:07:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6024508476257324, "perplexity": 4182.825525284657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00533.warc.gz"}
https://www.gamedev.net/forums/topic/415533-my-first-text-rpg--c-/
# My First Text RPG [ C++ ]

## Recommended Posts

Hello, I'm starting my first text RPG called Wraith. It's a lame name, but it's just temporary. My question is, does my plan look OK? I don't have any code because I'm planning the game first, but this is all I have so far. Please tell me what I need to work on, what I need, what I don't need, and anything else. Thanks for any help.

Wraith : By Brandon Wall

Class Player
Member Functions -
Player - Initializes the player's basic skills and data members
DisplaySkills - Displays all the player's skills ( magic, speed, strength, logic )
DisplayInventory - Displays all the player's weapons ( knife, axe )
GetAttack - Gets the attack the player wants to use
Attack - Depends on the player's choice of an attack
DisplayHealth - Displays the player's health level
DisplayArmor - Displays the player's armor level
UpgradeSkills - If the user's skill points reach the required level, the player can upgrade certain skills

Data Members -
m_Magic - magic level
m_Armor - armor level
m_Health - player's health
m_Speed - speed of dodging attacks
m_Strength - player's attack level
m_Logic - ways to figure out problems
m_Weapon - player's current weapon
m_Inventory - all of the player's weapons

Class Enemy : Base class
Member Functions -
Enemy - Constructor to initialize the enemy's attributes
Attack - Lowers the player's health/armor points
Die - Announces the death of an enemy
Taunt - Prints a threat to the player
GetHealth - Displays the remaining health points of the enemy

Data Members -
m_Attack
m_Health

Class Boss : Derived from Enemy
Member Functions -
Boss - Constructor to initialize data members
Attack - Lowers the player's health/armor points
Die - Announces the death of the boss
Taunt - Sends the threat to the player
GetHealth - Prints the remaining health points of the boss
SpecialAttack - A super power for the boss

Data Members -
m_Health
m_Attack
m_SpecialAttack
m_Name

##### Share on other sites

There is a LOT more work to be done in terms of planning. While this does appear to be the beginnings of an RPG, there are still things like fighting, items, movement, quests, etc., that need to be done.

##### Share on other sites

Quote: Original post by NUCLEAR RABBIT *** Source Snippet Removed ***

Player and Enemy might as well be derived from the same base class, as ideally they would be used much the same, other than NPC actions being driven by AI and the player by input. Try to avoid having your classes print anything directly (unless that's their only purpose), such as in your DisplayHealth, DisplayArmor, etc. methods. It's better to return a string or stream. I'd suggest another object to represent skills (like Taunt and SpecialAttack), and then your players/enemies can just keep a list of them. Otherwise, you're going to have a lot of work to do whenever you want to add a new skill.

##### Share on other sites

All the following advice depends on how much C++ experience you actually have, but I'm guessing you've made it through the better part of a book or class on the subject (based on your design). First off, you're missing data types from your design. Some of the things you've listed are most likely numbers (ints/floats/whatever you think makes sense); others, however, can't be (reasonably) represented as a number, like, say, m_Inventory. Will this be another class? If so, you should think it out too. As wild_pointer mentioned, it would probably be a good idea to have a base Skill class or something that all player/enemy classes could keep a list of.
That brings up another point. How familiar are you with data structures such as vectors/lists/stacks/queues/maps? If you don't have much experience, at least take a look at list/vector (maybe on about.com; they have a lot of tutorials). Lists/vectors will do 95% of the data structure work in your game programming for a while to come. One last comment: what's your next step? You can continue to design this as long as you continue to see blanks to fill in. Eventually you will probably run out of problems in your design, at which point I'd say it's time to start coding. Don't get too frustrated when everything you designed starts to not work right in reality. That is completely normal, especially when you're new to program design. Good luck.
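The two suggestions above (shared base class, skills as data) can be sketched like this (illustrative names, not from the original plan):

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// A skill is a piece of data, so adding one is a list entry, not a new method.
struct Skill {
    std::string name;
    int damage;
};

// Shared base for players and enemies: both have health and a skill list.
class Character {
public:
    Character(std::string name, int health)
        : m_name(std::move(name)), m_health(health) {}

    void AddSkill(Skill s) { m_skills.push_back(std::move(s)); }

    // Apply skill `i` of this character to `target`.
    void Use(std::size_t i, Character& target) {
        target.m_health -= m_skills.at(i).damage;
    }

    int Health() const { return m_health; }

    // Return a string instead of printing, so callers decide how to display it.
    std::string Describe() const {
        return m_name + " (" + std::to_string(m_health) + " hp)";
    }

private:
    std::string m_name;
    int m_health;
    std::vector<Skill> m_skills;
};
```

A Player or Boss subclass would then only add what actually differs (input handling, AI, a special attack), rather than repeating Attack/Die/GetHealth in each class.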
2018-08-17 03:39:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17301048338413239, "perplexity": 2790.2741774706105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211664.49/warc/CC-MAIN-20180817025907-20180817045907-00021.warc.gz"}
https://testbook.com/question-answer/the-centre-of-adjacent-rivets-in-the-same-row-are--5fe050dc9defacb667c6737c
# The centres of adjacent rivets in the same row are separated by a distance known as: _____.

This question was previously asked in the UPPCL JE CE 2016 Official Paper.

1. Edge distance
2. Lap
3. Pitch
4. Gauge distance

Option 3 : Pitch

## Detailed Solution

Explanation:

Gauge distance: A row of rivets parallel to the direction of force is called a gauge line. The normal distance between two adjacent gauge lines is called the gauge distance.

Pitch: The distance between the centres of any two adjacent rivets parallel to the direction of force is called the pitch. The diagonal pitch is the distance between the centres of any two adjacent rivets in the diagonal direction.
2021-09-19 10:16:06
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8706228733062744, "perplexity": 2878.256322634354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056856.4/warc/CC-MAIN-20210919095911-20210919125911-00122.warc.gz"}
https://math.stackexchange.com/questions/666500/theta-notation-question
# $\Theta$ Notation Question

$$T(n) = T(n-k) + O(n)$$

What is the time complexity in $\Theta$ notation? I tried to draw the recursion tree but I could not find the answer. I found: height $h = n/k$; sum: $c\,n + c(n-k) + c(n-2k) + \cdots + O(1)$. Do you have any ideas?

• Try to represent your answer as a summation. Then you will be able to simplify it. – Hoda Feb 6 '14 at 20:17

Be careful: $O(n)$ means the set of functions $f$ for which there is an upper bound of the form $f(n) \le c n$ for large enough $n$ and some positive $c$; $\Theta(n)$ asserts there are two such bounds, $c_l n \le f(n) \le c_u n$. So $0 = O(n)$, and for that specific function the solution is $T(n) = 0$, which definitely isn't $\Theta(n)$.
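For one admissible choice of the $O(n)$ term, namely exactly $n$ (a sketch; as the answer notes, other choices such as $0$ give different results), the recursion-tree sum has about $n/k$ terms of average size about $n/2$, suggesting $n^2/(2k)$:

```python
def T(n, k):
    """Unroll T(n) = T(n - k) + n with T(n) = 0 for n <= 0,
    i.e. the O(n) term taken to be exactly n."""
    total = 0
    while n > 0:
        total += n
        n -= k
    return total

# For n = 1000, k = 10: the terms are 1000, 990, ..., 10, an arithmetic
# series summing to 50500, close to n*n/(2*k) = 50000.
```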
2019-12-15 13:31:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392768144607544, "perplexity": 146.82415216782644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541308149.76/warc/CC-MAIN-20191215122056-20191215150056-00507.warc.gz"}
http://matroidunion.org/?p=1664
# Hard Lefschetz theorems and Hodge-Riemann relations Guest post by June Huh I will write about a common algebraic structure hidden behind seemingly distant objects: convex polytopes, Kähler manifolds, projective varieties, and lastly, matroids. Let $n$ be a positive integer. # 1. Polytopes 1.1 A polytope in $\mathbb{R}^n$ is the convex hull of a finite subset of $\mathbb{R}^n$. Let’s write $\Pi$ for the abelian group with generators $[P]$, one for each polytope $P \subseteq \mathbb{R}^n$, which satisfy the following relations: 1. $[P_1 \cup P_2]+[P_1 \cap P_2]=[P_1]+[P_2]$ whenever $P_1 \cup P_2$ is a polytope, 2. $[P+t]=[P]$ for every point $t$ in $\mathbb{R}^n$, and 3. $[\varnothing]=0$. This is the polytope algebra of McMullen [McM89]. The multiplication in $\Pi$ is defined by the Minkowski sum $[P_1] \cdot [P_2]=[P_1+P_2],$ and this makes $\Pi$ a commutative ring with $1=[\text{point}]$ and $0=[\varnothing]$. The structure of $\Pi$ can be glimpsed through some familiar translation invariant measures on the set of polytopes. For example, the Euler characteristic shows that there is a surjective ring homomorphism $\chi:\Pi \longrightarrow \mathbb{Z}, \qquad [P] \longmapsto \chi(P),$ and the Lebesgue measure on $\mathbb{R}^n$ shows that there is a surjective group homomorphism $\text{Vol}:\Pi \longrightarrow \mathbb{R}, \qquad [P] \longmapsto \text{Vol}(P).$ A fundamental observation is that some power of $[P]-1$ is zero in $\Pi$ for every nonempty polytope $P$. Since every polytope can be triangulated, it is enough to check this when the polytope is a simplex. In this case, a picture drawing for $n=0,1,2,$ and if necessary $3$, will convince the reader that $([P]-1)^{n+1}=0.$ The kernel of the Euler characteristic $\chi$ turns out to be torsion free and divisible. 
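The $n=1$ case of the nilpotence claim can be worked out directly from the defining relations (an illustrative aside, not in the original text). Take $P=[0,1]$, so that $2P = P \cup (P+1)$ and $P \cap (P+1)$ is a point:

```latex
% Relation (1) applied to the union 2P = P \cup (P+1), then relation (2):
[2P] + [\text{point}] = [P] + [P+1] = 2[P]
  \quad\Longrightarrow\quad [P]^2 = [2P] = 2[P] - 1,
% hence
([P]-1)^2 = [P]^2 - 2[P] + 1 = (2[P]-1) - 2[P] + 1 = 0.
```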
Thus we may speak about the logarithm of a polytope in $\Pi$ which satisfies the usual rule $\text{log}[P_1+P_2]=\text{log}[P_1] +\text{log}[P_2].$ The notion of logarithm leads to a remarkable identity concerning volumes of convex polytopes. Theorem Writing $\texttt{p}$ for the logarithm of $[P]$, we have $\text{Vol}(P)=\frac{1}{n!}\text{Vol} (\texttt{p}^n).$ This shows that, more generally, Minkowski’s mixed volume of polytopes $P_1,\ldots,P_n$ can be expressed in terms of the product of the corresponding logarithms $\texttt{p}_1,\ldots,\texttt{p}_n$: $\text{Vol}(P_1,\ldots,P_n)=\frac{1}{n!} \text{Vol}(\texttt{p}_1 \cdots \texttt{p}_n).$ 1.2 Let’s write $P_1 \preceq P_2$ to mean that $P_1$ is a Minkowski summand of some positive multiple of $P_2$. This relation is clearly transitive. We say that $P_1$ and $P_2$ are equivalent when $P_1 \preceq P_2 \preceq P_1.$ Let $\mathscr{K}(P)$ be the set of all polytopes equivalent to a given polytope $P$. The collection $\mathscr{K}(P)$ is a convex cone in the sense that $P_1, P_2 \in \mathscr{K}(P) \Longrightarrow \lambda_1 P_1 + \lambda_2 P_2 \in \mathscr{K}(P) \ \ \text{for positive real numbers } \lambda_1, \lambda_2.$ We will meet an analogue of this convex cone in each of the following sections. Definition For each positive integer $q$, let $\Pi^q(P) \subseteq \Pi$ be the subgroup generated by all elements of the form $\texttt{p}_1\texttt{p}_2 \cdots \texttt{p}_q,$ where $\texttt{p}_i$ is the logarithm of a polytope in $\mathscr{K}(P)$. Note that any two equivalent polytopes define the same set of subgroups of $\Pi$. These subgroups are related to each other in a surprising way when $P$ is an $n$-dimensional simple polytope; this means that every vertex of the polytope is contained in exactly $n$ edges. Theorem [McM93] Let $\texttt{p}$ be the logarithm of a simple polytope in $\mathscr{K}(P)$, and let $1 \le q \le \frac{n}{2}$. 1.
Hard Lefschetz theorem: The multiplication by $\texttt{p}^{n-2q}$ defines an isomorphism $\Pi^{q}(P) \longrightarrow \Pi^{n-q}(P), \quad x \longmapsto \texttt{p}^{n-2q} x.$ 2. Hodge-Riemann relations: The multiplication by $\texttt{p}^{n-2q}$ defines a symmetric bilinear form $\Pi^{q}(P) \times \Pi^{q}(P) \longrightarrow \mathbb{R}, \quad (x_1,x_2) \longmapsto (-1)^q \ \text{Vol}\big(\texttt{p}^{n-2q} x_1 x_2\big)$ that is positive definite when restricted to the kernel of the multiplication by $\texttt{p}^{n-2q+1}$. In fact, the group $\Pi^q(P)$ can be equipped with the structure of a finite dimensional real vector space in a certain natural way, and the isomorphism between groups in the first part of the theorem turns out to be an isomorphism between vector spaces. I will mention two concrete implications of geometric-combinatorial nature, one for each of the above two statements. 1. The first statement is the main ingredient in the proof of the $g$-conjecture for simple polytopes [Stan80]. This gives a numerical characterization of sequences of the form $f_0(P),f_1(P),\ldots,f_n(P),$ where $f_i(P)$ is the number of $i$-dimensional faces of an $n$-dimensional simple polytope $P$. 2. The second statement, in the special case $q=1$, is essentially equivalent to the Aleksandrov-Fenchel inequality on mixed volumes of convex bodies: $\text{Vol}(\texttt{p}_1\texttt{p}_1 \texttt{p}_3 \cdots \texttt{p}_n) \text{Vol}(\texttt{p}_2\texttt{p}_2 \texttt{p}_3 \cdots \texttt{p}_n) \le \text{Vol}(\texttt{p}_1\texttt{p}_2 \texttt{p}_3 \cdots \texttt{p}_n)^2.$ The inequality played a central role in the proof of the van der Waerden conjecture that the permanent of any doubly stochastic $n \times n$ nonnegative matrix is at least $n!/n^n$. An interesting account on the formulation and the solution of the conjecture can be found in [vLin82]. 
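To make the second implication concrete, here is the simplest planar case of the Aleksandrov-Fenchel inequality, worked out for axis-parallel rectangles $P_1$ and $P_2$ with side lengths $a \times b$ and $c \times d$ (an illustrative computation, not part of the original post):

```latex
% Expanding the area of the Minkowski sum identifies the mixed area:
\operatorname{Vol}(P_1+P_2) = (a+c)(b+d)
  = \operatorname{Vol}(P_1,P_1) + 2\operatorname{Vol}(P_1,P_2) + \operatorname{Vol}(P_2,P_2),
\quad\text{so}\quad
\operatorname{Vol}(P_1,P_2) = \tfrac{1}{2}(ad+bc).
% Aleksandrov--Fenchel for n = 2 then reduces to AM--GM for the pair (ad, bc):
\operatorname{Vol}(P_1,P_1)\,\operatorname{Vol}(P_2,P_2) = (ad)(bc)
  \le \Bigl(\tfrac{ad+bc}{2}\Bigr)^{2} = \operatorname{Vol}(P_1,P_2)^2.
```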
With suitable modifications, the hard Lefschetz theorem and the Hodge-Riemann relations can be extended to polytopes that are not necessarily simple [Karu04]. # 2. Kähler manifolds 2.1 Let $\omega$ be a Kähler form on an $n$-dimensional compact complex manifold $M$. This means that $\omega$ is a smooth differential $2$-form on $M$ that can be written locally in coordinate charts as $i \partial \overline{\partial} f$ for some smooth real functions $f$ whose complex Hessian matrix $\Big[\frac{\partial^2 f}{\partial z_i\partial \overline{z}_j}\Big]$ is positive definite; here $z_1,\ldots,z_n$ are holomorphic coordinates and $\partial$, $\overline{\partial}$ are the differential operators $\partial=\sum_{k=1}^n \frac{\partial}{\partial z_k} dz_k, \qquad \overline{\partial}=\sum_{k=1}^n \frac{\partial}{\partial \overline{z}_k} d\overline{z}_k.$ Like all other good definitions, the Kähler condition has many other equivalent characterizations, and we have chosen the one that emphasizes the analogy with the notion of convexity. To a Kähler form $\omega$ on $M$, we can associate a Riemannian metric $g$ on $M$ by setting $g(u,v)=\omega(u,Iv),$ where $I$ is the operator on tangent vectors of $M$ that corresponds to the multiplication by $i$. Thus we may speak of the length, area, etc., on $M$ with respect to $\omega$. Theorem The volume of $M$ is given by the integral $\text{Vol}(M)=\frac{1}{n!} \int_M \omega^n.$ More generally, the volume of a $d$-dimensional complex submanifold $N \subseteq M$ is given by $\text{Vol}(N)=\frac{1}{d!} \int_N \omega^d.$ Compare the corresponding statement of the previous section that $\text{Vol}(P)=\frac{1}{n!}\text{Vol} (\texttt{p}^n)$. 2.2 Let $\mathscr{K}(M)$ be the set of all Kähler forms on $M$.
The collection $\mathscr{K}(M)$ is a convex cone in the sense that $\omega_1, \omega_2 \in \mathscr{K}(M) \Longrightarrow \lambda_1 \omega_1 + \lambda_2 \omega_2 \in \mathscr{K}(M) \ \ \text{for positive real numbers } \lambda_1, \lambda_2.$ This follows from the fact that the sum of two positive definite matrices is positive definite. Definition For each nonnegative integer $q$, let $H^{q,q}(M) \subseteq H^{2q}(M,\mathbb{C})$ be the subset of all the cohomology classes of closed differential forms that can be written in local coordinate charts as $\sum f_{k_1,\ldots,k_q,l_1,\ldots,l_q} dz_{k_1} \wedge \cdots \wedge dz_{k_q} \wedge d\overline{z}_{l_1} \wedge \cdots \wedge d\overline{z}_{l_q}.$ Note that the cohomology class of a Kähler form $\omega$ is in $H^{1,1}(M)$, and that $[\varphi] \in H^{q,q}(M) \Longrightarrow [\omega \wedge \varphi] \in H^{q+1,q+1}(M).$ Theorem (Classical) Let $\omega$ be an element of $\mathscr{K}(M)$, and let $q$ be a nonnegative integer $\le \frac{n}{2}$. 1. Hard Lefschetz theorem: The wedge product with $\omega^{n-2q}$ defines an isomorphism $H^{q,q}(M) \longrightarrow H^{n-q,n-q}(M), \quad [\varphi] \longmapsto [\omega^{n-2q} \wedge \varphi].$ 2. Hodge-Riemann relations: The wedge product with $\omega^{n-2q}$ defines a Hermitian form $H^{q,q}(M) \times H^{q,q}(M) \longrightarrow \mathbb{C}, \quad (\varphi_1,\varphi_2) \longmapsto (-1)^q \int_M \omega^{n-2q} \wedge \varphi_1 \wedge \overline{\varphi_2}$ that is positive definite when restricted to the kernel of the wedge product with $\omega^{n-2q+1}$. Analogous statements hold for $H^{q_1,q_2}(M)$ with $q_1 \neq q_2$, and these provide a way to show that certain compact complex manifolds cannot admit any Kähler form. For deeper applications, see [Voi10]. # 3. Projective varieties 3.1 Let $k$ be an algebraically closed field, and let $\mathbb{P}^m$ be the $m$-dimensional projective space over $k$.
A projective variety over $k$ is a subset of the form $X=\{h_1=h_2=\ldots=h_r=0\} \subseteq \mathbb{P}^m,$ where $h_1,\ldots,h_r$ are homogeneous polynomials in $m+1$ variables. One can define the dimension, connectedness, and smoothness of projective varieties in a way that is compatible with our intuition when $k=\mathbb{C}$. One can also define what it means for a map between two projective varieties, each living in two possibly different ambient projective spaces, to be algebraic. Let $K$ be another field, not necessarily algebraically closed but of characteristic zero. A Weil cohomology theory with coefficients in $K$ is an assignment $X \longmapsto H^*(X)=\bigoplus_j H^j(X),$ where $X$ is a smooth and connected projective variety over $k$ and $H^*(X)$ is a graded-commutative algebra over $K$. This assignment is required to satisfy certain rules similar to those satisfied by the singular cohomology of compact complex manifolds, such as functoriality, finite dimensionality, Poincaré duality, Künneth formula, etc. For this reason the product of two elements in $H^*(X)$ will be written $\xi_1 \cup \xi_2 \in H^*(X).$ For algebraic geometers, the most important of the rules is that to every codimension $q$ subvariety $Y \subseteq X$ there be a corresponding cohomology class $\text{cl}(Y) \in H^{2q}(X).$ These classes should have the property that, for example, $\text{cl}(Y_1 \cap Y_2)=\text{cl}(Y_1) \cup \text{cl}(Y_2)$ whenever $Y_1$ and $Y_2$ are subvarieties intersecting transversely, and that $\text{cl}(H_1)=\text{cl}(H_2)$ whenever $H_1$ and $H_2$ are two hyperplane sections of $X \subseteq \mathbb{P}^m$. Though not easy, it is possible to construct a Weil cohomology theory for any such $k$, with coefficients in some $K$. For example, when both $k$ and $K$ are the field of complex numbers, one can take the de Rham cohomology of smooth differential forms.
Definition For each nonnegative integer $q$, let $A^q(X) \subseteq H^{2q}(X)$ be the set of rational linear combinations of cohomology classes of codimension $q$ subvarieties of $X$. One of the rules for $H^*(X)$ implies that, if $n$ is the dimension of $X$, there is an isomorphism $\text{deg}: A^n(X) \longrightarrow \mathbb{Q}$ determined by the property that $\text{deg}(\text{cl}(\text{p}))=1 \ \ \text{for every} \ \ \text{p} \in X.$ Writing $h$ for the class in $A^1(X)$ of any hyperplane section of $X \subseteq \mathbb{P}^m$, the degree of $X \subseteq \mathbb{P}^m$ satisfies the formula $\text{deg}(X \subseteq \mathbb{P}^m)=\text{deg}(h^n),$ the number of points in the intersection of $X$ with a sufficiently general subspace $\mathbb{P}^{m-n} \subseteq \mathbb{P}^m$. Compare the corresponding statements of the previous sections $\text{Vol}(P)=\frac{1}{n!}\text{Vol} (\texttt{p}^n) \quad \text{and} \quad \text{Vol}(M)=\frac{1}{n!} \int_M \omega^n.$ 3.2 Let $\mathscr{K}(X)$ be the set of cohomology classes of hyperplane sections of $X$ under all possible embeddings of $X$ into projective spaces. Classical projective geometers knew that $\mathscr{K}(X)$ is a convex cone in a certain sense; if you are curious, read about the Segre embedding and the Veronese embedding. Conjecture (Grothendieck) Let $h$ be an element in $\mathscr{K}(X)$, and let $q$ be a nonnegative integer $\le n/2$. 1. Lefschetz standard: The multiplication by $h^{n-2q}$ defines an isomorphism $A^q(X) \longrightarrow A^{n-q}(X), \quad \xi \longmapsto h^{n-2q} \cup \xi.$ 2. Hodge standard: The multiplication by $h^{n-2q}$ defines a symmetric bilinear form $A^q(X) \times A^q(X) \longrightarrow \mathbb{Q}, \quad (\xi_1,\xi_2) \longmapsto (-1)^q \text{deg}\big(h^{n-2q} \cup \xi_1 \cup \xi_2\big),$ that is positive definite when restricted to the kernel of the cup product with $h^{n-2q+1}$.
The above statements are at the core of Grothendieck’s approach to the Weil conjectures on zeta functions and other important problems in algebraic geometry [Gro69]. # 4. Matroids 4.1 As we know, a matroid $\mathrm{M}$ is given by a closure operator defined on all subsets of a finite set $E$ satisfying the Steinitz-MacLane exchange property: For every subset $I$ of $E$ and every element $a$ not in the closure of $I$, if $a$ is in the closure of $I \cup \{b\}$, then $b$ is in the closure of $I \cup \{a\}$. It is remarkable that this single sentence leads to an intricate algebraic structure of the kind we have seen above. This structure reveals certain properties of matroids that are not easy to see by other means. Let’s write $S_\mathrm{M}$ for the polynomial ring with real coefficients and variables $x_F$, one for each nonempty proper flat $F$ of $\mathrm{M}$. Definition The Chow ring of a loopless matroid $\mathrm{M}$ is defined to be the quotient $A^*(\mathrm{M}):=S_\mathrm{M}/(I_\mathrm{M}+J_\mathrm{M}),$ where $I_\mathrm{M}$ is the ideal generated by the quadratic monomials $x_{F_1}x_{F_2},$ where $F_1$ and $F_2$ are two incomparable nonempty proper flats of $\mathrm{M}$, and $J_\mathrm{M}$ is the ideal generated by the linear forms $\sum_{i_1 \in F} x_F - \sum_{i_2 \in F} x_F,$ where $i_1$ and $i_2$ are distinct elements of the ground set $E$. We write $A^q(\mathrm{M}) \subseteq A^*(\mathrm{M})$ for the subspace spanned by all degree $q$ monomials. Let $n+1$ be the rank of $\mathrm{M}$. An important step is to identify the map analogous to the volume in section $1$, the integral in section $2$, and the degree in section $3$.
Theorem There is an isomorphism $\text{deg}: A^n(\mathrm{M}) \longrightarrow \mathbb{R}$ uniquely determined by the property $\text{deg}(x_{F_1}x_{F_2}\cdots x_{F_n})=1 \ \ \text{for every flag of nonempty proper flats} \ \ F_1 \subsetneq F_2 \subsetneq \cdots \subsetneq F_n.$ In particular, any two monomials corresponding to a complete flag of nonempty proper flats are equal in the Chow ring of a loopless matroid. 4.2 What should be the convex cone $\mathscr{K}(\mathrm{M})$? In fact, there is a certain piecewise linear space associated to $\mathrm{M}$, the tropical linear space of $\mathrm{M}$, and one takes $\mathscr{K}(\mathrm{M})$ to be the set of all strictly convex piecewise linear functions on the tropical linear space. For known applications, the following more restrictive definition is sufficient. Definition A function $c$ on the set of nonempty proper subsets of $E$ is said to be strictly submodular if $c_{I_1}+c_{I_2} > c_{I_1 \cap I_2} +c_{I_1 \cup I_2} \ \ \text{for any two incomparable subsets } I_1,I_2 \subseteq E,$ where we replace $c_\varnothing$ and $c_E$ by zero whenever they appear in the above inequality. A strictly submodular function $c$ defines an element $\ell(c):= \sum_F c_F x_F\in A^1(\mathrm{M}),$ where the sum is over all nonempty proper flats of $\mathrm{M}$; the set of all such elements is a convex cone in the obvious sense. Note that the rank function of any matroid on $E$ can be obtained as a limit of strictly submodular functions. Theorem [AHK] Let $\ell$ be an element of $A^1(\mathrm{M})$ associated to a strictly submodular function, and let $q$ be a nonnegative integer $\le \frac{n}{2}$. 1. Hard Lefschetz theorem: The multiplication by $\ell^{n-2q}$ defines an isomorphism $A^q(\mathrm{M}) \longrightarrow A^{n-q}(\mathrm{M}), \qquad a \longmapsto \ell^{n-2q} \ a.$ 2.
Hodge-Riemann relations: The multiplication by $\ell^{n-2q}$ defines a symmetric bilinear form $A^q(\mathrm{M}) \times A^q(\mathrm{M}) \longrightarrow \mathbb{R}, \qquad (a_1,a_2) \longmapsto (-1)^q \ \text{deg}(\ell^{n-2q}\ a_1 a_2)$ that is positive definite when restricted to the kernel of the multiplication by $\ell^{n-2q+1}$. In fact, the theorem applies more generally to elements $\ell$ in the cone $\mathscr{K}(\mathrm{M})$ mentioned above. Below are two applications presented in [AHK], which use the Hodge-Riemann relations in the special case when $q=1$. 1. Let $w_k$ be the absolute value of the coefficient of $\lambda^{n-k+1}$ in the characteristic polynomial of $\mathrm{M}$. Then the sequence $w_k$ is log-concave: $w_{k-1} w_{k+1} \le w_k^2 \ \ \text{for all } 1 \le k\le n.$ In particular, the sequence $w_k$ is unimodal: $w_0 \le w_1 \le \cdots \le w_l \ge \cdots \ge w_n \ge w_{n+1} \ \ \text{for some index } l.$ This verifies a conjecture of Heron, Rota, and Welsh. 2. Let $f_k$ be the number of independent subsets of $E$ with cardinality $k$. Then the sequence $f_k$ is log-concave: $f_{k-1} f_{k+1} \le f_k^2 \ \ \text{for all } 1 \le k \le n.$ In particular, the sequence $f_k$ is unimodal: $f_0 \le f_1 \le \cdots \le f_l \ge \cdots \ge f_n \ge f_{n+1} \ \ \text{for some index } l.$ This verifies a conjecture of Mason and Welsh. These applications only use the Hodge-Riemann relations for $q=1$ and for one carefully chosen $\ell$. The general Hodge-Riemann relations for all $\ell$ in $\mathscr{K}(\mathrm{M})$ may contain more interesting information on $\mathrm{M}$. # References [AHK] Karim Adiprasito, June Huh, and Eric Katz, Hodge theory for combinatorial geometries, arXiv:1511.02888. [Gro69] Alexander Grothendieck, Standard conjectures on algebraic cycles, 1969 Algebraic Geometry, 193-199, Oxford University Press. [Karu04] Kalle Karu, Hard Lefschetz theorem for nonrational polytopes, Inventiones Mathematicae 157 (2004), 419-447.
[McM89] Peter McMullen, The polytope algebra, Advances in Mathematics 78 (1989), 76-130. [McM93] Peter McMullen, On simple polytopes, Inventiones Mathematicae 113 (1993), 419-444. [Stan80] Richard Stanley, The number of faces of a simplicial convex polytope, Advances in Mathematics 35 (1980), 236-238. [vLin82] Jack van Lint, The van der Waerden conjecture: two proofs in one year, The Mathematical Intelligencer 4 (1982), 72-77. [Voi10] Claire Voisin, On the cohomology of algebraic varieties, Proceedings of the International Congress of Mathematicians I, 476-503, New Delhi, 2010. ## 2 thoughts on “Hard Lefschetz theorems and Hodge-Riemann relations” 1. This is remarkable stuff. I have 3 nits to pick. 1. Please don’t perpetuate Rota’s unfortunate use of the general term “combinatorial geometry” for simple matroid. 2. Please use the italic el, $l$, instead of the non-italic, handwriting-style \ell, which originated in the days when people had to type math on typewriters and use the el for 1 because there was no 1. You can fix this with the command \renewcommand\ell{l}. 3. It is unwise to use the letter w, customarily used for the actual coefficient with sign, to mean the absolute value. Would you change the meaning of s(n,k), the Stirling number of the first kind, to the absolute value? I hope not; it would spoil many properties. That is exactly analogous; Rota introduced the letters w_k and W_k in analogy with Stirling numbers. One popular solution is to write $w_k^+$ for $|w_k|$. 2. Nit #4. A “polytope” does not have to be convex. If you want to use the term “polytope” to mean “convex polytope”, you should state that at the beginning. That’s all that’s needed. For a partial proof that a polytope need not be convex, see the book “Polyhedron Models” of Magnus Wenninger.
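Returning to the first application in section 4, the log-concavity statement can be checked numerically in small cases (an addition of mine, not from the post or the comments): for the graphic matroid of a connected graph $G$, the characteristic polynomial is the chromatic polynomial of $G$ divided by one factor of $\lambda$, so the Whitney numbers $w_k$ can be computed by deletion-contraction. A Python sketch for the complete graph $K_4$:

```python
from itertools import combinations

def poly_sub(p, q):
    # subtract coefficient lists (lowest degree first)
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def chromatic(n_vertices, edges):
    """Chromatic polynomial of a simple graph as a coefficient list
    [c0, c1, ...] in the color-count variable, lowest degree first,
    computed by deletion-contraction: chi(G) = chi(G - e) - chi(G / e)."""
    edges = {frozenset(e) for e in edges}
    if not edges:
        return [0] * n_vertices + [1]          # lambda ** n_vertices
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    # contract e by relabeling v to u; no loops can arise since the
    # only 2-set containing both u and v is e itself
    contracted = {frozenset(u if x == v else x for x in f) for f in deleted}
    return poly_sub(chromatic(n_vertices, deleted),
                    chromatic(n_vertices - 1, contracted))

# chi(K_4) = x(x-1)(x-2)(x-3) = x^4 - 6x^3 + 11x^2 - 6x
chi = chromatic(4, combinations(range(4), 2))
char = chi[1:]                                 # divide by one factor of lambda
w = [abs(c) for c in reversed(char)]           # Whitney numbers w_0, w_1, ...
print(w)                                       # [1, 6, 11, 6]
print(all(w[k-1] * w[k+1] <= w[k] ** 2 for k in range(1, len(w) - 1)))  # True
```

Of course, [AHK] proves this for all matroids, including the many that are not graphic; this check only illustrates the statement.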
https://doc.cgal.org/5.0/Advancing_front_surface_reconstruction/classCGAL_1_1Advancing__front__surface__reconstruction.html
CGAL 5.0 - Advancing Front Surface Reconstruction CGAL::Advancing_front_surface_reconstruction< Dt, P > Class Template Reference #include <CGAL/Advancing_front_surface_reconstruction.h> ## Definition ### template<class Dt = Default, class P = Default> class CGAL::Advancing_front_surface_reconstruction< Dt, P > The class Advancing_front_surface_reconstruction enables advanced users to provide the unstructured point cloud in a 3D Delaunay triangulation. The reconstruction algorithm then marks vertices and faces in the triangulation as being on the 2D surface embedded in 3D space, and constructs a 2D triangulation data structure that describes the surface. The vertices and facets of the 2D triangulation data structure store handles to the vertices and faces of the 3D triangulation, which enables the user to explore the 2D as well as 3D neighborhood of vertices and facets of the surface. Template Parameters Dt must be a Delaunay_triangulation_3 with Advancing_front_surface_reconstruction_vertex_base_3 and Advancing_front_surface_reconstruction_cell_base_3 blended into the vertex and cell type. The default uses the Exact_predicates_inexact_constructions_kernel as geometric traits class. P must be a functor with double operator()(AdvancingFront,Cell_handle,int) returning the priority of the facet (Cell_handle,int). This functor enables the user to choose how candidate triangles are prioritized. If a facet should not appear in the output, infinity() must be returned. It defaults to a functor that returns the smallest_radius_delaunay_sphere(). Examples: ## Types typedef unspecified_type Triangulation_data_structure_2 The type of the 2D triangulation data structure describing the reconstructed surface, being a model of TriangulationDataStructure_2. More... typedef unspecified_type Triangulation_3 The type of the 3D triangulation. typedef unspecified_type Priority The type of the facet priority functor. typedef Triangulation_3::Point Point The point type. 
typedef Triangulation_3::Vertex_handle Vertex_handle The vertex handle type of the 3D triangulation. typedef Triangulation_3::Cell_handle Cell_handle The cell handle type of the 3D triangulation. typedef Triangulation_3::Facet Facet The facet type of the 3D triangulation. typedef unspecified_type Outlier_range A bidirectional iterator range which enables to enumerate all points that were removed from the 3D Delaunay triangulation during the surface reconstruction. More... typedef unspecified_type Boundary_range A bidirectional iterator range which enables to visit all boundaries. More... typedef unspecified_type Vertex_on_boundary_range A bidirectional iterator range which enables to visit all vertices on a boundary. More... ## Creation Constructor for the unstructured point cloud given as 3D Delaunay triangulation. ## Operations void run (double radius_ratio_bound=5, double beta=0.52) runs the surface reconstruction function. More... const Triangulation_data_structure_2triangulation_data_structure_2 () const returns the reconstructed surface. Triangulation_3triangulation_3 () const returns the underlying 3D Delaunay triangulation. const Outlier_rangeoutliers () const returns an iterator range over the outliers. const Boundary_rangeboundaries () const returns an iterator range over the boundaries. ## Predicates bool has_boundaries () const returns true if the reconstructed surface has boundaries. bool has_on_surface (Facet f) const returns true if the facet is on the surface. bool has_on_surface (typename Triangulation_data_structure_2::Face_handle f2) const returns true if the facet f2 is on the surface. bool has_on_surface (typename Triangulation_data_structure_2::Vertex_handle v2) const returns true if the vertex v2 is on the surface. 
## Priority values coord_type smallest_radius_delaunay_sphere (const Cell_handle &c, const int &index) const computes the priority of the facet (c,index) such that the facet with the smallest radius of Delaunay sphere has the highest priority. More... coord_type infinity () const returns the infinite floating value that prevents a facet to be used. ## ◆ Boundary_range template<class Dt = Default, class P = Default> typedef unspecified_type CGAL::Advancing_front_surface_reconstruction< Dt, P >::Boundary_range A bidirectional iterator range which enables to visit all boundaries. The value type of the iterator is Vertex_on_boundary_range. ## ◆ Outlier_range template<class Dt = Default, class P = Default> typedef unspecified_type CGAL::Advancing_front_surface_reconstruction< Dt, P >::Outlier_range A bidirectional iterator range which enables to enumerate all points that were removed from the 3D Delaunay triangulation during the surface reconstruction. The value type of the iterator is Point. ## ◆ Triangulation_data_structure_2 template<class Dt = Default, class P = Default> The type of the 2D triangulation data structure describing the reconstructed surface, being a model of TriangulationDataStructure_2. • The type Triangulation_data_structure_2::Vertex is model of the concept TriangulationDataStructure_2::Vertex and has additionally the method vertex_3() that returns a Vertex_handle to the associated 3D vertex. • The type Triangulation_data_structure_2::Face is model of the concept TriangulationDataStructure_2::Face and has additionally the method facet() that returns the associated Facet, and a method bool is_on_surface() for testing if a face is part of the reconstructed surface or a face incident to a boundary edge. In case the surface has boundaries, the 2D surface has one vertex which is associated to the infinite vertex of the 3D triangulation. 
## ◆ Vertex_on_boundary_range template<class Dt = Default, class P = Default> A bidirectional iterator range which enables to visit all vertices on a boundary. The value type of the iterator is Vertex_handle. ## ◆ run() template<class Dt = Default, class P = Default> void CGAL::Advancing_front_surface_reconstruction< Dt, P >::run ( double radius_ratio_bound = 5, double beta = 0.52 ) runs the surface reconstruction function. Parameters: radius_ratio_bound — candidates incident to surface triangles which are not in the beta-wedge are discarded, if the ratio of their radius and the radius of the surface triangle is larger than radius_ratio_bound. Described in Section Dealing with Multiple Components, Boundaries and Sharp Edges. beta — half the angle of the wedge in which only the radius of triangles counts for the plausibility of candidates. Described in Section Plausibility of a Candidate Triangle. ## ◆ smallest_radius_delaunay_sphere() computes the priority of the facet (c,index) such that the facet with the smallest radius of Delaunay sphere has the highest priority. Parameters: c — handle to the cell containing the facet. index — index of the facet in c.
http://mathhelpforum.com/pre-calculus/190981-what-interest-rate-needed-double-money-two-years.html
# Thread: What interest rate needed to double money in two years? 1. ## What interest rate needed to double money in two years? e^{5r}=2 5r ln e=ln(2) r=ln(2)/5=.1386294361 (store in variable X) Using a calculator where Ans refers to the previous answer. 1. 100+100X=113.8629436=Ans 2. Ans+AnsX=129.6476993=new Ans 3. Ans+AnsX=147.6206867=new Ans 4. Ans+AnsX=168.0852593=new Ans 5. Ans+AnsX=191.386824 Shouldn't line 5 equal 200? 2. ## Re: What interest rate needed to double money in two years? please post the original problem as stated for you to solve ... what you have posted is complete jibberish. 3. ## Re: What interest rate needed to double money in two years? Sorry. The original problem is What constantly compounded interest rate is needed to double your money in two years? I set up the problem: 2=e^{5r} I solved the problem: 5r ln e=ln(2) r=ln(2)/5=.1386294361 And got an interest rate of approximately 13.86%. Then I did a check, by starting with $100 and adding 13.86% times 100 to it. 100+100(.1386294361)=113.8629436 I took that answer and added 13.86% to it. (Ans)+(Ans)(.1386294361)=129.6476993 Then I did that three more times. (Ans)+(Ans)(.1386294361)=147.6206867 (Ans)+(Ans)(.1386294361)=168.0852593 (Ans)+(Ans)(.1386294361)=191.386824 Shouldn't I have $200 by the fifth time? Or did I do the first part of the problem wrong? Thanks. 4. ## Re: What interest rate needed to double money in two years? Why use 5 r? What is the purpose of the 5? 5. ## Re: What interest rate needed to double money in two years? The original problem is What constantly compounded interest rate is needed to double your money in two years? let $\displaystyle y_0$ = initial investment $\displaystyle 2y_0 = y_0 e^{rt}$ $\displaystyle 2 = e^{rt}$ $\displaystyle \ln{2} = rt$ $\displaystyle \frac{\ln{2}}{t} = r$ substitute 2 years for $\displaystyle t$ and find the required interest rate, $\displaystyle r$ 6. ## Re: What interest rate needed to double money in two years?
Originally Posted by SammyS Why use 5 r? What is the purpose of the 5? Sloppy title. Five years. Originally Posted by skeeter let $\displaystyle y_0$ = initial investment $\displaystyle 2y_0 = y_0 e^{rt}$ $\displaystyle 2 = e^{rt}$ $\displaystyle \ln{2} = rt$ $\displaystyle \frac{\ln{2}}{t} = r$ substitute 2 years for $\displaystyle t$ and find the required interest rate, $\displaystyle r$ That's what I did. But I wrote the title wrong. The correct time frame is 5 years. But then, when I apply that for five years, I get $191.39. Shouldn't I get $200? 7. ## Re: What interest rate needed to double money in two years? But then, when I apply that for five years, I get $191.39. Shouldn't I get $200?
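The discrepancy in the thread can be pinned down numerically (a sketch I have added; it is not from the original posts): $r = \ln 2/5$ doubles money only under continuous compounding. Applying that rate once per year, as the calculator steps do, is annual compounding and yields $(1+r)^5 \approx 1.9139$, which is exactly the $191.39 observed. The annual rate that actually doubles money in five years is $2^{1/5}-1 \approx 14.87\%$.

```python
import math

r = math.log(2) / 5                      # continuous rate that doubles in 5 years

continuous = 100 * math.exp(5 * r)       # continuous compounding
annual_steps = 100 * (1 + r) ** 5        # what the calculator loop computes
annual_rate = 2 ** (1 / 5) - 1           # annual rate that actually doubles

print(round(continuous, 2))              # 200.0
print(round(annual_steps, 2))            # 191.39
print(round(annual_rate, 4))             # 0.1487
```

So both computations in the thread are internally consistent; they simply answer two different questions, one about continuous and one about annual compounding.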
https://math.libretexts.org/Bookshelves/Precalculus/Map%3A_Precalculus_(Stitz-Zeager)/1%3A_Relations_and_Functions/1.1%3A_Sets_of_Real_Numbers_and_the_Cartesian_Coordinate_Plane/exercise_temp
# exercise temp ## 1.1: Sets of Real Numbers and the Cartesian Coordinate Plane Exercise $$\PageIndex{1.1.1}$$: Fill in the chart below: In Exercises 1.1.2 - 1.1.7, find the indicated intersection or union and simplify if possible. Express your answers in interval notation. Exercise $$\PageIndex{1.1.2}$$: $$(-1,5]\cap[0,8)$$ Exercise $$\PageIndex{1.1.3}$$: $$(-1,1)\ \cup\ [0,6]$$ Exercise $$\PageIndex{1.1.4}$$: $$(-\infty,4]\cap(0,\infty)$$ Exercise $$\PageIndex{1.1.5}$$: $$(-\infty,0)\cap[1,5]$$ Exercise $$\PageIndex{1.1.6}$$: $$(-\infty,0)\cup[1,5]$$ Exercise $$\PageIndex{1.1.7}$$: $$(-\infty,5]\cap[5,8)$$ In Exercises 1.1.8 - 1.1.19, write the set using interval notation. Exercise $$\PageIndex{1.1.8}$$: $$\{x\,|\, x \neq 0, \ 2 \}$$ Exercise $$\PageIndex{1.1.9}$$: $$\{x|x\neq-1\}$$ Exercise $$\PageIndex{1.1.10}$$: $$\{x|x\neq-3,\ 4\}$$ Exercise $$\PageIndex{1.1.11}$$: $$\{x|x\neq0,\ 2\}$$ Exercise $$\PageIndex{1.1.12}$$: $$\{x\,|\, x \neq 2,\ -2 \}$$ Exercise $$\PageIndex{1.1.13}$$: $$\{x|x\neq0,\ \pm4\}$$ Exercise $$\PageIndex{1.1.14}$$: $$\{x|x\leq-1\ \text{or}\ x\geq 1\}$$ Exercise $$\PageIndex{1.1.15}$$: $$\{x|x<3\ \text{or}\ x\geq 2\}$$ Exercise $$\PageIndex{1.1.16}$$: $$\{x|x\leq-3\ \text{or}\ x>0\}$$ Exercise $$\PageIndex{1.1.17}$$: $$\{x|x\leq5\ \text{or}\ x=6\}$$ Exercise $$\PageIndex{1.1.18}$$: $$\{x|x>2\ \text{or}\ x=\pm1\}$$ Exercise $$\PageIndex{1.1.19}$$: $$\{x|-3<x<3\ \text{or}\ x=4\}$$ Exercise $$\PageIndex{1.1.20}$$: Plot and label the points $$A(-3, -7)$$, $$B(1.3, -2)$$, $$C(\pi, \sqrt{10})$$, $$D(0, 8)$$, $$E(-5.5, 0)$$, $$F(-8, 4)$$, $$G(9.2, -7.8)$$ and $$H(7, 5)$$ in the Cartesian Coordinate Plane given below. Exercise $$\PageIndex{1.1.21}$$: For each point given in Exercise 20 above: • Identify the quadrant or axis in/on which the point lies.
• Find the point symmetric to the given point about the $$x$$-axis. • Find the point symmetric to the given point about the $$y$$-axis. • Find the point symmetric to the given point about the origin. In Exercises 1.1.22 - 1.1.29, find the distance $$d$$ between the points and the midpoint $$M$$ of the line segment which connects them. Exercise $$\PageIndex{1.1.22}$$: $$(1,\ 2),\ (3,\ 5)$$ Exercise $$\PageIndex{1.1.23}$$: $$(3,\ 10),\ (1,\ 2)$$ Exercise $$\PageIndex{1.1.24}$$: $$(\frac{1}{2},\ 4),\ (\frac{3}{2},\ -1)$$ Exercise $$\PageIndex{1.1.25}$$: $$(-\frac{2}{3},\ \frac{3}{2}),\ (\frac{7}{3},\ 2)$$ Exercise $$\PageIndex{1.1.26}$$: $$(-\frac{24}{5},\ \frac{6}{5}),\ (-\frac{11}{5},\ -\frac{19}{5})$$ Exercise $$\PageIndex{1.1.27}$$: $$(\sqrt{2},\ \sqrt{3}),\ (-\sqrt{8},-\sqrt{12})$$ Exercise $$\PageIndex{1.1.28}$$: $$(2\sqrt{45},\ \sqrt{12}),\ (\sqrt{20},\sqrt{27})$$ Exercise $$\PageIndex{1.1.29}$$: $$(0,\ 0),\ (x,\ y)$$ Exercise $$\PageIndex{1.1.30}$$: Find all of the points of the form $$(x,\ -1)$$ which are 4 units from the point $$(3,\ 2)$$. Exercise $$\PageIndex{1.1.31}$$: Find all of the points on the $$y$$-axis which are 5 units from the point $$(-5,\ 3)$$. Exercise $$\PageIndex{1.1.32}$$: Find all of the points on the $$x$$-axis which are 2 units from the point $$(-1,\ 1)$$. Exercise $$\PageIndex{1.1.33}$$: Find all of the points of the form $$(x,\ -x)$$ which are 1 unit from the origin. Exercise $$\PageIndex{1.1.34}$$: Let's assume for a moment that we are standing at the origin and the positive $$y$$-axis points due North while the positive $$x$$-axis points due East. Our Sasquatch-o-meter tells us that Sasquatch is 3 miles West and 4 miles South of our current position. What are the coordinates of his position? How far away is he from us? If he runs 7 miles due East what would his new position be? Exercise $$\PageIndex{1.1.35}$$: Verify the Distance Formula for each of the following cases.
Exercise $$\PageIndex{A}$$: The points are arranged vertically. (Hint: Use $$P(a,\ y_0)$$ and $$Q(a,\ y_1)$$.) Exercise $$\PageIndex{B}$$: The points are arranged horizontally. (Hint: Use $$P(x_0,\ b)$$ and $$Q(x_1,\ b)$$.) Exercise $$\PageIndex{C}$$: The points are actually the same point. (You shouldn't need a hint for this one.) Exercise $$\PageIndex{1.1.36}$$: Verify the Midpoint Formula by showing the distance between $$P(x_1,\ y_1)$$ and $$M$$ and the distance between $$M$$ and $$Q(x_2,\ y_2)$$ are both half of the distance between $$P$$ and $$Q$$. Exercise $$\PageIndex{1.1.37}$$: Show that the points $$A$$, $$B$$ and $$C$$ below are the vertices of a right triangle. Exercise $$\PageIndex{A}$$: $$A(-3,2),\ B(-6,4)$$, and $$C(1,8)$$ Exercise $$\PageIndex{B}$$: $$A(-3,\ 1),\ B(4,\ 0)$$, and $$C(0,\ 3)$$ Exercise $$\PageIndex{1.1.38}$$: Find a point $$D(x,\ y)$$ such that the points $$A(-3,\ 1)$$, $$B(4,\ 0)$$, $$C(0,\ 3)$$ and $$D$$ are the corners of a square. Justify your answer. Exercise $$\PageIndex{1.1.39}$$: Discuss with your classmates how many numbers are in the interval $$(0,\ 1)$$. Exercise $$\PageIndex{1.1.40}$$: The world is not flat. Thus the Cartesian Plane cannot possibly be the end of the story. Discuss with your classmates how you would extend Cartesian Coordinates to represent the three dimensional world. What would the Distance and Midpoint formulas look like, assuming those concepts make sense at all? ## 1.2: Relations In Exercises 1 - 20, graph the given relation.
Exercise $$\PageIndex{1}$$: $$\{(-3, 9), (-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4), (3, 9)\}$$ Exercise $$\PageIndex{2}$$: $$\{(-2, 0), (-1, 1), (-1,-1), (0, 2), (0,-2), (1, 3), (1,-3)\}$$ Exercise $$\PageIndex{3}$$: $$\{(m,\ 2m)\ |m = 0,\pm1,\pm2\}$$ Exercise $$\PageIndex{4}$$: $$\{(\frac{6}{k},\ k)\ |k = \pm1,\pm2,\pm3,\pm4,\pm5,\pm6\}$$ Exercise $$\PageIndex{5}$$: $$\{(n,4-n^2)\ |n=0,\pm 1,\pm 2\}$$ Exercise $$\PageIndex{6}$$: $$\{(\sqrt{j},j)\ |j=0,1,4,9\}$$ Exercise $$\PageIndex{7}$$: $$\{(x,-2)\ |x>-4\}$$ Exercise $$\PageIndex{8}$$: $$\{(x,3)\ |x\leq4\}$$ Exercise $$\PageIndex{9}$$: $$\{(-1,y)\ |y>1\}$$ Exercise $$\PageIndex{10}$$: $$\{(2,y)\ |y\leq5\}$$ Exercise $$\PageIndex{11}$$: $$\{(-2,y)\ |-3<y\leq4\}$$ Exercise $$\PageIndex{12}$$: $$\{(3,y)\ |-4\leq y<3\}$$ Exercise $$\PageIndex{13}$$: $$\{(x,2)|\ -2\leq x<3\}$$ Exercise $$\PageIndex{14}$$: $$\{(x,-3)|\ -4<x\leq3\}$$ Exercise $$\PageIndex{15}$$: $$\{(x,y)\ |x>-2\}$$ Exercise $$\PageIndex{16}$$: $$\{(x,y)\ |x\leq3\}$$ Exercise $$\PageIndex{17}$$: $$\{(x,y)\ |y<4\}$$ Exercise $$\PageIndex{18}$$: $$\{(x,y)\ |x\leq3,y<2\}$$ Exercise $$\PageIndex{19}$$: $$\{(x,y)\ |x>0,y<4\}$$ Exercise $$\PageIndex{20}$$: $$\{(x,y)\ |-\sqrt{2}\leq x\leq \frac{2}{3},\pi <y\leq \frac{9}{2}\}$$ In Exercises 21 - 30, describe the given relation using either the roster or set-builder method. Exercise $$\PageIndex{21}$$: Exercise $$\PageIndex{22}$$: Exercise $$\PageIndex{23}$$: Exercise $$\PageIndex{24}$$: Exercise $$\PageIndex{25}$$: Exercise $$\PageIndex{26}$$: Exercise $$\PageIndex{27}$$: Exercise $$\PageIndex{28}$$: Exercise $$\PageIndex{29}$$: Exercise $$\PageIndex{30}$$: In Exercises 31 - 36, graph the given line.
Exercise $$\PageIndex{31}$$: $$x=-2$$ Exercise $$\PageIndex{32}$$: $$x=3$$ Exercise $$\PageIndex{33}$$: $$y=3$$ Exercise $$\PageIndex{34}$$: $$y=-2$$ Exercise $$\PageIndex{35}$$: $$x=0$$ Exercise $$\PageIndex{36}$$: $$y=0$$ Some relations are fairly easy to describe in words or with the roster method but are rather difficult, if not impossible, to graph. Discuss with your classmates how you might graph the relations given in Exercises 37 - 40. Please note that in the notation below we are using the ellipsis, . . . , to denote that the list does not end, but rather, continues to follow the established pattern indefinitely. For the relations in Exercises 37 and 38, give two examples of points which belong to the relation and two points which do not belong to the relation. Exercise $$\PageIndex{37}$$: $$\{ (x,y)\ |x\ \text{is an odd integer, and } y\ \text{is an even integer.}\}$$ Exercise $$\PageIndex{38}$$: $$\{(x,\ 1)\ |x\text{ is an irrational number }\}$$ Exercise $$\PageIndex{39}$$: $$\{ (1,\ 0),\ (2,\ 1),\ (4,\ 2),\ (8,\ 3),\ (16,\ 4),\ (32,\ 5),...\}$$ Exercise $$\PageIndex{40}$$: $$\{...,(-3,\ 9),(-2,\ 4),(-1,\ 1),(0,\ 0),(1,\ 1),(2,\ 4),(3,\ 9)...\}$$ For each equation given in Exercises 41 - 52: • Find the $$x$$- and $$y$$-intercept(s) of the graph, if any exist. • Follow the procedure in Example 1.2.3 to create a table of sample points on the graph of the equation. • Plot the sample points and create a rough sketch of the graph of the equation. • Test for symmetry.
If the equation appears to fail any of the symmetry tests, find a point on the graph of the equation whose reflection fails to be on the graph, as was done at the end of Example 1.2.4. Exercise $$\PageIndex{41}$$: $$y=x^{2}+1$$ Exercise $$\PageIndex{42}$$: $$y=x^{2}\ -\ 2x\ -\ 8$$ Exercise $$\PageIndex{43}$$: $$y=x^{3}-x$$ Exercise $$\PageIndex{44}$$: $$y=\frac{x^{3}}{4}-3x$$ Exercise $$\PageIndex{45}$$: $$y=\sqrt{x-2}$$ Exercise $$\PageIndex{46}$$: $$y=2\sqrt{x+4}-2$$ Exercise $$\PageIndex{47}$$: $$3x\ -\ y=7$$ Exercise $$\PageIndex{48}$$: $$3x-2y=10$$ Exercise $$\PageIndex{49}$$: $$(x+2)^{2}+y^{2}=16$$ Exercise $$\PageIndex{50}$$: $$x^{2}-y^{2}=1$$ Exercise $$\PageIndex{51}$$: $$y=x^{2}+1$$ Exercise $$\PageIndex{52}$$: $$x^{3}y=-4$$ The procedures which we have outlined in the Examples of this section and used in Exercises 41 - 52 all rely on the fact that the equations were "well-behaved". Not everything in Mathematics is quite so tame, as the following equations will show you. Discuss with your classmates how you might approach graphing the equations given in Exercises 53 - 56. What difficulties arise when trying to apply the various tests and procedures given in this section? For more information, including pictures of the curves, each curve name is a link to its page at www.wikipedia.org.
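The symmetry tests applied in Exercises 41 - 52 can also be checked numerically. A minimal sketch (the sample points, tolerance, and function names are my own choices, not part of the text): a graph of $$y = f(x)$$ is symmetric about the $$y$$-axis, the $$x$$-axis, or the origin when each sampled point $$(x,\ y)$$ forces $$(-x,\ y)$$, $$(x,\ -y)$$, or $$(-x,\ -y)$$, respectively, to lie on the graph as well.

```python
# Sample points on y = f(x) and test whether each reflected point
# still satisfies the equation. Tolerance and sample range are arbitrary.

def symmetries(f, xs, tol=1e-9):
    on_graph = lambda x, y: abs(f(x) - y) < tol
    pts = [(x, f(x)) for x in xs]
    return {
        "y-axis": all(on_graph(-x, y) for x, y in pts),
        "x-axis": all(on_graph(x, -y) for x, y in pts),
        "origin": all(on_graph(-x, -y) for x, y in pts),
    }

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
print(symmetries(lambda x: x**2 + 1, xs))  # Exercise 41: y-axis only
print(symmetries(lambda x: x**3 - x, xs))  # Exercise 43: origin only
```

For curves like those in Exercises 53 - 56, which are not graphs of functions, this sketch does not apply directly; the implicit equation itself would have to be tested at the reflected points instead.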
Exercise $$\PageIndex{53}$$: Folium of Descartes: $$x^{3}+y^{3}-3xy\ =\ 0$$ Exercise $$\PageIndex{54}$$: Kampyle of Eudoxus: $$x^{4}\ =\ x^{2}+y^{2}$$ Exercise $$\PageIndex{55}$$: Tschirnhausen cubic: $$y^{2}\ =\ x^{3}+3x^{2}$$ Exercise $$\PageIndex{56}$$: Crooked egg: $$(x^{2}+y^{2})^{2}\ =\ x^{3}+y^{3}$$ Exercise $$\PageIndex{57}$$: With the help of your classmates, find examples of equations whose graphs possess • symmetry about the $$x$$-axis only • symmetry about the $$y$$-axis only • symmetry about the origin only • symmetry about the $$x$$-axis, $$y$$-axis, and origin Can you find an example of an equation whose graph possesses exactly two of the symmetries listed above? Why or why not?

## 1.3: Introduction to Functions

In Exercises 1 - 12, determine whether or not the relation represents $$y$$ as a function of $$x$$. Find the domain and range of those relations which are functions. Exercise $$\PageIndex{1}$$: $$\{(-3,9),\ (-2,4),\ (-1,1),\ (0,0),\ (1,1),\ (2,4),\ (3,9)\}$$ Exercise $$\PageIndex{2}$$: $$\{(-3,0),\ (1,6),\ (2,-3),\ (4,2),\ (-5,6),\ (4,-9),\ (6,2)\}$$ Exercise $$\PageIndex{3}$$: $$\{(-3,0),\ (-7,6),\ (5,5),\ (6,4),\ (4,-9),\ (3,0)\}$$ Exercise $$\PageIndex{4}$$: $$\{(1,2),\ (4,4),\ (9,6),\ (16,8),\ (25,10),\ (36,12),...\}$$ Exercise $$\PageIndex{5}$$: $$\{(x,y)\ |x \text{ is an odd integer, and }y\text{ is an even integer}\}$$ Exercise $$\PageIndex{6}$$: $$\{(x,1)\ |x \text{ is an irrational number}\}$$ Exercise $$\PageIndex{7}$$: $$\{(1,0),\ (2,1),\ (4,2),\ (8,3),\ (16,4),\ (32,5),...\}$$ Exercise $$\PageIndex{8}$$: $$\{...,\ (-3,9),\ (-2,4),\ (-1,1),\ (0,0),\ (1,1),\ (2,4),\ (3,9),\ ...\}$$ Exercise $$\PageIndex{9}$$: $$\{ (-2,y)|-3<y<4\}$$ Exercise $$\PageIndex{10}$$: $$\{ (x,3)|-2\leq x <4\}$$ Exercise $$\PageIndex{11}$$: $$\{(x,x^{2})\ |x \text{ is a real number}\}$$ Exercise $$\PageIndex{12}$$: $$\{(x^{2},x)\ |x \text{ is a real number}\}$$ In Exercises 13 - 32, determine whether or not the relation represents y as a function
of x. Find the domain and range of those relations which are functions. Exercise $$\PageIndex{13}$$: Exercise $$\PageIndex{14}$$: Exercise $$\PageIndex{15}$$: Exercise $$\PageIndex{16}$$: Exercise $$\PageIndex{17}$$: Exercise $$\PageIndex{18}$$: Exercise $$\PageIndex{19}$$: Exercise $$\PageIndex{20}$$: Exercise $$\PageIndex{21}$$: Exercise $$\PageIndex{22}$$: Exercise $$\PageIndex{23}$$: Exercise $$\PageIndex{24}$$: Exercise $$\PageIndex{25}$$: Exercise $$\PageIndex{26}$$: Exercise $$\PageIndex{27}$$: Exercise $$\PageIndex{28}$$: Exercise $$\PageIndex{29}$$: Exercise $$\PageIndex{30}$$: Exercise $$\PageIndex{31}$$: Exercise $$\PageIndex{32}$$: In Exercises 33 - 47, determine whether or not the equation represents $$y$$ as a function of $$x$$. Exercise $$\PageIndex{33}$$: $$y\ =\ x^{3}\ -\ x$$ Exercise $$\PageIndex{34}$$: $$y\ =\ \sqrt{x-2}$$ Exercise $$\PageIndex{35}$$: $$x^{3}y\ =\ -4$$ Exercise $$\PageIndex{36}$$: $$x^{2}\ -\ y^{2}\ =\ 1$$ Exercise $$\PageIndex{37}$$: $$y=\frac{x}{x^{2}-9}$$ Exercise $$\PageIndex{38}$$: $$x\ =\ -6$$ Exercise $$\PageIndex{39}$$: $$x\ =\ y^{2}\ +\ 4$$ Exercise $$\PageIndex{40}$$: $$y\ =\ x^{2}\ +\ 4$$ Exercise $$\PageIndex{41}$$: $$x^{2}\ +\ y^{2}\ =\ 4$$ Exercise $$\PageIndex{42}$$: $$y\ =\ \sqrt{4-x^{2}}$$ Exercise $$\PageIndex{43}$$: $$x^{2}\ -\ y^{2}\ =\ 4$$ Exercise $$\PageIndex{44}$$: $$x^{3}\ +\ y^{3}\ =\ 4$$ Exercise $$\PageIndex{45}$$: $$2x\ +\ 3y\ =\ 4$$ Exercise $$\PageIndex{46}$$: $$2xy\ =\ 4$$ Exercise $$\PageIndex{47}$$: $$x^{2}\ =\ y^{2}$$ Exercise $$\PageIndex{48}$$: Explain why the population $$P$$ of Sasquatch in a given area is a function of time $$t$$. What would be the range of this function? Exercise $$\PageIndex{49}$$: Explain why the relation between your classmates and their email addresses may not be a function. What about phone numbers and Social Security Numbers? 
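For finite relations like those in Exercises 1 - 12 above, the test for whether $$y$$ is a function of $$x$$ can be carried out mechanically: a relation fails exactly when some $$x$$-coordinate is paired with two different $$y$$-coordinates. A small sketch (the point sets are those of Exercises 1 and 2 of Section 1.2; the helper name is mine):

```python
# A relation (set of ordered pairs) represents y as a function of x
# exactly when no x-value appears with two different y-values.

def is_function(relation):
    seen = {}
    for x, y in relation:
        if x in seen and seen[x] != y:
            return False  # same x, different y: fails the Vertical Line Test
        seen[x] = y
    return True

r1 = {(-3, 9), (-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4), (3, 9)}
r2 = {(-2, 0), (-1, 1), (-1, -1), (0, 2), (0, -2), (1, 3), (1, -3)}
print(is_function(r1))  # True:  each x appears with only one y
print(is_function(r2))  # False: x = -1 pairs with both 1 and -1
```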
The process given in Example 1.3.5 for determining whether an equation of a relation represents $$y$$ as a function of $$x$$ breaks down if we cannot solve the equation for $$y$$ in terms of $$x$$. However, that does not prevent us from proving that an equation fails to represent $$y$$ as a function of $$x$$. What we really need is two points with the same $$x$$-coordinate and different $$y$$-coordinates which both satisfy the equation so that the graph of the relation would fail the Vertical Line Test 1.1. Discuss with your classmates how you might find such points for the relations given in Exercises 50 - 53. Exercise $$\PageIndex{50}$$: $$x^{3}\ +\ y^{3}\ -3xy\ =\ 0$$ Exercise $$\PageIndex{51}$$: $$x^{4}\ =\ x^{2}\ +\ y^{2}$$ Exercise $$\PageIndex{52}$$: $$y^{2}\ =\ x^{3}\ +\ 3x^{2}$$ Exercise $$\PageIndex{53}$$: $$(x^{2}+y^{2})^{2}\ =\ x^{3}\ +\ y^{3}$$

## 1.4: Function Notation

In Exercises 1 - 10, find an expression for $$f(x)$$ and state its domain. Exercise $$\PageIndex{1}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) multiply by 2; (2) add 3; (3) divide by 4. Exercise $$\PageIndex{2}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) add 3; (2) multiply by 2; (3) divide by 4. Exercise $$\PageIndex{3}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) divide by 4; (2) add 3; (3) multiply by 2. Exercise $$\PageIndex{4}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) multiply by 2; (2) add 3; (3) take the square root. Exercise $$\PageIndex{5}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) add 3; (2) multiply by 2; (3) take the square root.
Exercise $$\PageIndex{6}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) add 3; (2) take the square root; (3) multiply by 2. Exercise $$\PageIndex{7}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) take the square root; (2) subtract 13; (3) make the quantity the denominator of a fraction with numerator 4. Exercise $$\PageIndex{8}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) subtract 13; (2) take the square root; (3) make the quantity the denominator of a fraction with numerator 4. Exercise $$\PageIndex{9}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) take the square root; (2) make the quantity the denominator of a fraction with numerator 4; (3) subtract 13. Exercise $$\PageIndex{10}$$: $$f$$ is a function that takes a real number $$x$$ and performs the following three steps in the order given: (1) make the quantity the denominator of a fraction with numerator 4; (2) take the square root; (3) subtract 13. 
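The "three steps" descriptions above translate directly into composed operations. As a sketch, here are Exercises 1 and 6 (the function names `f1` and `f6` are mine): Exercise 1 gives $$f(x) = \frac{2x+3}{4}$$ with domain all real numbers, while Exercise 6 gives $$f(x) = 2\sqrt{x+3}$$ with domain $$x \geq -3$$.

```python
import math

# Exercise 1: (1) multiply by 2; (2) add 3; (3) divide by 4
def f1(x):
    return (2 * x + 3) / 4       # domain: all real numbers

# Exercise 6: (1) add 3; (2) take the square root; (3) multiply by 2
def f6(x):
    return 2 * math.sqrt(x + 3)  # domain: x >= -3

print(f1(5))  # (10 + 3)/4 = 3.25
print(f6(1))  # 2*sqrt(4)  = 4.0
```

The order of the steps matters: performing the same three operations in a different order, as Exercises 2 - 5 and 7 - 10 ask, changes both the formula and (when a square root or division is involved) the domain.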
In Exercises 11 - 18, use the given function $$f$$ to find and simplify the following: $$f(3)$$ $$f(4x)$$ $$f(x\ -\ 4)$$ $$f(-1)$$ $$4f(x)$$ $$f(x)\ -\ 4$$ $$f(\frac{3}{2})$$ $$f(-x)$$ $$f(x^{2})$$ Exercise $$\PageIndex{11}$$: $$f(x)\ =\ 2x\ +\ 1$$ Exercise $$\PageIndex{12}$$: $$f(x)\ =\ 3\ -\ 4x$$ Exercise $$\PageIndex{13}$$: $$f(x)\ =\ 2\ -\ x^{2}$$ Exercise $$\PageIndex{14}$$: $$f(x)\ =\ x^{2}\ -\ 3x\ +\ 2$$ Exercise $$\PageIndex{15}$$: $$f(x)\ =\ \frac{x}{x-1}$$ Exercise $$\PageIndex{16}$$: $$f(x)\ =\ \frac{2}{x^{3}}$$ Exercise $$\PageIndex{17}$$: $$f(x)\ =\ 6$$ Exercise $$\PageIndex{18}$$: $$f(x)\ = \ 0$$ In Exercises 19 - 26, use the given function $$f$$ to find and simplify the following: $$f(2)$$ $$2f(a)$$ $$f(\frac{2}{a})$$ $$f(-2)$$ $$4f(a\ +\ 2)$$ $$\frac{f(a)}{2}$$ $$f(2a)$$ $$f(a)\ +\ f(2)$$ $$f(a\ +\ h)$$ Exercise $$\PageIndex{19}$$: $$f(x)\ =\ 2x\ -\ 5$$ Exercise $$\PageIndex{20}$$: $$f(x)\ =\ 5\ -\ 2x$$ Exercise $$\PageIndex{21}$$: $$f(x)\ =\ 2x^{2}\ -\ 1$$ Exercise $$\PageIndex{22}$$: $$f(x)\ =\ 3x^{2}\ +\ 3x\ -\ 2$$ Exercise $$\PageIndex{23}$$: $$f(x)\ = \ \sqrt{2x\ +\ 1}$$ Exercise $$\PageIndex{24}$$: $$f(x)\ =\ 117$$ Exercise $$\PageIndex{25}$$: $$f(x)\ =\ \frac{x}{2}$$ Exercise $$\PageIndex{26}$$: $$f(x)\ = \ \frac{2}{x}$$ In Exercises 27 - 34, use the given function $$f$$ to find $$f(0)$$ and solve $$f(x) = 0$$. Exercise $$\PageIndex{27}$$: $$f(x)\ =\ 2x\ -\ 1$$ Exercise $$\PageIndex{28}$$: $$f(x)\ =\ 3\ -\ \frac{2}{5}x$$ Exercise $$\PageIndex{29}$$: $$f(x)\ =\ 2x^{2}\ -\ 6$$ Exercise $$\PageIndex{30}$$: $$f(x)\ =\ x^{2}\ -\ x\ -\ 12$$ Exercise $$\PageIndex{31}$$: $$f(x)\ = \ \sqrt{x\ +\ 4}$$ Exercise $$\PageIndex{32}$$: $$f(x)\ = \ \sqrt{1\ -\ 2x}$$ Exercise $$\PageIndex{33}$$: $$f(x)\ =\ \frac{3}{4\ -\ x}$$ Exercise $$\PageIndex{34}$$: $$f(x)\ =\ \frac{3x^{2}\ -\ 12x}{4\ -\ x^{2}}$$ Exercise $$\PageIndex{35}$$: Let $$f(x)\ =\ \begin{cases}x+5 &\text{if}\qquad x \leq-3\\\sqrt{9-x^{2}} &\text{if}\qquad -3<x\leq3\\-x+5 &\text{if}\qquad
x>3\end{cases}$$ Compute the following function values. Exercise $$\PageIndex{a}$$: $$f(-4)$$ Exercise $$\PageIndex{b}$$: $$f(-3)$$ Exercise $$\PageIndex{c}$$: $$f(3)$$ Exercise $$\PageIndex{d}$$: $$f(3.001)$$ Exercise $$\PageIndex{e}$$: $$f(-3.001)$$ Exercise $$\PageIndex{f}$$: $$f(2)$$ Exercise $$\PageIndex{36}$$: Let $$f(x)\ =\ \begin{cases}x^{2} &\text{if}\qquad x \leq-1\\\sqrt{1-x^{2}} &\text{if}\qquad -1<x\leq1\\x &\text{if}\qquad x>1\end{cases}$$ Compute the following function values. Exercise $$\PageIndex{a}$$: $$f(4)$$ Exercise $$\PageIndex{b}$$: $$f(-3)$$ Exercise $$\PageIndex{c}$$: $$f(1)$$ Exercise $$\PageIndex{d}$$: $$f(0)$$ Exercise $$\PageIndex{e}$$: $$f(-1)$$ Exercise $$\PageIndex{f}$$: $$f(-0.999)$$ In Exercises 37 - 62, find the (implied) domain of the function. Exercise $$\PageIndex{37}$$: $$f(x)\ =\ x^{4}\ -\ 13x^{3}\ +\ 56x^{2}\ -\ 19$$ Exercise $$\PageIndex{38}$$: $$f(x)\ =\ x^{2}\ +\ 4$$ Exercise $$\PageIndex{39}$$: $$f(x)\ =\ \frac{x\ -\ 2}{x\ +\ 1}$$ Exercise $$\PageIndex{40}$$: $$f(x)\ =\ \frac{3x}{x^{2}\ +\ x\ -\ 2}$$ Exercise $$\PageIndex{41}$$: $$f(x)\ =\ \frac{2x}{x^{3}\ +\ 3}$$ Exercise $$\PageIndex{42}$$: $$f(x)\ =\ \frac{2x}{x^{3}\ -\ 3}$$ Exercise $$\PageIndex{43}$$: $$f(x)\ =\ \frac{x\ +\ 4}{x^{2}\ -\ 36}$$ Exercise $$\PageIndex{44}$$: $$f(x)\ =\ \frac{x\ -\ 2}{x\ +\ 2}$$ Exercise $$\PageIndex{45}$$: $$f(x)\ =\ \sqrt{3\ -\ x}$$ Exercise $$\PageIndex{46}$$: $$f(x)\ =\ \sqrt{2x\ +\ 5}$$ Exercise $$\PageIndex{47}$$: $$f(x)\ =\ 9x\sqrt{x\ +\ 3}$$ Exercise $$\PageIndex{48}$$: $$f(x)\ =\ \frac{\sqrt{7\ -\ x}}{x^{2}\ +\ 1}$$ Exercise $$\PageIndex{49}$$: $$f(x)\ =\ \sqrt{6x\ -\ 2}$$ Exercise $$\PageIndex{50}$$: $$f(x)\ =\ \frac{6}{\sqrt{6x\ -\ 2}}$$ Exercise $$\PageIndex{51}$$: $$f(x)\ =\ \sqrt[3]{6x\ -\ 2}$$ Exercise $$\PageIndex{52}$$: $$f(x)\ =\ \frac{6}{4\ -\ \sqrt{6x\ -\ 2}}$$ Exercise $$\PageIndex{53}$$: $$f(x)\ =\ \frac{\sqrt{6x\ -\ 2}}{x^{2}\ -\ 36}$$ Exercise $$\PageIndex{54}$$: $$f(x)\ =\ \frac{\sqrt[3]{6x\ -\ 2}}{x^{2}\ +\ 
36}$$ Exercise $$\PageIndex{55}$$: $$s(t)\ =\ \frac{t}{t-8}$$ Exercise $$\PageIndex{56}$$: $$Q(r)\ =\ \frac{\sqrt{r}}{r-8}$$ Exercise $$\PageIndex{57}$$: $$b(\theta)=\frac{\theta}{\sqrt{\theta\ -\ 8}}$$ Exercise $$\PageIndex{58}$$: $$A(x)\ =\ \sqrt{x\ -\ 7}\ +\ \sqrt{9\ -\ x}$$ Exercise $$\PageIndex{59}$$: $$\alpha(y)\ =\ \sqrt[3]{\frac{y}{y-8}}$$ Exercise $$\PageIndex{60}$$: $$g(v)\ =\ \frac{1}{4\ -\ \frac{1}{v^{2}}}$$ Exercise $$\PageIndex{61}$$: $$T(t)\ =\ \frac{\sqrt{t}\ -\ 8}{5\ -\ t}$$ Exercise $$\PageIndex{62}$$: $$u(w)\ =\ \frac{w\ -\ 8}{5\ -\ \sqrt{w}}$$ Exercise $$\PageIndex{63}$$: The area $$A$$ enclosed by a square, in square inches, is a function of the length of one of its sides $$x$$, when measured in inches. This relation is expressed by the formula $$A(x)\ =\ x^{2}$$ for $$x\ >\ 0$$. Find $$A(3)$$ and solve $$A(x)\ =\ 36$$. Interpret your answers to each. Why is $$x$$ restricted to $$x\ >\ 0$$? Exercise $$\PageIndex{64}$$: The area $$A$$ enclosed by a square, in square inches, is a function of the length of one of its sides $$x$$, when measured in inches. This relation is expressed by the formula $$A(x)\ =\ x^{2}$$ for $$x\ >\ 0$$. Find $$A(3)$$ and solve $$A(x)\ =\ 36$$. Interpret your answers to each. Why is $$x$$ restricted to $$x\ >\ 0$$? Exercise $$\PageIndex{65}$$: The volume $$V$$ enclosed by a cube, in cubic centimeters, is a function of the length of one of its sides $$x$$, when measured in centimeters. This relation is expressed by the formula $$V(x)\ =\ x^{3}$$ for $$x\ >\ 0$$. Find $$V(5)$$ and solve $$V(x)\ =\ 27$$. Interpret your answers to each. Why is $$x$$ restricted to $$x\ >\ 0$$? Exercise $$\PageIndex{66}$$: The volume $$V$$ enclosed by a sphere, in cubic feet, is a function of the radius of the sphere $$r$$, when measured in feet. This relation is expressed by the formula $$V(r)\ =\ \frac{4\pi}{3}r^{3}$$ for $$r\ >\ 0$$. Find $$V(3)$$ and solve $$V(r)\ =\ \frac{32\pi}{3}$$. Interpret your answers to each.
Why is $$r$$ restricted to $$r\ >\ 0$$? Exercise $$\PageIndex{67}$$: The height of an object dropped from the roof of an eight story building is modeled by: $$h(t)\ =\ -16t^{2}\ +\ 64,\ 0\ \leq\ t\ \leq\ 2$$. Here, $$h$$ is the height of the object off the ground, in feet, $$t$$ seconds after the object is dropped. Find $$h(0)$$ and solve $$h(t)\ =\ 0$$. Interpret your answers to each. Why is $$t$$ restricted to $$0\ \leq\ t\ \leq\ 2$$? Exercise $$\PageIndex{68}$$: The temperature $$T$$ in degrees Fahrenheit $$t$$ hours after 6 AM is given by $$T(t) = -\frac{1}{2}t^{2}\ +\ 8t\ +\ 3$$ for $$0\ \leq\ t\ \leq\ 12$$. Find and interpret $$T(0),\ T(6)$$ and $$T(12)$$. Exercise $$\PageIndex{69}$$: The function $$C(x)\ =\ x^{2}\ -\ 10x\ +\ 27$$ models the cost, in hundreds of dollars, to produce $$x$$ thousand pens. Find and interpret $$C(0),\ C(2)$$ and $$C(5)$$. Exercise $$\PageIndex{70}$$: Using data from the Bureau of Transportation Statistics, the average fuel economy $$F$$ in miles per gallon for passenger cars in the US can be modeled by $$F(t) = -0.0076t^{2}\ +\ 0.45t\ +\ 16,\ 0 \leq t \leq 28$$, where $$t$$ is the number of years since 1980. Use your calculator to find $$F(0), F(14)$$ and $$F(28)$$. Round your answers to two decimal places and interpret your answers to each. Exercise $$\PageIndex{71}$$: The population of Sasquatch in Portage County can be modeled by the function $$P(t) = \frac{150t}{t+15}$$, where $$t$$ represents the number of years since 1803. Find and interpret $$P(0)$$ and $$P(205)$$. Discuss with your classmates what the applied domain and range of $$P$$ should be. Exercise $$\PageIndex{72}$$: For $$n$$ copies of the book *Me and my Sasquatch*, a print-on-demand company charges $$C(n)$$ dollars, where $$C(n)$$ is determined by the formula $$C(n)=\begin{cases}15n & \text{if} & 1\leq n\leq25 \\13.50n & \text{if} & 25<n\leq50 \\ 12n & \text{if} & n>50\end{cases}$$ Exercise $$\PageIndex{A}$$: Find and interpret $$C(20)$$.
Exercise $$\PageIndex{B}$$: How much does it cost to order 50 copies of the book? What about 51 copies? Exercise $$\PageIndex{C}$$: Your answer to 72b should get you thinking. Suppose a bookstore estimates it will sell 50 copies of the book. How many books can, in fact, be ordered for the same price as those 50 copies? (Round your answer to a whole number of books.) Exercise $$\PageIndex{73}$$: An on-line comic book retailer charges shipping costs according to the following formula $$S(n)=\begin{cases}1.5n+2.5 & \text{if} & 1\leq n\leq14 \\0 & \text{if} & n\geq15\end{cases}$$ where $$n$$ is the number of comic books purchased and $$S(n)$$ is the shipping cost in dollars. Exercise $$\PageIndex{A}$$: What is the cost to ship 10 comic books? Exercise $$\PageIndex{B}$$: What is the significance of the formula $$S(n) = 0$$ for $$n \geq 15$$? Exercise $$\PageIndex{74}$$: The cost $$C$$ (in dollars) to talk $$m$$ minutes a month on a mobile phone plan is modeled by $$C(m)=\begin{cases}25 & \text{if} & 0\leq m\leq1000 \\25\ +\ 0.1(m\ -\ 1000) & \text{if} & m>1000\end{cases}$$ Exercise $$\PageIndex{A}$$: How much does it cost to talk 750 minutes per month with this plan? Exercise $$\PageIndex{B}$$: How much does it cost to talk 20 hours per month with this plan? Exercise $$\PageIndex{75}$$: In Section 1.1.1 we defined the set of integers as $$Z\ =\ \{...\ -3,\ -2,\ -1,\ 0,\ 1,\ 2,\ 3,...\}$$ The greatest integer of $$x$$, denoted by $$\lfloor x \rfloor$$, is defined to be the largest integer $$k$$ with $$k \leq x$$. Exercise $$\PageIndex{A}$$: Find $$\lfloor 0.785 \rfloor,\ \lfloor 117 \rfloor,\ \lfloor -2.001 \rfloor,\ \text{and } \lfloor \pi+6 \rfloor$$ Exercise $$\PageIndex{B}$$: Discuss with your classmates how $$\lfloor x \rfloor$$ may be described as a piecewise defined function. HINT: There are infinitely many pieces! Exercise $$\PageIndex{C}$$: Is $$\lfloor a+b \rfloor\ =\ \lfloor a \rfloor+\lfloor b \rfloor$$ always true?
What if $$a$$ or $$b$$ is an integer? Test some values, make a conjecture, and explain your result. Exercise $$\PageIndex{76}$$: We have through our examples tried to convince you that, in general, $$f(a + b)\ \neq\ f(a)\ +\ f(b)$$. It has been our experience that students refuse to believe us so we'll try again with a different approach. With the help of your classmates, find a function $$f$$ for which the following properties are always true. Exercise $$\PageIndex{A}$$: $$f(0)\ =\ f(-1\ +\ 1)\ =\ f(-1)\ +\ f(1)$$ Exercise $$\PageIndex{B}$$: $$f(5)\ =\ f(2 + 3)\ =\ f(2)\ +\ f(3)$$ Exercise $$\PageIndex{C}$$: $$f(-6)\ =\ f(0\ -\ 6)\ =\ f(0)\ -\ f(6)$$ Exercise $$\PageIndex{D}$$: $$f(a\ +\ b)\ =\ f(a)\ +\ f(b)$$ How many functions did you find that failed to satisfy the conditions above? Did $$f(x)\ =\ x^{2}$$ work? What about $$f(x)\ =\ \sqrt{x}$$ or $$f(x)\ =\ 3x\ +\ 7$$ or $$f(x)\ =\ \frac{1}{x}$$? Did you find an attribute common to those functions that did succeed? You should have, because there is only one extremely special family of functions that actually works here. Thus we return to our previous statement, in general, $$f(a\ +\ b)\ \neq\ f(a)\ +\ f(b)$$.

## 1.5: Function Arithmetic

In Exercises 1 - 10, use the pair of functions $$f$$ and $$g$$ to find the following values if they exist.
$$(f\ +\ g)(2)$$ $$(f-g)(-1)$$ $$(g-f)(1)$$ $$(fg)(\frac{1}{2})$$ $$(\frac{f}{g})(0)$$ $$(\frac{g}{f})(-2)$$ Exercise $$\PageIndex{1}$$: $$f(x)\ =\ 3x\ +\ 1$$ and $$g(x)\ =\ 4\ -\ x$$ Exercise $$\PageIndex{2}$$: $$f(x)\ =\ x^{2}$$ and $$g(x)\ =\ -2x\ +\ 1$$ Exercise $$\PageIndex{3}$$: $$f(x)\ =\ x^{2}\ -\ x$$ and $$g(x)\ =\ 12\ -\ x^{2}$$ Exercise $$\PageIndex{4}$$: $$f(x)\ =\ 2x^{3}$$ and $$g(x)\ =\ -x^{2}\ -\ 2x\ -\ 3$$ Exercise $$\PageIndex{5}$$: $$f(x)\ =\ \sqrt{x+3}$$ and $$g(x)\ =\ 2x\ -\ 1$$ Exercise $$\PageIndex{6}$$: $$f(x)\ =\ \sqrt{4-x}$$ and $$g(x)\ =\ \sqrt{x+2}$$ Exercise $$\PageIndex{7}$$: $$f(x)\ =\ 2x$$ and $$g(x)\ =\ \frac{1}{2x\ +\ 1}$$ Exercise $$\PageIndex{8}$$: $$f(x)\ =\ x^{2}$$ and $$g(x)\ =\ \frac{3}{2x\ -\ 3}$$ Exercise $$\PageIndex{9}$$: $$f(x)\ =\ x^{2}$$ and $$g(x)\ =\ \frac{1}{x^{2}}$$ Exercise $$\PageIndex{10}$$: $$f(x)\ =\ x^{2}\ +\ 1$$ and $$g(x)\ =\ \frac{1}{x^{2}\ +\ 1}$$ In Exercises 11 - 20, use the pair of functions $$f$$ and $$g$$ to find the domain of the indicated function then find and simplify an expression for it.
$$(f\ +\ g)(x)$$ $$(f\ -\ g)(x)$$ $$(fg)(x)$$ $$(\frac{f}{g})(x)$$ Exercise $$\PageIndex{11}$$: $$f(x)\ =\ 2x\ +\ 1$$ and $$g(x)\ =\ x\ -\ 2$$ Exercise $$\PageIndex{12}$$: $$f(x)\ =\ 1\ -\ 4x$$ and $$g(x)\ =\ 2x\ -\ 1$$ Exercise $$\PageIndex{13}$$: $$f(x)\ =\ x^{2}$$ and $$g(x)\ =\ 3x\ -\ 1$$ Exercise $$\PageIndex{14}$$: $$f(x)\ =\ x^{2}\ -\ x$$ and $$g(x)\ =\ 7x$$ Exercise $$\PageIndex{15}$$: $$f(x)\ =\ x^{2}\ -\ 4$$ and $$g(x)\ =\ 3x\ +\ 6$$ Exercise $$\PageIndex{16}$$: $$f(x)\ =\ -x^{2}\ +\ x\ +\ 6$$ and $$g(x)\ =\ x^{2}\ -\ 9$$ Exercise $$\PageIndex{17}$$: $$f(x)\ =\ \frac{x}{2}$$ and $$g(x)\ =\ \frac{2}{x}$$ Exercise $$\PageIndex{18}$$: $$f(x)\ =\ x\ -\ 1$$ and $$g(x)\ =\ \frac{1}{x\ -\ 1}$$ Exercise $$\PageIndex{19}$$: $$f(x)\ =\ x$$ and $$g(x)\ =\ \sqrt{x\ +\ 1}$$ Exercise $$\PageIndex{20}$$: $$f(x)\ =\ \sqrt{x\ -\ 5}$$ and $$g(x)\ =\ \sqrt{x\ -\ 5}$$ In Exercises 21 - 45, find and simplify the difference quotient $$\frac{f(x + h) - f(x)}{h}$$ for the given function. Exercise $$\PageIndex{21}$$: $$f(x)\ =\ 2x\ -\ 5$$ Exercise $$\PageIndex{22}$$: $$f(x)\ =\ -3x\ +\ 5$$ Exercise $$\PageIndex{23}$$: $$f(x)\ =\ 6$$ Exercise $$\PageIndex{24}$$: $$f(x)\ =\ 3x^{2}\ -\ x$$ Exercise $$\PageIndex{25}$$: $$f(x)\ =\ -x^{2}\ +\ 2x\ -\ 1$$ Exercise $$\PageIndex{26}$$: $$f(x)\ =\ 4x^{2}$$ Exercise $$\PageIndex{27}$$: $$f(x)\ =\ x\ -\ x^{2}$$ Exercise $$\PageIndex{28}$$: $$f(x)\ =\ x^{3}\ +\ 1$$ Exercise $$\PageIndex{29}$$: $$f(x)\ =\ mx\ +\ b\ \text{where } m \neq 0$$ Exercise $$\PageIndex{30}$$: $$f(x)\ =\ ax^{2}\ +\ bx\ +\ c\ \text{where } a \neq 0$$ Exercise $$\PageIndex{31}$$: $$f(x)\ =\ \frac{2}{x}$$ Exercise $$\PageIndex{32}$$: $$f(x)\ =\ \frac{3}{1\ -\ x}$$ Exercise $$\PageIndex{33}$$: $$f(x)\ =\ \frac{1}{x^{2}}$$ Exercise $$\PageIndex{34}$$: $$f(x)\ =\ \frac{2}{x\ +\ 5}$$ Exercise $$\PageIndex{35}$$: $$f(x)\ =\ \frac{1}{4x\ -\ 3}$$ Exercise $$\PageIndex{36}$$: $$f(x)\ =\ \frac{3x}{x\ +\ 1}$$ Exercise $$\PageIndex{37}$$: $$f(x)\ =\ \frac{x}{x\ -\ 9}$$
Exercise $$\PageIndex{38}$$: $$f(x)\ =\ \frac{x^{2}}{2x\ +\ 1}$$ Exercise $$\PageIndex{39}$$: $$f(x)\ =\ \sqrt{x\ -\ 9}$$ Exercise $$\PageIndex{40}$$: $$f(x)\ =\ \sqrt{2x\ +\ 1}$$ Exercise $$\PageIndex{41}$$: $$f(x)\ =\ \sqrt{-4x\ +\ 5}$$ Exercise $$\PageIndex{42}$$: $$f(x)\ =\ \sqrt{4\ -\ x}$$ Exercise $$\PageIndex{43}$$: $$f(x)\ =\ \sqrt{ax\ +\ b}\ \text{where } a \neq 0$$ Exercise $$\PageIndex{44}$$: $$f(x)\ =\ x\sqrt{x}$$ Exercise $$\PageIndex{45}$$: $$f(x)\ =\ \sqrt[3]{x}$$ HINT: $$(a\ -\ b)(a^{2}\ +\ ab\ +\ b^{2})\ =\ a^{3}-b^{3}$$ In Exercises 46 - 50, $$C(x)$$ denotes the cost to produce $$x$$ items and $$p(x)$$ denotes the price-demand function in the given economic scenario. In each Exercise, do the following: Find and interpret $$C(0)$$. Find and interpret $$\overline{C}(10)$$. Find and interpret $$p(5)$$. Find and interpret $$R(x)$$. Find and interpret $$P(x)$$. Solve $$P(x)\ =\ 0$$ and interpret. Exercise $$\PageIndex{46}$$: The cost, in dollars, to produce $$x$$ "I'd rather be a Sasquatch" T-Shirts is $$C(x)\ =\ 2x\ +\ 26$$, $$x\ \geq\ 0$$ and the price-demand function, in dollars per shirt, is $$p(x)\ =\ 90\ -\ 3x,\ 0\leq x\leq 30$$. Exercise $$\PageIndex{47}$$: The cost, in dollars, to produce $$x$$ bottles of 100% All-Natural Certified Free-Trade Organic Sasquatch Tonic is $$C(x)\ =\ 10x\ +\ 100, \ x\geq 0$$ and the price-demand function, in dollars per bottle, is $$p(x)\ =\ 90\ -\ 3x,\ 0\leq x\leq 30$$. Exercise $$\PageIndex{48}$$: The cost, in cents, to produce $$x$$ cups of Mountain Thunder Lemonade at Junior's Lemonade Stand is $$C(x)\ =\ 18x\ +\ 240$$, $$x\geq 0$$ and the price-demand function, in cents per cup, is $$p(x)\ =\ 90\ -\ 3x$$, $$0\leq x \leq 30$$. Exercise $$\PageIndex{49}$$: The daily cost, in dollars, to produce $$x$$ Sasquatch Berry Pies is $$C(x)\ =\ 3x\ +\ 36$$, $$x\geq 0$$ and the price-demand function, in dollars per pie, is $$p(x)\ =\ 12\ -\ 0.5x$$, $$0\leq x\leq 24$$.
Exercise $$\PageIndex{50}$$: The monthly cost, in hundreds of dollars, to produce $$x$$ custom built electric scooters is $$C(x)\ =\ 20x\ +\ 1000$$, $$x \geq 0$$ and the price-demand function, in hundreds of dollars per scooter, is $$p(x)\ =\ 140\ -\ 2x$$, $$0\leq x\leq 70$$. In Exercises 51 - 62, let $$f$$ be the function defined by $$f=\{(-3,4),(-2,2),(-1,0),(0,1),(1,3),(2,4),(3,-1)\}$$ and let $$g$$ be the function defined by $$g=\{(-3,-2),(-2,0),(-1,-4),(0,0),(1,-3),(2,1),(3,2)\}$$ Compute the indicated value if it exists. Exercise $$\PageIndex{51}$$: $$(f\ +\ g)(-3)$$ Exercise $$\PageIndex{52}$$: $$(f\ -\ g)(2)$$ Exercise $$\PageIndex{53}$$: $$(fg)(-1)$$ Exercise $$\PageIndex{54}$$: $$(g\ +\ f)(1)$$ Exercise $$\PageIndex{55}$$: $$(g\ -\ f)(3)$$ Exercise $$\PageIndex{56}$$: $$(gf)(-3)$$ Exercise $$\PageIndex{57}$$: $$(\frac{f}{g})(-2)$$ Exercise $$\PageIndex{58}$$: $$(\frac{f}{g})(-1)$$ Exercise $$\PageIndex{59}$$: $$(\frac{f}{g})(2)$$ Exercise $$\PageIndex{60}$$: $$(\frac{g}{f})(-1)$$ Exercise $$\PageIndex{61}$$: $$(\frac{g}{f})(3)$$ Exercise $$\PageIndex{62}$$: $$(\frac{g}{f})(-3)$$
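The pointwise operations in Exercises 51 - 62 can be carried out mechanically once $$f$$ and $$g$$ are stored as lookup tables. A minimal sketch (the dictionary encoding and helper name are mine): a sum, difference, or product exists whenever both functions are defined at the input, while a quotient additionally requires a nonzero denominator.

```python
# f and g from the point sets above, stored as dictionaries.
f = {-3: 4, -2: 2, -1: 0, 0: 1, 1: 3, 2: 4, 3: -1}
g = {-3: -2, -2: 0, -1: -4, 0: 0, 1: -3, 2: 1, 3: 2}

def quotient(top, bot, x):
    # (top/bot)(x) exists only when bot(x) is nonzero
    return top[x] / bot[x] if bot[x] != 0 else None

print(f[-3] + g[-3])       # Exercise 51: (f + g)(-3) = 4 + (-2) = 2
print(f[-1] * g[-1])       # Exercise 53: (fg)(-1) = 0 * (-4) = 0
print(quotient(f, g, -2))  # Exercise 57: (f/g)(-2) does not exist -> None
```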
https://www.physicsforums.com/threads/integrating-newtons-law.425135/
# Integrating Newton's Law

1. Aug 29, 2010

### bhsmith

**1. The problem statement, all variables and given/known data**

Starting from the modified Newton's Law $$\frac{dp_{\text{rel}}}{dt}\ =\ F$$ with a constant force $$F$$, and assuming that the particle starts with $$v=0$$ at time $$t=0$$, show that the velocity at time $$t$$ is given by $$V(t)\ =\ c\,\frac{Ft/mc}{1\ +\ Ft/mc}$$

**2. Relevant equations**

**3. The attempt at a solution**

I know that I can integrate both sides of the equation with respect to time and solve, but I'm stuck on how to start that off. Any help would be appreciated!

2. Aug 29, 2010

### rpf_rr

Integrate: you find $$p\ =\ Ft$$. Substitute $$p\ =\ mv/\sqrt{1-v^{2}/c^{2}}$$, do a little arithmetic, and you are finished. The solution quoted in the problem is wrong; it is only valid for $$v^{2}\ll c^{2}$$.

3. Aug 29, 2010

### bhsmith

I figured that one out too. But that equation for $$v(t)$$ is stated in the problem. I'm thinking it might be different because it is supposed to be a "modified" Newton's Law for relativity instead of the classical equation $$p_{\text{class}}\ =\ mv$$.

4. Aug 30, 2010

### rpf_rr

My result is correct for relativity (at least for special relativity, as far as I know); it is even reported in my textbook.
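The derivation rpf_rr outlines can be checked numerically: integrating $$dp/dt\ =\ F$$ from rest gives $$p\ =\ Ft$$, and substituting the relativistic momentum $$p\ =\ mv/\sqrt{1-v^{2}/c^{2}}$$ and solving for $$v$$ yields $$v(t)\ =\ \frac{Ft/m}{\sqrt{1\ +\ (Ft/mc)^{2}}}$$, which differs from the formula quoted in the problem statement by the square root. A quick sketch with arbitrary unit values of my own choosing:

```python
import math

# Arbitrary test values (any consistent units work for this check).
m, c, F, t = 2.0, 3.0, 5.0, 7.0

# Candidate solution of dp/dt = F with v(0) = 0:
u = F * t / (m * c)
v = (F * t / m) / math.sqrt(1 + u**2)

# The relativistic momentum at this v should reproduce p = F*t,
# and v should stay below c no matter how large t gets.
p = m * v / math.sqrt(1 - (v / c)**2)
print(abs(p - F * t) < 1e-9, v < c)  # True True
```

Algebraically the check is exact: with $$u = Ft/mc$$ one has $$1 - v^{2}/c^{2} = 1/(1+u^{2})$$, so the square roots cancel and $$p = Ft$$ term for term.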
Climate Dynamics
Journal Prestige (SJR): 2.445; Citation Impact (CiteScore): 4
Hybrid journal (it can contain Open Access articles)
ISSN (Print) 0930-7575; ISSN (Online) 1432-0894
Published by Springer-Verlag

• Evaluating cloud radiative effect from CMIP6 and two satellite datasets over the Tibetan Plateau based on CERES observation Abstract: Based on 12 years (March 2000–February 2012) of monthly data
from the Clouds and the Earth's Radiant Energy System Energy Balanced and Filled (CERES-EBAF) product, this study systematically evaluates the applicability of Advanced Very High Resolution Radiometer (AVHRR) and second Along-Track Scanning Radiometer and Advanced ATSR (AATSR) flux products at the top of the atmosphere (TOA), and the ability of atmosphere-only simulations from the Coupled Model Intercomparison Project Phase 6 (CMIP6/AMIP) models to reproduce the observed spatial–temporal patterns of TOA cloud radiative effect (CRE) over the Tibetan Plateau (TP). Results show that TOA radiative fluxes from AVHRR and AATSR can be used to analyze spatial/temporal characteristics over the TP region, especially those from AVHRR, but neither captures the observed CRE trend since 2000. In particular, when using AATSR TOA radiative fluxes under clear-sky conditions over the TP, the large shortwave (SW) flux bias (regional mean of about 30.48 W m−2) relative to CERES-EBAF must be taken seriously. The multimodel ensemble mean (MEM) sufficiently reproduces the temporal changes of CREs, particularly the shortwave CRE. Regarding the geographical pattern of the MEM CREs, the annual mean deviations of longwave CRE are very small, while obvious underestimations of shortwave CRE appear in the southeastern TP. Additionally, the spatial distribution of CREs is difficult to reproduce for many individual models due to surface albedo and temperature biases. Our results also demonstrate that the MEM still has evident difficulties in capturing realistic CRE trends over the TP due to poor simulation of surface and cloud properties (particularly cloud fraction). PubDate: 2021-10-15 • Cyclonic activity in the Mediterranean region from a high-resolution perspective using ECMWF ERA5 dataset Abstract: This study focuses on developing a new Cyclone Detection and Tracking Method (CDTM) to take advantage of the recent availability of the high-resolution ECMWF ERA5 reanalysis dataset.
The proposed algorithm is used to perform a climatological analysis of the cyclonic activity in the Mediterranean Region (MR) over a 40-year window (1979–2018). The tuning of the new CDTM was based on the comparison with currently available CDTMs and verified through careful subjective analysis to fully exploit the finer details of MR cyclone features. The application of the new CDTM to the ERA5 high-resolution dataset resulted in an increase of 40% in the annual number of cyclones, mainly associated with subsynoptic and baroclinically driven lows. The main cyclogenetic areas and seasonal cycle were properly identified within the MR context, including areas often underestimated, such as the Aegean Sea, and emerging ones with cyclogenetic potential, such as the coasts of Tunisia and Libya. The improved description of cyclone features defined three distinct periods of cyclonic activity in the MR with peculiar and persistent characteristics. In the first period (Apr–Jun), cyclones develop more frequently and present higher velocities and deepening rates. In the second (Jul–Sep), the cyclonic activity is governed by thermal lows spreading slowly over short tracks without reaching significant depths. In the last and longest season (Oct–Mar), cyclones become less frequent, but with the highest deepening rates and the lowest MSLP values, ranking this period as the most favourable to intense storms. PubDate: 2021-10-15 • The Indian summer monsoon and Indian Ocean Dipole connection in the IITM Earth System Model (IITM-ESM) Abstract: The Indian Ocean Dipole (IOD) is recognised as an important driver of interannual climate variability over different regions of the globe, including the regional monsoon systems. In particular, positive (negative) phases of the IOD tend to be associated with above-normal (below-normal) monsoon rainfall over the Indian subcontinent.
Realistic simulation of the IOD and Indian summer monsoon connection, however, remains a challenge in many state-of-the-art climate models. This study presents an analysis of the IOD and its links to the Indian monsoon based on the historical simulations from the IITM Earth System Model (IITM-ESM) and other models that participated in the 6th phase of the Coupled Model Intercomparison Project (CMIP6). Our findings indicate that the IITM-ESM not only provides a fairly realistic simulation of the ocean–atmosphere coupled interactions and the Bjerknes feedback processes associated with IOD events but also better captures the summer monsoon precipitation response over the Indian subcontinent during IOD events, as compared to several CMIP6 models. The physical mechanisms contributing to the improved simulation of the IOD and its monsoon connection in the IITM-ESM are evaluated in this study. PubDate: 2021-10-15 • Evaluation of non-stationarity in summer precipitation and the response of vegetation over the typical steppe in Inner Mongolia Abstract: The typical steppe in Inner Mongolia is an important component of the Eurasian steppes. It plays a dominant role in preventing desertification and protecting against sandstorms, but it is highly sensitive and vulnerable to climate change. Based on long-term observed precipitation data and remotely sensed Normalized Difference Vegetation Index (NDVI) images, the non-stationary behavior of summer precipitation and its linkage with vegetation change were investigated by incorporating time-varying and physically based explanatory covariates in non-stationary modeling. Results indicated that time-dependent models exhibited good performance in reproducing the temporal variations of eco-hydrological variables. The non-stationarity of summer precipitation was prominently visible for the majority of sites during the period from 1957 to 2017, with the mean behavior described by a linear or nonlinear time-varying pattern.
In general, the steppe has experienced a decreasing trend in summer precipitation, but whether the decline tends to persist, weaken, or strengthen depends on the spatial location of the site studied. Differences appeared in the changes of vegetation in summer from 1998 to 2017 in different sub-regions. Evidence of stationary evolution was found in most sub-regions in the middle part, together with a linear increase in the westernmost sub-regions and a non-linear decrease in the easternmost ones. Covariate analyses further highlighted the role of precipitation variabilities in the modeling of the NDVI-related vegetation dynamics over the steppe. The potential relations of summer precipitation to vegetation growth were characterized as both linear and non-linear positive forms. In particular, precipitation extremes could be responsible for the occurrences of exceptional cases in vegetation condition. The fluctuations in summer precipitation have crucial significance for future predictions of vegetation succession. Findings from this study provide additional insight into the effect of climate change on grassland ecosystem processes. PubDate: 2021-10-15 • Warm-season mesoscale convective systems over eastern China: convection-permitting climate model simulation and observation Abstract: Mesoscale convective systems (MCSs) are important warm-season precipitation systems in eastern China. However, our knowledge of their climatology and our capability to simulate them are still insufficient. This paper examines their characteristics over the 2008–2017 warm seasons using convection-permitting climate simulations (CPCSs) with a 3-km grid spacing that explicitly resolves MCSs, as well as a high-resolution gauge-satellite merged precipitation product. An object-based tracking algorithm is applied to identify MCSs.
Results indicate that MCS genesis and occurrence are closely related to the progression of the East Asian monsoon and are modulated by the underlying topography. On average, about 243 MCSs are observed each season and contribute 19% and 47% to total and extreme warm-season precipitation, respectively. The climatological attributes and variabilities are reasonably reproduced in the CPCS. The major model deficiencies are an excessive occurrence of small MCSs and too much MCS rainfall, consequently overestimating the precipitation contributions, although observational uncertainties may play a role too. Both the observed and simulated MCS precipitation feature a nocturnal or morning maximum and an eastward-delayed diurnal peak east of the Tibetan Plateau, in contrast to the dominant afternoon peak of non-MCS precipitation. The favorable comparison with observations demonstrates the capability of CPCSs in simulating MCSs in the Asian monsoon climate, and their usefulness in projecting the future changes of MCSs under global warming. The finding that non-MCS precipitation is responsible for the high-biased afternoon precipitation provides helpful guidance for further model improvement. PubDate: 2021-10-14 • Extreme swell wave energy and its directional characteristics in the Indian Ocean Abstract: Directional characteristics of wind-waves are an important quantity in wave prediction and in the design of offshore engineering facilities. Dissipative and non-dissipative processes contribute to the changes in direction and energy flux of waves, and the resulting sea-state. This study carried out an extreme climatological analysis of swell waves in terms of energy flux (PSW) and directional distribution of energy (DS) using 42 years (1979–2020) of ERA5 reanalysis data. A Generalized Extreme Value (GEV) distribution is used to produce the seasonal and monthly extremes.
During JJA (June–July–August), the monsoon swells that dominate over the Arabian Sea and the northern Bay of Bengal are exceptionally stable. The western coasts of Madagascar, Sumatra, and Myanmar are also stable with respect to swell energy due to wave refraction effects. The DS for extra-tropical swells is influenced by the presence of several island arcs in the TIO (Tropical Indian Ocean). A comprehensive analysis was performed to establish the dependence of PSW and DS on climate indices using multi-linear regression analysis. The study reveals that the influence of the Southern Annular Mode (SAM) on the swell energy distribution is significant in the extra-tropical regions and moderate over the North Indian Ocean. The directional swell energy flux distribution has a strong dependence on the Indian Ocean Dipole (IOD) and El Niño-Southern Oscillation (ENSO) in the TIO, in particular for the nearshore regions. Interestingly, the positive IOD phase is found to actively influence the DS along the east coast of India. Spectral characteristics of swells are critically influenced by island barriers in the TIO during their propagation; in addition, climate indices also play an important role in this modulation. PubDate: 2021-10-12 • Occurrence of heatwave in Korea by the displacement of South Asian high Abstract: The South Asian high (SAH) index was defined using the 200 hPa geopotential height for 1973–2019. Of the movements of the SAH center in the north–south, east–west, northwest-southeast, and southwest-northeast directions, the movements in the northwest-southeast direction showed the highest positive correlation with heatwave days (HWDs) in South Korea.
Thirteen years with the highest values in the northwestward shift of the SAH (positive SAH years) and 13 years with the highest values in the southeastward shift of the SAH (negative SAH years) were selected from a time series of SAH indices from which the linear trend was removed, and the differences between these two groups were analyzed. An analysis of vertical meridional circulation averaged along 120°–130° E showed that in the latitude zones containing Korea, anomalous downward flows with anomalous high pressures formed in the entire troposphere and coincided with positive anomalies of air temperature and specific humidity. An analysis of stream flows and geopotential heights showed that in the positive SAH years, anomalous anticyclones developed in Korea, the North Pacific, North America, Western Europe, and the Iranian Plateau. These anticyclones had the wavenumber-5 pattern and showed more distinct barotropic vertical structures at higher altitudes, which resembled the circumglobal teleconnection (CGT) pattern. The maintenance of the CGT depends on the interaction between the CGT circulation and the Indian summer monsoon (ISM), which has a major influence on the mid-latitude atmosphere. Strengthening of the ISM results in the formation of upper-level anomalous anticyclones in the northwestern Iranian Plateau and produces continuous downstream cells along the waveguide due to Rossby wave dispersion. When diabatic heating by Indian summer monsoon precipitation is strengthened, the SAH is strengthened to the northwest of India, and a positive CGT pattern is formed. As a result, anomalous anticyclones formed in all layers of the Korean troposphere, resulting in exacerbated heatwaves, tropical nights, and droughts in South Korea.
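The detrend-then-composite procedure described above (remove the linear trend from an index, pick the 13 most positive and 13 most negative years, difference the composites) can be sketched in a few lines. The index and heatwave-day series below are synthetic stand-ins, not real SAH or Korean station data:

```python
import numpy as np

# Sketch of the compositing procedure: detrend an index, select the 13
# highest and 13 lowest years, and difference the composites of a second
# variable. All values are synthetic, for illustration only.

rng = np.random.default_rng(0)
years = np.arange(1973, 2020)                    # 1973-2019, as in the study
sah_index = 0.02 * (years - years[0]) + rng.normal(size=years.size)
hwd = rng.normal(10, 3, size=years.size)         # hypothetical heatwave days

# Remove the linear trend before selecting extreme years.
trend = np.polyval(np.polyfit(years, sah_index, 1), years)
detrended = sah_index - trend

order = np.argsort(detrended)
neg_years = years[order[:13]]                    # 13 most negative (southeastward)
pos_years = years[order[-13:]]                   # 13 most positive (northwestward)

composite_diff = (hwd[np.isin(years, pos_years)].mean()
                  - hwd[np.isin(years, neg_years)].mean())
print(pos_years.size, neg_years.size, round(composite_diff, 2))
```

In the actual study, significance of such a composite difference would additionally be tested, e.g. against internal variability.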
PubDate: 2021-10-10 • Decadal-scale variations in extreme precipitation and implications for seasonal scale drought Abstract: This study examines the relationship between low decadal mean precipitation and monthly-scale wet and dry extremes over the global land area. We characterise how precipitation distributions change on decadal timescales, and how these changes are linked to seasonal scale drought. The relationship between decadal mean precipitation and extremes is assessed at the grid point level via correlations between decadal mean and extreme percentiles, and through an analysis of indices of seasonal scale drought. A novel metric is also used that determines how the first three statistical moments change monthly precipitation distributions during dry decades for several drought-prone regions. Changes to monthly-scale wet extremes are most significantly associated with low decadal mean precipitation for almost 80% of the globe. Monthly-scale dry extremes show significant, but generally weaker, relationships to low decadal-mean precipitation for 55% of the globe. Consistent with the strong link between decreasing wet extremes and decadal dryness, we find that dry decades are predominantly modulated by changes in positive skewness in monthly precipitation distributions, whilst shifts in the mean of these distributions play an important, but typically secondary, role. There is a negligible role for changes in variance. Lastly, we show that a decadal-scale decline in mean precipitation is rarely accompanied by an increase in the severity of seasonal-scale drought, while the impact on seasonal-scale drought frequency and duration varies depending on global location. Our results have implications for how we think about seasonal-scale drought in the context of decadal variability in precipitation.
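The moment-based view in the abstract above (comparing mean, variance, and skewness of monthly precipitation between dry decades and the rest) can be illustrated with a minimal sketch on synthetic gamma-distributed monthly totals; none of the numbers stand in for real data:

```python
import numpy as np
from scipy.stats import skew

# Sketch: compare the first three moments of monthly precipitation between
# the driest decade and all other decades. Monthly totals are synthetic
# (gamma-distributed), for illustration only.

rng = np.random.default_rng(1)
monthly = rng.gamma(shape=2.0, scale=40.0, size=(6, 120))  # 6 decades x 120 months

decadal_mean = monthly.mean(axis=1)
driest = int(np.argmin(decadal_mean))

for label, block in (("driest decade", monthly[driest]),
                     ("other decades", np.delete(monthly, driest, axis=0).ravel())):
    print(label, round(block.mean(), 1), round(block.var(), 1),
          round(float(skew(block)), 2))
```

A dry decade with reduced positive skewness (fewer very wet months) is exactly the signature the study highlights.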
PubDate: 2021-10-10 • Convection-permitting regional climate simulations over Tibetan Plateau: re-initialization versus spectral nudging Abstract: Two regional climate simulation experiments (spectral nudging and re-initialization) at convection-permitting scale are conducted using the WRF model driven by the fifth-generation global reanalysis (ERA5) from the European Centre for Medium-Range Weather Forecasts over the Tibetan Plateau (TP). The surface air temperature (T2m) and the precipitation in summer during 2016–2018 are evaluated against the in-situ station observations and the Global Satellite Mapping of Precipitation (GSMaP) dataset. The results show that both experiments as well as ERA5 can successfully capture the spatial distribution and the daily variation of T2m and precipitation, with reasonable cold biases for temperature and dry biases for precipitation when compared with the station observations. In addition, the diurnal cycle of precipitation is investigated, indicating that both experiments tend to simulate the afternoon precipitation peak too early and to postpone the nighttime precipitation peak. The precipitation bias is reduced by using the spectral nudging technique, especially at night and in the early morning. Both WRF experiments outperform ERA5 in reproducing the diurnal cycle of precipitation amount. Possible causes for the differences between the two experiments are also analyzed. In the re-initialization experiment, the daytime net shortwave radiation contributes substantially to the cold bias in Tmax, and the stronger vertically integrated moisture flux convergence leads to more precipitation compared with the spectral nudging experiment over the central and southeastern TP. These results can provide valuable guidance for further fine-scale simulation studies over the TP.
PubDate: 2021-10-09 • Evaluation of convective parameters derived from pressure level and native ERA5 data and different resolution WRF climate simulations over Central Europe Abstract: The mean climatological distribution of convective environmental parameters from the ERA5 reanalysis and WRF regional climate simulations is evaluated using radiosonde observations. The investigation area covers parts of Central and Eastern Europe. Severe weather proxies are calculated from daily 1200 UTC sounding measurements and collocated ERA5 and WRF pseudo-profiles in the 1985–2010 period. The pressure level and the native ERA5 reanalysis, and two WRF runs with grid spacings of 50 and 10 km are verified. ERA5 represents convective parameters remarkably well with correlation coefficients higher than 0.9 for multiple variables and mean errors close to zero for precipitable water and mid-tropospheric lapse rate. Monthly mean mixed-layer CAPE biases are reduced in the full hybrid-sigma ERA5 dataset by 20–30 J/kg compared to its pressure level version. The WRF model can reproduce the annual cycle of thunderstorm predictors but with considerably lower correlations and higher errors than ERA5. Surface elevation differences between the stations and the corresponding grid points in the 50-km WRF run lead to biases and false error compensations in the convective indices. The 10-km grid spacing is sufficient to avoid such discrepancies. The evaluation of convection-related parameters contributes to a better understanding of regional climate model behavior. For example, a strong suppression of convective activity might explain precipitation underestimation in summer. A decreasing correlation of WRF-derived wind shear away from the western domain boundaries indicates a deterioration of the large-scale circulation as the constraining effect of the driving reanalysis weakens. 
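The verification metrics quoted in the abstract above (Pearson correlation and mean error between sounding-derived and reanalysis convective parameters) reduce to a few lines. The mixed-layer CAPE values below are synthetic stand-ins, not real ERA5 or radiosonde data:

```python
import numpy as np

# Sketch of the verification metrics used above: correlation and mean error
# between sounding-derived and reanalysis convective parameters. Daily
# 1200 UTC mixed-layer CAPE values (J/kg) are synthetic, for illustration.

rng = np.random.default_rng(2)
cape_sonde = np.abs(rng.normal(400, 300, size=1000))         # "observed"
cape_rea = cape_sonde + rng.normal(-20, 80, size=1000)       # "reanalysis", small low bias

r = np.corrcoef(cape_sonde, cape_rea)[0, 1]
mean_error = (cape_rea - cape_sonde).mean()
print(round(float(r), 3), round(float(mean_error), 1))
```

High correlation with a non-zero mean error, as in this sketch, is precisely the pattern the abstract reports for ERA5-derived CAPE.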
PubDate: 2021-10-08 • Correction to: Evaluation of a new 12 km regional perturbed parameter ensemble over Europe PubDate: 2021-10-06 • How to determine the statistical significance of trends in seasonal records: application to Antarctic temperatures Abstract: We consider trends in the m seasonal subrecords of a record. To determine the statistical significance of the m trends, one usually determines the p value of each season either numerically or analytically and compares it with a significance level $\tilde{\alpha}$. We show in great detail for short- and long-term persistent records that this procedure, which is standard in climate science, is inadequate since it produces too many false positives (false discoveries). We specify, on the basis of the family-wise error rate and by adapting ideas from multiple testing correction approaches, how the procedure must be changed to obtain more suitable significance criteria for the m trends. Our analysis is valid for data with all kinds of persistence. Specifically for long-term persistent data, we derive simple analytical expressions for the quantities of interest, which allow one to easily determine the statistical significance of a trend in a seasonal record. As an application, we focus on data from 17 Antarctic stations. We show that only four trends in the seasonal temperature data are outside the bounds of natural variability, in marked contrast to earlier conclusions. PubDate: 2021-10-05 • Evaluation of the performance of the non-hydrostatic RegCM4 (RegCM4-NH) over Southeastern China Abstract: This study evaluates for the first time the performance of the latest version of the non-hydrostatic RegCM4 (RegCM4-NH) customized over two vast urban agglomerations in China (i.e., the Pearl River Delta, PRD, and the Yangtze River Delta, YRD). A one-way double nesting configuration is used, with a mother domain (20 km grid spacing) driven by ERA-Interim (0.75°) forcing and two nested domains (4 km grid spacing).
The analysis focuses on how the dynamical core (hydrostatic versus non-hydrostatic) employed in the driving mother domain simulation can affect the regional characteristics of temperature and precipitation patterns in the PRD and YRD regions simulated by the 4 km resolution nested RegCM4-NH. In addition, we assess the sensitivity of the 4 km model results to the use of a convective parameterization scheme (CPS), since the 4 km grid size can be considered as a grey-zone resolution at which deep convection is partially resolved and may still need to be parameterized. For mean temperature, a reasonably good performance is shown by all simulations, with the summer season mean bias being mostly less than ± 1 °C when averaged over the PRD and YRD. However, the simulated daily temperature distribution is excessively peaked around the median value, indicating a large probability concentrated on a small temperature range. Although the higher resolution slightly ameliorates this deficiency, the effect of the dynamical core and CPS tends to be marginal. Conversely, precipitation behaves quite differently across simulations. The forcing from the non-hydrostatic mother domain simulation helps to reduce a severe dry bias seen over the PRD due to a reduction in convection inhibition. Also, the use of the Emanuel CPS tends to intensify localized precipitation events over mountainous regions in connection with stronger ascending motions over topographical features. The higher resolution also improves the phase of the diurnal cycle of precipitation, both with and without the use of the CPS. In general, the performance of RegCM4-NH over the PRD and YRD is found to be the best when driven by a non-hydrostatic mother domain simulation and when turning on the Emanuel CPS. With the growing demand for high-resolution climate information, our evaluation study of RegCM4-NH will be a valuable reference to facilitate a wider use of this non-hydrostatic version of the model. 
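The family-wise error rate argument in the seasonal-trends abstract above can be sketched with the Šidák correction, assuming independent seasonal tests; that independence is an idealization, and the paper itself derives criteria adapted to persistent data:

```python
import math

# Sketch of the multiple-testing issue: testing m seasonal trends each at
# level alpha inflates the family-wise error rate, so the per-season level
# must be tightened. The Sidak rule below assumes independent tests (an
# idealization relative to the paper's treatment of persistent data).

def sidak_level(alpha_family, m):
    """Per-test level giving family-wise error rate alpha_family for m tests."""
    return 1.0 - (1.0 - alpha_family) ** (1.0 / m)

m = 4                                  # four seasons
alpha = 0.05
per_season = sidak_level(alpha, m)
print(round(per_season, 4))            # stricter than 0.05

# Probability of at least one false positive across the four seasons:
print(round(1 - (1 - 0.05) ** m, 3))         # uncorrected: inflated
print(round(1 - (1 - per_season) ** m, 3))   # corrected: 0.05 by construction
```

With serial correlation, the effective number of independent tests changes, which is why the paper derives persistence-aware criteria instead of stopping at this textbook correction.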
PubDate: 2021-10-05 • Interpretation of interannual variability of the zonal contrasting thermal conditions in the winter South China Sea Abstract: The distinct winter temperature difference between the eastern South China Sea (ESCS) and western South China Sea (WSCS) has a crucial impact on regional air–sea interactions. By utilizing satellite and reanalysis data, the zonal contrasting winter thermal conditions and their formation mechanisms are investigated. The second empirical orthogonal function (EOF) mode of winter sea surface temperature (SST) anomalies is responsible for this east–west contrasting temperature pattern (EWCTP), with warming (cooling) in the ESCS (WSCS) and cooling (warming) in the WSCS (ESCS) during the positive (negative) phase events. A mixed layer heat budget analysis reveals that the net heat flux plays a primary role in the WSCS. In the ESCS, the temperature variation is instead mainly dominated by the horizontal heat advection term. In the positive phase events, an anomalous cyclonic circulation promotes an eastern boundary current (EBC) anomaly, which enhances the northward heat transport and thus warms the ESCS. In contrast, an anomalous anticyclonic circulation pattern weakens the heat transport by the southward EBC anomaly and cools the ESCS in the negative phase events. The water exchange through the Mindoro Strait and the vertical entrainment term also contribute to the ESCS SST anomalies. Further analyses show that although many EWCTP events co-occur with central Pacific El Niño-Southern Oscillation (CP ENSO) events, they have a complex relationship. The EWCTP can appear without CP ENSO events, and some CP ENSO events do not lead to the EWCTP. This is because of the different temperature states in the WSCS and ESCS during October–December and the different contributions of net heat flux and ocean processes to the temperature changes during October–February.
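The EOF decomposition used in the study above can be sketched via the singular value decomposition of a (time × space) anomaly matrix: the right singular vectors are the EOF spatial patterns and the projections onto them are the principal-component time series. The field below is synthetic noise, not real SST data:

```python
import numpy as np

# Minimal EOF sketch: the EOFs of an anomaly field are the right singular
# vectors of the (time x space) anomaly matrix. Synthetic data only.

rng = np.random.default_rng(3)
ntime, nspace = 40, 60                      # e.g. 40 winters, 60 grid points
field = rng.normal(size=(ntime, nspace))

anom = field - field.mean(axis=0)           # remove the time mean per point
u, s, vt = np.linalg.svd(anom, full_matrices=False)

eof2 = vt[1]                                # second EOF spatial pattern
pc2 = anom @ eof2                           # its principal-component series
explained = s**2 / (s**2).sum()             # variance fraction per mode
print(round(float(explained[1]), 3))
```

In practice one would weight grid points by area (e.g. by the square root of the cosine of latitude) before the SVD; that step is omitted here for brevity.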
PubDate: 2021-10-04 • Influence of natural climate variability on extreme wave power over Indo-Pacific Ocean assessed using ERA5 Abstract: In recent decades, wave power (WP) from the ocean has been one of the cleanest renewable energy sources, and it is affected by oceanic warming. In the Indo-Pacific Ocean, WP is significantly influenced by natural climate variability, such as the El Niño-Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD), and the Pacific Decadal Oscillation (PDO). In this study, the impact of major climate variability modes on seasonal extreme WP is examined over the period 1979–2019 using ERA5 reanalysis data, and non-stationary generalized extreme value analysis is applied to estimate the climatic extremes. The independent ENSO influence on WP after removing the IOD impact (ENSO IOD) is evident over the northeast and central Pacific during December–February and March–May, respectively, and subsequently shifts towards the western Pacific in June–August (JJA) and September–November (SON). The ENSO PDO impact on WP exhibits a similar yet weaker intensity year-round compared to ENSO. Extreme WP responses due to the IOD ENSO include widespread decreases over the tropical and eastern Indian Ocean, with localized increases only over the South China and Philippine seas and the Bay of Bengal during JJA, and the Arabian Sea during SON. Lastly, for the PDO ENSO, the significant increases in WP are mostly confined to the Pacific, and most prominent in the North Pacific. Composite analysis of different phase combinations of the PDO (IOD) with El Niño (La Niña) reveals stronger (weaker) influences year-round. The response patterns in significant wave height, peak wave period, sea surface temperatures, and sea level pressure help to explain the seasonal variations in WP.
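Setting aside the non-stationary covariates, the core extreme-value step in the abstract above (fitting a GEV distribution to block maxima and reading off a return level) looks like the stationary sketch below. The "seasonal maximum wave power" sample is synthetic (Gumbel-distributed), not ERA5-derived:

```python
import numpy as np
from scipy.stats import genextreme

# Sketch of the (stationary) GEV step: fit block maxima and compute a
# return level. Seasonal maxima are synthetic, for illustration only.

rng = np.random.default_rng(4)
seasonal_max = rng.gumbel(loc=50.0, scale=10.0, size=42)    # 42 seasons, kW/m

shape, loc, scale = genextreme.fit(seasonal_max)
rl_20yr = genextreme.ppf(1 - 1 / 20, shape, loc, scale)     # 20-year return level
print(round(float(loc), 1), round(float(scale), 1), round(float(rl_20yr), 1))
```

The non-stationary version used in the study lets `loc` and `scale` depend on covariates such as climate indices; the fitting idea is the same.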
PubDate: 2021-10-04 • Evaluating the impact of climate change on extreme temperature and precipitation events over the Kashmir Himalaya Abstract: The frequency and severity of climatic extremes are expected to escalate in the future, primarily because of the increasing greenhouse gas concentrations in the atmosphere. This study aims to assess the impact of climate change on the extreme temperature and precipitation scenarios using climate indices in the Kashmir Himalaya. The analysis has been carried out for the twenty-first century under different representative concentration pathways (RCPs) through the Statistical Downscaling Model (SDSM) and ClimPACT2. The simulation reveals that the climate in the region will get progressively warmer in the future by increments of 0.36–1.48 °C and 0.65–1.07 °C in mean maximum and minimum temperatures respectively, during the 2080s (2071–2100) relative to 1980–2010 under RCP8.5. The annual precipitation is likely to decrease by a maximum of 2.09–6.61% (2080s) under RCP8.5. The seasonal distribution of precipitation is expected to alter significantly, with the winter, spring, and summer seasons marking reductions of 9%, 5.7%, and 1.7%, respectively, during the 2080s under RCP8.5. The results of extreme climate evaluation show significant increasing trends for warm temperature-based indices and decreasing trends for cold temperature-based indices. Precipitation indices on the other hand show weaker and spatially incoherent trends with a general tendency towards dry regimes. The projected scenarios of extreme climate indices may result in large-scale adverse impacts on the environment and ecological resource base of the Kashmir Himalaya. PubDate: 2021-10-03 • Quantifying the rarity of extreme multi-decadal trends: how unusual was the late twentieth century trend in the North Atlantic Oscillation? Abstract: Climate trends over multiple decades are important drivers of regional climate change that need to be considered for climate resilience.
Of particular importance are extreme trends that society may not be expecting and is not well adapted to. This study investigates approaches to assess the likelihood of maximum moving window trends in historical records of climate indices by making use of simulations from climate models and stochastic time series models with short- and long-range dependence. These approaches are applied to assess the unusualness of the large positive trend that occurred in the North Atlantic Oscillation (NAO) index between the 1960s and 1990s. By considering stochastic models, we show that the chance of extreme trends is determined by the variance of the trend process, which generally increases when there is more serial correlation in the index series. We find that the Coupled Model Intercomparison Project (CMIP5 + 6) historical simulations have very rarely (around 1 in 200 chance) simulated maximum trends greater than the observed maximum. Consistent with this, the NAO indices simulated by CMIP models were found to resemble white noise, with almost no serial correlation, in contrast to the observed NAO, which exhibits year-to-year correlation. Stochastic model best fits to the observed NAO suggest only a small chance (around 1 in 20) of maximum 31-year NAO trends as large as the maximum observed since 1860. This suggests that current climate models do not fully represent important aspects of the mechanism for low frequency variability of the NAO. PubDate: 2021-10-03 • Correction to: Dryline characteristics in North America’s historical and future climates PubDate: 2021-10-01 DOI: 10.1007/s00382-021-05862-1 • Possible semi-circumglobal teleconnection across Eurasia driven by deep convection over the Sahel Abstract: The Sahel region, located between the tropical rainforests of Africa and the Sahara Desert, has rainfall that varies widely from year to year, associated with extremely deep convection.
This deep convection, strongly heated by water vapor condensation, suggests the possibility of exerting a remote influence on mid- and high-latitude climate similar to the well-known influences of tropical oceanic convection on global climate. Here we investigate the possibility that deep convection over the Sahel initiates a semi-circumglobal teleconnection extending to eastern Eurasia. Statistical analysis and numerical experiments support the possible existence of this teleconnection at an interannual time scale. We propose that the anomalous heat source due to deep convection over the Sahel in the late monsoon season influences meandering of the mid-latitude jet stream over Europe through the combination of a Matsuno-Gill response and advection of absolute vorticity. This subtropical jet meander may in turn drive an eastward propagation of a Rossby wave across Eurasia as far as East Asia. Because deep convection over other subtropical land areas may exert a similar remote influence upon extratropical extreme weather, further studies of the influence of overland convection may provide us with an expanded comprehension of teleconnections. PubDate: 2021-10-01 DOI: 10.1007/s00382-021-05804-x • Impacts of orography on large-scale atmospheric circulation: application of a regional climate model Abstract: Orography has considerable impacts on the large-scale atmospheric circulation, emphasizing the necessity of adequate representation of the impacts of orography in numerical models. The regional climate model version 4 (RegCM4) is used to investigate the impacts of orography on the large-scale atmospheric circulation. Three numerical experiments in four different years for winter and summer have been conducted over a large geographical area, covering Eurasia, Africa and Oceania. 
These experiments include control simulations using the real orography, simulations with the removed orography of the whole domain and simulations with the removed orography of the whole domain except the Tibetan Plateau. In winter, the Tibetan Plateau prevents the development of the sea-level high pressure in South Asia and contributes to the intensification of the Siberian high through blocking cold air advection from Siberia toward India. The Tibetan Plateau is also responsible for the southward displacement of low-level easterly flows in the North Indian Ocean, such that elimination of this Plateau is associated with more zonal orientation and intensification of easterly winds, as well as an increase of moisture flux over India and the Arabian Sea. Descending motions associated with lee waves of the Western Ghat Mountains contribute to a decrease of precipitation over the Arabian Sea. In summer, the Tibetan Plateau reinforces the South Asian low-pressure system and pushes the South Asian monsoon to South Asia. Both the tropical easterly jet stream over the southern Tibetan Plateau and the subtropical westerly jet stream over the Tibetan Plateau are weakened when the whole orography is removed. Removal of the whole orography is associated with a considerable equatorward displacement of the intertropical convergence zone over South Asia. In austral winter, low-level subtropical anticyclones in Southern Africa and Australia are intensified when the whole orography is removed. PubDate: 2021-10-01 DOI: 10.1007/s00382-021-05790-0 JournalTOCs School of Mathematical and Computer Sciences Heriot-Watt University Edinburgh, EH14 4AS, UK Email: journaltocs@hw.ac.uk Tel: +00 44 (0)131 4513762
http://mathoverflow.net/revisions/41936/list
If all Tate modules (i.e., for all $\ell$) are isomorphic then the curves differ by a twist by a locally free rank $1$ module over the endomorphism ring of one of them. This is true for all abelian varieties, but for elliptic curves there are only two possibilities for the endomorphism ring: either $\mathbb Z$ or an order in an imaginary quadratic field. In the first case there is only one rank $1$ module, so the curves are isomorphic. In the case of an order, the number of twists is a class number.

Addendum: Concretely, we have that $\mathrm{Hom}(E_1,E_2)$ is a rank $1$ projective module over $\mathrm{End}(E_1)$ (under the assumption that the Tate modules are isomorphic), and then $E_2$ is isomorphic to $\mathrm{Hom}(E_1,E_2)\bigotimes_{\mathrm{End}(E_1)}E_1$. (The tensor product is defined by presenting $\mathrm{Hom}(E_1,E_2)$ as the kernel of an idempotent $n\times n$-matrix with entries in $\mathrm{End}(E_1)$; then $E_2$ is the kernel of the same matrix acting on $E_1^n$.) Hence, given $E_1$, the curve $E_2$ is determined by $\mathrm{Hom}(E_1,E_2)$, and every rank $1$ projective module appears in this way.

Addendum 1: Note that I was talking here about the $\mathbb Z_\ell$ (and not $\mathbb Q_\ell$) Tate modules. You can divide up the classification of elliptic curves into two stages: first you see whether the $V_\ell$ are isomorphic (and there it is enough to look at a single $\ell$); if they are, then the curves are isogenous. The second step is to look within an isogeny class and try to classify those curves. The approach I am describing here goes directly to looking at the $T_\ell$ for all $\ell$. If they are non-isomorphic (for even a single $\ell$) then the curves are not isomorphic, and if they are isomorphic for all $\ell$ the curves still may or may not be isomorphic; the difference between them is given by a rank $1$ locally free module over the endomorphism ring. In any case they are certainly isogenous. This can be seen a priori, since if all $T_\ell$ are isomorphic then so are all the $V_\ell$, but also a posteriori, essentially because a rank $1$ locally free module becomes free of rank $1$ when tensored with $\mathbb Q$. Of course the a posteriori argument is in some sense cheating, because the way you show that the curves differ by a twist by a rank $1$ locally free module is to use the precise form of the Tate conjecture: $$\mathrm{Hom}(E_1,E_2)\bigotimes \mathbb Z_\ell = \mathrm{Hom}_{\mathcal G}(T_\ell(E_1),T_\ell(E_2))$$ which for a single $\ell$ gives the isogeny. Note also that the situation is similar (not by chance) to the case of CM curves. If we look at CM elliptic curves with a fixed endomorphism ring, then algebraically they cannot be put into bijection with the elements of the class group of the endomorphism ring (though they can analytically); you have to fix one elliptic curve to get a bijection.
http://wikinotes.ca/MATH_223/past-exam/fall-2007/final
# Fall 2007 Final

Student-created solutions for the Fall 2007 final exam for MATH 223. You can find the original exam through the McGill library, or on docuum if anyone has uploaded it there in the meantime, but all the questions will be stated on this page.

## 1 Question 1

Find a basis for the row, column and null space of the following matrix over the complex numbers: $$\begin{pmatrix} 1 & 1-2i & 1+i \\ i & 2+i & -1+i \\ 2-i & -5i & 4+i \\ 3 & 3-6i & 4+3i \end{pmatrix}$$

### 1.1 Solution

Row-reduce the matrix: $$\begin{pmatrix} 1 & 1-2i & 1+i \\ i & 2+i & -1+i \\ 2-i & -5i & 4+i \\ 3 & 3-6i & 4+3i \end{pmatrix} \mapsto \begin{pmatrix} 1 & 1-2i & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$ Row space (take the independent rows in the RREF matrix): $$\{\begin{pmatrix} 1 & 1-2i & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1 \end{pmatrix}\}$$ Column space (take the columns from the original matrix corresponding to the independent columns in the RREF matrix): $$\left \{ \begin{pmatrix} 1 \\ i \\ 2-i \\ 3 \end{pmatrix}, \begin{pmatrix} 1+i \\ -1+i \\ 4+i \\ 4+3i \end{pmatrix} \right \}$$ Null space (solve the homogeneous system): $$\left \{ \begin{pmatrix} -1+2i \\ 1 \\ 0 \end{pmatrix} \right \}$$

### 1.2 Accuracy and discussion

The row-reduction was checked using Wolfram. No solutions that I could find for this exam, so it should be accurate but there are no guarantees. - @dellsystem (19:21, 18 April 2011)

## 2 Question 2

Let V be the real vector space of $3 \times 3$ matrices with real entries. Identify which of the following subsets of V are subspaces of V. Justify your answers. (a) $\{X \in V | tr(X) = 0 \}$ (recall that tr(X) is the trace of X, i.e.
the sum of the diagonal entries of X) (b) $\{X \in V | X \begin{pmatrix}1 \\ 2 \\ 3 \end{pmatrix} = X^T \begin{pmatrix} 1\\ 2\\ 3 \end{pmatrix} \}$ (c) $\{ X \in V | det(X) = 0 \}$

### 2.1 Solution

(a) This is a subspace - it is closed under scalar multiplication and vector addition, and contains the "zero vector" (the 3 by 3 matrix whose entries are all 0). Closure under scalar multiplication: Let A be any matrix $\in V$ in this subset and let $\alpha$ be a scalar $\in \mathbb{R}$. Let the elements along the main diagonal of A be $a_1, a_2, a_3$. As the trace is 0, we know that $tr(A) = a_1 + a_2+ a_3 = 0$. However, $tr(\alpha A) = \alpha a_1 + \alpha a_2 + \alpha a_3 = \alpha (a_1 + a_2 + a_3) = \alpha \times 0 = 0$ and so $\alpha A$ also has a trace of 0, and thus is part of this subset. So the subset is closed under scalar multiplication. Closure under vector addition: Let A and B be any two matrices $\in V$ in this subset. Let $a_1, a_2, a_3$ and $b_1, b_2, b_3$ be the elements along the diagonal of A and B respectively. Since the trace of both matrices is 0, we know that $a_1 + a_2 + a_3 = b_1+b_2+b_3 = 0$. The sum of A and B would have, along its diagonal, the elements $a_1+b_1, a_2 + b_2, a_3+b_3$, so the trace would be $(a_1 + b_1) + (a_2+b_2) +(a_3+b_3) = (a_1+a_2+a_3) + (b_1+b_2+b_3) = 0$ (by the commutativity of addition of real numbers) and so the sum of any two matrices in the subset would also have a trace of 0 and thus would also be in the subset. So the subset is closed under vector addition. Contains the zero vector: The "zero vector" clearly has a trace of 0. So this subset is a subspace. (b) This is probably a subspace. It turns out X doesn't have to be symmetric as one might suspect upon first glance, but that doesn't matter. Clearly the zero vector is in this subspace, and it is clearly closed under scalar multiplication.
It turns out to be closed under vector addition as well: If we let A, B be matrices in this subspace, then $(A+B)\vec v = A\vec v + B \vec v = A^T \vec v + B^T \vec v = (A^T + B^T) \vec v = (A+B)^T \vec v$ so their sum is in the subspace as well. So it is a subspace (unless I made a mistake in my reasoning). (c) This is definitely not a subspace - it's not closed under vector addition. For example: $$A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \quad B = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad A + B = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$ A and B both clearly have a determinant of 0, and are thus in this subset, but their sum (A+B) has a non-zero determinant and is thus not in this subset.

### 2.2 Accuracy and discussion

Reasonably certain of (a) and (c). Not sure about (b) - it looks like it's not limited to symmetric matrices (possibly), but what bearing does that have on whether or not it's a subspace? To be continued. - @dellsystem (20:14, 18 April 2011) Finished (b) ... should be right? - @dellsystem (18:20, 19 April 2011)

## 3 Question 3

(a) Find an invertible matrix P such that $P^{-1}AP$ is diagonal, where $$A = \begin{pmatrix} 5 & 0 & -6 \\ 0 & 1 & 0 \\ 2 & 0 & -3 \end{pmatrix}$$ (b) Find (explicitly) $A^{10}$ where A is from part (a) of this problem. Note that $3^{10} = 59049$.

### 3.1 Solution

(a) We just need to diagonalise this matrix. First let's find the eigenvalues from the characteristic polynomial, by expanding along the second row: $$\det(\lambda I - A) = \det \begin{pmatrix} \lambda - 5 & 0 & 6 \\ 0 & \lambda - 1 & 0 \\ -2 & 0 & \lambda + 3 \end{pmatrix}= (\lambda-1)(\lambda+3)(\lambda-5) + 12(\lambda-1) = (\lambda-3)(\lambda+1)(\lambda-1)$$ We get eigenvalues of $\lambda_1 = 3, \, \lambda_2 = -1, \, \lambda_3 = 1$.
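(A quick numerical sanity check of these eigenvalues — not part of the original solution, in the same spirit as the Wolfram checks elsewhere on this page:)

```python
import numpy as np

A = np.array([[5, 0, -6],
              [0, 1, 0],
              [2, 0, -3]], dtype=float)

# The characteristic polynomial factors as (λ-3)(λ+1)(λ-1),
# so the eigenvalues should come out as -1, 1 and 3.
eigvals = np.sort(np.linalg.eigvals(A).real)
print(eigvals)  # expect -1, 1, 3
```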
Let's find the associated eigenvectors: \begin{align} & \lambda_1: \begin{pmatrix} 2 & 0 & -6 \\ 0 & -2 & 0 \\ 2 & 0 & -6 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & -3 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \therefore \vec v_1 = \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix} \\ & \lambda_2 : \begin{pmatrix} 6 & 0 & -6 \\ 0 & 2 & 0 \\ 2 & 0 & -2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \therefore \vec v_2 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \\ & \lambda_3 : \begin{pmatrix} 4 & 0 & -6 \\ 0 & 0 & 0 \\ 2 & 0 & -4 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \quad \therefore \vec v_3 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \end{align} The matrix P is thus: $$\begin{pmatrix} \vec v_1 & \vec v_2 & \vec v_3 \end{pmatrix} = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$ (b) We first need to solve for the inverse of P. We can do it through row-reducing an augmented matrix, with the identity matrix on the right: $$\begin{pmatrix} 3 & 1 & 0 & | & 1 & 0 & 0 \\ 0 & 0 & 1 & | & 0 & 1 & 0 \\ 1 & 1 & 0 & | & 0 & 0 & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 & | & 0.5 & 0 & -0.5 \\ 1 & 1 & 0 & | & 0 & 0 & 1 \\ 0 & 0 & 1 & | & 0 & 1 & 0 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 & | & 0.5 & 0 & -0.5 \\ 0 & 1 & 0 & | & -0.5 & 0 & 1.5 \\ 0 & 0 & 1 & | & 0 & 1 & 0 \end{pmatrix}$$ $$\therefore P^{-1} = \begin{pmatrix} 0.5 & 0 & -0.5 \\ -0.5 & 0 & 1.5 \\ 0 & 1 & 0 \end{pmatrix}$$ As $P^{-1}AP = D$, where D is the diagonal matrix whose diagonal elements are the eigenvalues of A (in order), we can rearrange the function around a bit: \begin{align} PP^{-1}APP^{-1} = A = PDP^{-1} \quad \therefore A^{10} & = PD^{10}P^{-1} = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} 3 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}^{10} \begin{pmatrix} 0.5 & 0 & -0.5 \\ -0.5 & 0 & 1.5 \\ 0 & 1 & 0 \end{pmatrix} \\ & = 
\begin{pmatrix} 3 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} 3^{10} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0.5 & 0 & -0.5 \\ -0.5 & 0 & 1.5 \\ 0 & 1 & 0 \end{pmatrix} \\ & = \begin{pmatrix} 3^{11} & 1 & 0 \\ 0 & 0 & 1 \\ 3^{10} & 1 & 0 \end{pmatrix} \begin{pmatrix} 0.5 & 0 & -0.5 \\ -0.5 & 0 & 1.5 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} \frac{3^{11} - 1}{2} & 0 & -\frac{3^{11}-3}{2} \\ 0 & 1 & 0 \\ \frac{3^{10}-1}{2} & 0 & -\frac{3^{10}-3}{2} \end{pmatrix} \\ & = \begin{pmatrix} 88573 & 0 & -88572 \\ 0 & 1 & 0 \\ 29524 & 0 & -29523 \end{pmatrix} \end{align}

### 3.2 Accuracy and discussion

Confirmed the value of $A^{10}$ using Wolfram, so the method and numbers should be correct, barring typos etc - @dellsystem (20:55, 18 April 2011)

## 4 Question 4

Let $P_3(t)$ be the real vector space of polynomials of degree at most 3, and let $V$ be the subspace of $P_3(t)$ consisting of those polynomials $p(t)$ such that $p(0) = p(1)$. Define the function $L : V \to V$ by $$L(p(t)) = t(t-1)p''(t)$$ where $p''(t)$ denotes the second derivative of $p(t)$ with respect to $t$. (a) Show that $L$ is a linear operator on $V$. (b) Find the matrix $[L]_B$, where $B$ is the basis of V given by $$B = \{ 1, t^2-t, t^3-t^2 \}$$ (c) Find bases for $\ker(L)$ and $\text{im}(L)$. (d) Find a basis $B'$ of $V$ such that $[L]_{B'} = D$ is diagonal, and find $D$.

### 4.1 Solution

(a) First, we show that L respects vector addition and scalar multiplication. Let $p_1(t), p_2(t)$ be polynomials in the vector space. Then, $L((p_1 + p_2)(t)) = t(t-1)(p_1+p_2)''(t) = t(t-1)p_1''(t) + t(t-1)p_2''(t) = L(p_1(t)) + L(p_2(t))$ by linearity of differentiation and distributivity. Scalar multiplication: Let $\alpha \in \mathbb{R}$.
Then, $L(\alpha p(t)) = t(t-1)(\alpha p)''(t) = \alpha (t(t-1)p''(t)) = \alpha L(p(t))$, again by linearity of differentiation. (b) Let's apply the transformation $L$ to each thing in the basis: \begin{align}L(1) & = t(t-1)0 = 0 = 0(1) + 0(t^2-t) + 0(t^3-t^2) \mapsto \begin{pmatrix} 0 & 0 & 0 \end{pmatrix}^T \\ L(t^2-t) & = t(t-1)(2) = 2t^2-2t = 0(1) + 2(t^2-t) + 0(t^3-t^2) \mapsto \begin{pmatrix} 0 & 2 & 0 \end{pmatrix}^T \\ L(t^3-t^2) & = t(t-1)(6t-2) = (t^2-t)(6t-2) = 6t^3 -8t^2 + 2t = 0(1) -2(t^2-t) +6(t^3-t^2) \mapsto \begin{pmatrix} 0 & -2 & 6 \end{pmatrix}^T \\ \therefore [L]_B & = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & -2 \\ 0 & 0 & 6 \end{pmatrix}\end{align} (c) Let's first row-reduce the matrix, then find the null space and column space: $$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & -2 \\ 0 & 0 & 6 \end{pmatrix} \mapsto \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$ Null space $\left \{ \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}^T \right \}$ which corresponds to a kernel of $\{ 1 \}$. Column space $\left \{ \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ -2 \\ 6 \end{pmatrix} \right \}$ which corresponds to an image space of $\{ 2(t^2-t), -2(t^2-t) + 6(t^3-t^2) \}$ which is incidentally equivalent to $\{ t^2-t, t^3-t^2 \}$. (d) Basis: $\{ a(t), b(t), c(t) \}$ such that $$L(a(t)) = \alpha a(t) \quad L(b(t)) = \beta b(t) \quad L(c(t)) = \gamma c(t)$$ So the linear operator maps it to a scalar multiple of itself. Based on how two of the vectors in B behave, let's try this: $$a(t) = 1, b(t) = t^2-t, c(t) = t$$ Then: \begin{align}L(a(t)) & = L(1) \mapsto \begin{pmatrix} 0 & 0 & 0 \end{pmatrix}^T \\ L(b(t)) & = L(t^2-t) \mapsto \begin{pmatrix} 0 & 2 & 0 \end{pmatrix}^T \\ L(c(t)) & = L(t) \mapsto \begin{pmatrix} 0 & 0 & 0 \end{pmatrix}^T \end{align} So the basis is $\{ 1, t^2-t, t \}$ and $D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix}$

### 4.2 Accuracy and discussion

Needs someone to check over the calculations and method.
- @dellsystem (21:28, 18 April 2011) Part (d) is wrong ... - @dellsystem (16:57, 20 April 2011) Why is it wrong??? - @dellsystem (14:01, 16 December 2012) (Answer: $t \notin V$, since $p(t) = t$ has $p(0) = 0 \neq 1 = p(1)$, so $\{1, t^2-t, t\}$ is not even a subset of $V$. Taking instead the eigenvector $2t^3 - 3t^2 + t$ of $[L]_B$ for eigenvalue 6 — which does satisfy $p(0) = p(1) = 0$ — gives $B' = \{1, t^2-t, 2t^3-3t^2+t\}$ and $D = \mathrm{diag}(0, 2, 6)$.)

## 5 Question 5

Suppose that $A$ is an invertible matrix and that $\lambda$ is an eigenvalue of A. Show that $\lambda^{-1}$ is an eigenvalue of $A^{-1}$.

### 5.1 Solution

An eigenvalue of A satisfies the equation $A \vec v = \lambda \vec v$ where v is a non-zero vector. We know that $A^{-1}A = AA^{-1} = I$. If we multiply the first equation by A inverse on the left, we get $A^{-1} A \vec v = A^{-1} \lambda \vec v \mapsto \vec v = \lambda A^{-1} \vec v$ by associativity of scalar multiplication (as the eigenvalue is a scalar). Now, we know that all of the eigenvalues of A are non-zero, because A is invertible - if A had zero as an eigenvalue, then $\det(A - 0 \cdot I) = \det(A) = 0$, which means that A is singular and thus not invertible. So as A is invertible, none of its eigenvalues are 0, and so we can divide by the eigenvalue, resulting in $\frac{\vec v}{\lambda} = \frac{\lambda}{\lambda} A^{-1} \vec v$ which can also be written as $A^{-1}\vec v = \lambda^{-1} \vec v$ thus proving that $\lambda^{-1}$ is an eigenvalue of A inverse (as it satisfies the equation for an eigenvalue etc).

### 5.2 Accuracy and discussion

Not sure if this is a sufficient proof, but I can't think of anything else - @dellsystem (21:35, 18 April 2011) Added the fact that you can divide by the eigenvalue because it can't be equal to 0, as A is invertible. - @dellsystem (22:07, 18 April 2011) Seems legit to me - @tahnok (00:45, 21 April 2011)

## 6 Question 6

Suppose the matrices $A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & j \end{pmatrix}$ and $B = \begin{pmatrix} a & b & c \\ d* & e* & f* \\ g & h & j \end{pmatrix}$ have complex entries, $\det(A) = 1 + i$ and $\det(B) = 3-2i$.
Find the determinant of $$\begin{pmatrix} (1+2i)a & 2id + (1-i)d* & g + (-6+3i)a \\ (1+2i)b & 2ie + (1-i)e* & h+(-6+3i)b \\ (1+2i)c & 2if+(1-i)f* & j+(-6+3i)c \end{pmatrix}$$

### 6.1 Solution

Let $C = \begin{pmatrix} a & b & c \\ (1-i)d* & (1-i)e* & (1-i)f* \\ g & h & j \end{pmatrix}$. Since C resulted from multiplying one row of B by (1-i), we can find its determinant easily: $\det(C) = (1-i)\det(B) = (1-i)(3-2i) = 3 - 2i - 3i -2 = 1 -5i$. Let $D = \begin{pmatrix} a & b & c \\ (2i)d & (2i)e & (2i)f \\ g & h & j \end{pmatrix}$. Since D resulted from multiplying one row of A by (2i), we can find its determinant easily as well: $\det(D) = 2i \det(A) = 2i(1+i) = -2 + 2i$. Let $E = \begin{pmatrix} a & b & c \\ 2id + (1-i)d* & 2ie + (1-i)e* & 2if + (1-i)f* \\ g & h & j \end{pmatrix}$. The first and third rows are the same as those of C and D, while the second row is the sum of the corresponding rows in C and D. Since C and D differ by only one row, and that row in E represents the sum of those two rows, we have that $\det(E) = \det(C) + \det(D)$ (this is the multilinearity of the determinant - it is additive in any single row). So $\det(E) = (1-5i) + (-2+2i) = -1 -3i$. Now let's perform a simple row operation on E - add a scalar multiple of the first row to the third row. This doesn't change the determinant, so we have $F = \begin{pmatrix} a & b & c \\ 2id + (1-i)d* & 2ie + (1-i)e* & 2if + (1-i)f* \\ g +(-6+3i)a & h +(-6+3i)b & j+(-6+3i)c \end{pmatrix}$ and $\det(F) = \det(E) = -1-3i$. Now let's take the transpose of F. Since transposing doesn't change the determinant either, we have that $G = F^T = \begin{pmatrix} a & 2id + (1-i)d* & g +(-6+3i)a \\ b & 2ie + (1-i)e* & h +(-6+3i)b \\ c & 2if + (1-i)f* & j+(-6+3i)c \end{pmatrix}$ and $\det(G) = \det(F) = -1-3i$. This is starting to look a lot like the matrix we want to find the determinant of.
If we let $H = \begin{pmatrix} (1+2i)a & 2id + (1-i)d* & g +(-6+3i)a \\ (1+2i)b & 2ie + (1-i)e* & h +(-6+3i)b \\ (1+2i)c & 2if + (1-i)f* & j+(-6+3i)c \end{pmatrix}$, where H is derived from multiplying the first column of G by the scalar $(1+2i)$, then we have that $\det(H) = (1+2i)\det(G) = (1+2i)(-1-3i) = -1 -3i -2i +6 = 5 - 5i$. Since H is just the matrix we're trying to find the determinant of, the determinant is just $5-5i$.

### 6.2 Accuracy and discussion

Method should be correct (see Question 7 from the winter 2010 final, for example, which we were given solutions for), numbers could use some checking. The formatting also sucks, go ahead and fix it if you're so inclined. - @dellsystem (22:01, 18 April 2011) Formatting is slightly better now - @dellsystem (14:00, 16 December 2012)

## 7 Question 7

Let V be the real vector space of continuous real-valued functions on the interval $[-1, 1]$, and for $f, g \in V$ let $$\langle f, g \rangle = \int_{-1}^1 x^4 f(x)g(x)\,dx.$$ (a) Verify that this defines an inner product on V. (b) Show that, for any $f \in V$, $$\left ( \int_{-1}^1 x^5 f(x) \,dx \right )^2 \le \frac{2}{7} \int_{-1}^1 x^4 [f(x)]^2 \,dx.$$ For which f does equality hold?

### 7.1 Solution

(a) This part of the question is actually identical to question 3 from the fall 2009 final, so I'm just going to copy and paste. To verify that this defines an inner product, we have to show that it respects linearity in the first argument, (conjugate) symmetry, and positive definiteness. Linearity in the first argument. Preserves vector addition: Let $f_1, f_2 \in V$. Then: $$\langle f_1 + f_2, g \rangle = \int_{-1}^1 x^4 (f_1+f_2)(x)g(x)\,dx = \int_{-1}^1 x^4f_1(x)g(x)\,dx + \int_{-1}^1 x^4 f_2(x)g(x)\,dx = \langle f_1,g \rangle + \langle f_2, g \rangle$$ Preserves scalar multiplication: Let $\alpha \in F$ (a scalar, in the field).
Then: $$\langle \alpha f, g \rangle = \int_{-1}^1 x^4 (\alpha f(x))g(x)\,dx = \alpha \int_{-1}^1 x^4 f(x)g(x)\,dx = \alpha \langle f, g \rangle$$ (Conjugate) symmetry: $\langle g, f \rangle = \int_{-1}^1 x^4 g(x)f(x)\,dx = \int_{-1}^1 x^4 f(x)g(x) \,dx = \langle f, g \rangle$ as multiplication here is commutative. Positive definiteness: For any $f \in V, \, f \neq 0$: $$\langle f, f \rangle = \int_{-1}^1 x^4 [f(x)]^2 \,dx$$ Since $x^4$ and $[f(x)]^2$ are both always non-negative, the integrand is always equal to or greater than 0. The integral is in fact positive: since f is continuous and not identically zero, the integrand is strictly positive on some subinterval and non-negative everywhere else. So we have positive definiteness. (b) Differs slightly from question 3 of fall 2009, but, same template: As this defines an inner product, we can of course use the Cauchy-Schwarz inequality. Therefore: $$| \langle f, x \rangle |^2 = \left ( \int_{-1}^1 x^5f(x)\,dx \right )^2 \le \langle f, f \rangle \langle x, x \rangle$$ $$\therefore \left ( \int_{-1}^1 x^5f(x)\,dx \right )^2 \le \int_{-1}^1 x^4 [f(x)]^2 \,dx \int_{-1}^1 x^6 \,dx$$ Evaluate the integral: $\int_{-1}^1 x^6 \,dx = \left [ \frac{x^7}{7} \right ]_{-1}^{1} = \frac{1}{7} - \frac{-1}{7} = \frac{2}{7}$ which is exactly what we want omg!!!1 So by the Cauchy-Schwarz (don't misspell this) inequality we have $\left ( \int_{-1}^1 x^5f(x)\,dx \right )^2 \le \frac{2}{7} \int_{-1}^1 x^4[f(x)]^2 \,dx$ QED. By Cauchy-Schwarz, we only have equality when $f(x) = x$ or a constant multiple thereof. So there you go.

### 7.2 Accuracy and discussion

Should be right, unless I made a mistake in copying and pasting or typo'ed or something. - @dellsystem (22:16, 18 April 2011)

## 8 Question 8

Let W be the subspace of $\mathbb{R}^4$ spanned by $\begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 4 \\ 3 \\ 4 \\ 3 \end{pmatrix}$. (a) Find an orthonormal basis for each of $W$ and $W^{\perp}$.
(b) Find the orthogonal projections $Proj_W(v)$ and $Proj_{W^{\perp}}(v)$, where

$$v = \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}.$$

### 8.1 Solution

(a) Let $\vec v_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}$, $\vec v_2 = \begin{pmatrix} 4 \\ 3 \\ 4 \\ 3 \end{pmatrix}$. We set $\vec w_1 = \vec v_1,\, \vec w_2 = \vec v_2 - Proj_{w_1} v_2 =\vec v_2 - \frac{\langle v_2, w_1 \rangle}{\langle w_1, w_1 \rangle}\vec w_1$.

$$\langle v_2, w_1 \rangle = 1(4) + 1(4) = 8, \quad \langle w_1, w_1 \rangle = 2$$

$$\therefore \vec w_2 = \vec v_2 - \frac{8}{2} \vec w_1 = \begin{pmatrix} 4 \\ 3 \\ 4 \\ 3 \end{pmatrix} - \begin{pmatrix} 4 \\ 0 \\ 4 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 3 \\ 0 \\ 3 \end{pmatrix}$$

This is orthogonal to $\vec w_1$. If we normalise both vectors, we get the following orthonormal basis for W:

$$\left \{ \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \frac{1}{\sqrt{18}} \begin{pmatrix} 0 \\ 3 \\ 0 \\ 3 \end{pmatrix} \right \}$$

To find an orthonormal basis for $W^{\perp}$, we solve the system Ax = 0, where $A = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 4 & 3 & 4 & 3 \end{pmatrix}$, which row-reduces to $\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix}$. The solution set is spanned by $\begin{pmatrix} -1 \\ 0 \\ 1 \\ 0\end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix}$. This basis is already orthogonal, but we need to normalise it - this gives us the following orthonormal basis for $W^{\perp}$:

$$\left \{ \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix} \right \}$$

(b) Let's use an orthogonal basis for W obtained by rescaling the one from part (a): $\{ \vec u_1, \vec u_2 \} = \left \{ \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix} \right \}$. Then $Proj_{W}\vec v = \frac{\langle v, u_1 \rangle}{\langle u_1, u_1 \rangle} \vec u_1 + \frac{\langle v, u_2 \rangle}{\langle u_2, u_2 \rangle} \vec u_2$.
$$\langle v, u_1 \rangle = 4,\, \langle u_1, u_1 \rangle = 2, \, \langle v, u_2 \rangle = 6 ,\, \langle u_2, u_2 \rangle = 2$$

$$\therefore Proj_{W}\vec v = \frac{4}{2} \vec u_1 + \frac{6}{2} \vec u_2 = 2 \vec u_1 + 3 \vec u_2 = \begin{pmatrix} 2 \\ 0 \\ 2 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 3 \\ 0 \\ 3 \end{pmatrix} = \begin{pmatrix}2 \\ 3 \\ 2 \\ 3 \end{pmatrix}$$

$$Proj_{W^{\perp}} \vec v = \vec v - Proj_{W}\vec v = \begin{pmatrix} -1 \\ -1 \\ 1 \\ 1 \end{pmatrix}$$

### 8.2 Accuracy and discussion

Need to learn this shit first, will come back to it - @dellsystem (22:19, 18 April 2011)

Learned it - definitely needs someone to look over it though - @dellsystem (19:23, 19 April 2011)

Just looked over it, but it's been a year and a half so I have no idea what's going on. Cleaned up the formatting though. Past me is a lot smarter than present me. - @dellsystem (13:59, 16 December 2012)

## 9 Question 9

Find a unitary matrix U such that $\bar U^{T} H U$ is diagonal, where H is the following Hermitian matrix:

$$H = \begin{pmatrix} -3 & i & 1 \\ -i & -3 & -i \\ 1 & i & -3 \end{pmatrix}.$$

[Hint: -4 is an eigenvalue of H.]

### 9.1 Solution

We assume that we're being told one of the eigenvalues because we don't need to find out the other(s). So let's first find the eigenvectors associated with the eigenvalue we're given:

$H + 4I = \begin{pmatrix} 1 & i & 1 \\ -i & 1 & -i \\ 1 & i & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & i & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ (multiply the second row by i, then subtract the first row from the other two)

This gives us the following eigenvectors: $\begin{pmatrix}-i \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$

Since this eigenvalue has a geometric multiplicity of two, it also has an algebraic multiplicity of two (H is Hermitian, hence diagonalisable, so the two multiplicities agree). Therefore there is only one other distinct eigenvalue, and we can find it from the trace of this matrix (as the sum of the eigenvalues is equal to the trace).
The trace of this matrix is $(-3) + (-3) + (-3) = -9$, so the remaining eigenvalue is $-9 - (-4) - (-4) = -1$. Let's find the eigenvector for this eigenvalue:

$$H + I = \begin{pmatrix} -2 & i & 1 \\ -i & -2 & -i \\ 1 & i & -2 \end{pmatrix} \mapsto \begin{pmatrix}1 & i & -2 \\ 1 & -2i & 1 \\ -2 & i & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & i & -2 \\ 0 & -3i & 3 \\ 0 & 3i & -3\end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & i \\ 0 & 0 & 0 \end{pmatrix}$$

which gives us the eigenvector $\begin{pmatrix}1 & -i & 1 \end{pmatrix}^T$.

Now that we have the eigenvectors, let's orthogonalise them via Gram-Schmidt. Let $\vec v_1 = \begin{pmatrix} -i \\ 1 \\ 0 \end{pmatrix},\, \vec v_2 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix},\,\vec v_3 = \begin{pmatrix}1 \\ -i \\ 1 \end{pmatrix}$. We note that the third eigenvector is perpendicular to the first two (it belongs to a distinct eigenvalue of a Hermitian matrix), so let's take that one as our first:

Let $\vec w_1 = \vec v_3$

Let $\vec w_2 = \vec v_2$ (as it's already perpendicular to the first)

Let $\vec w_3 = \vec v_1 - \frac{\langle \vec v_1, \vec w_1 \rangle}{\langle \vec w_1, \vec w_1 \rangle} \vec w_1 - \frac{\langle \vec v_1, \vec w_2 \rangle}{\langle \vec w_2, \vec w_2 \rangle} \vec w_2$

Let's first calculate all the inner products (note that calculating an inner product between two complex vectors requires taking the conjugate of the second vector):

$\langle \vec v_1, \vec w_1 \rangle = 0,\, \langle \vec w_1, \vec w_1 \rangle = 3,\,\langle \vec v_1, \vec w_2 \rangle = i,\,\langle \vec w_2, \vec w_2 \rangle = 2$

So $\vec w_3 = \begin{pmatrix} -i \\ 1 \\ 0 \end{pmatrix} -\frac{i}{2} \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} -i \\ 2 \\ -i \end{pmatrix}$

Now we just need to normalise them:

$$\vec w_1 \mapsto \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ -i \\ 1 \end{pmatrix} \quad \vec w_2 \mapsto \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \quad \vec w_3 \mapsto \frac{1}{\sqrt{6}} \begin{pmatrix} -i \\ 2 \\ -i \end{pmatrix}$$

So the unitary matrix is $U = \begin{pmatrix} \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}} & -\frac{i}{\sqrt{6}} \\ -\frac{i}{\sqrt{3}} & 0 & \frac{2}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & -\frac{i}{\sqrt{6}} \end{pmatrix}$, which gives $\bar U^{T} H U = \operatorname{diag}(-1, -4, -4)$.

### 9.2 Accuracy and discussion

lol - @dellsystem (22:20, 18 April 2011)

Learned it. Checked eigenvectors with Wolfram but the numbers in the rest might have errors. Also, temporarily broke the site (accidentally dropped a table lol) but it's okay now, and I will stop trying to free up disk space. Not really relevant to this but I wanted to preserve it here so I will forever (assuming I don't accidentally drop any other tables anyway) have a record of my idiocy - @dellsystem (02:18, 20 April 2011)

The calculations of the inner products are wrong, if I'm not mistaken eigenvectors from distinct eigenvalues should be already orthogonal. You should only need to do Gram-Schmidt on your v2 - anonymous

Should be fixed now, thanks. Needs proofreading. - @dellsystem (16:18, 20 April 2011)

I'm pretty sure that eigenvectors found from different eigenvalues are naturally orthogonal to each other. You shouldn't need to apply G-S on the third vector (the one found with e-value = -1) - anonymous

Yep, I know. I only applied Gram-Schmidt to one of the eigenvectors found from the first eigenvalue, because although it is perpendicular to the eigenvector from the other eigenvalue, it's not perpendicular to the other one. Check the numbers, it should be right. - @dellsystem (16:43, 20 April 2011)

## 10 Question 10

Suppose that V is a real inner product space. Prove the following version of the Pythagorean theorem. If $v, w \in V$ are orthogonal, then

$$\left \|v + w \right \|^2 = \left \| v \right \|^2 + \left \| w \right \|^2$$

### 10.1 Solution

If v and w are orthogonal, then $\langle v,w \rangle = 0$.
$$\left \| v+w \right \|^2 = \langle v+w, v+w \rangle = \langle v,v \rangle + \langle v,w \rangle + \langle w,v \rangle + \langle w,w \rangle$$

Since $\langle v,w \rangle = \langle w,v \rangle$ (as it's a real inner product space):

$$\left \| v+w \right \|^2 = \langle v,v \rangle + 2 \langle v,w \rangle + \langle w,w \rangle$$

Since $\langle v, w \rangle = 0$, $\left \| v + w \right \|^2 = \langle v, v \rangle + \langle w, w \rangle$

And as $\langle v, v \rangle = \left \| v \right \|^2,\, \langle w, w \rangle = \left \| w \right \|^2$,

$\therefore \left \| v + w \right \|^2 = \left \| v \right \|^2 + \left \| w \right \|^2$ QED

### 10.2 Accuracy and discussion

Looks cool, will attempt it later - @dellsystem (22:23, 18 April 2011)

Proved it, I'll be glad if someone will take their time to convert it to LaTeX though. - Emir (04:01, 20 April 2011)

Brilliant, makes perfect sense. Just converted it to LaTeX. You should consider signing up for an account! - @dellsystem (13:00, 20 April 2011)
https://en.wikipedia.org/wiki/Thomas_Banchoff
# Thomas Banchoff

Thomas Banchoff at Berkeley in 1973 (photo by George Bergman)

Thomas Francis Banchoff (born 1938) is an American mathematician specializing in geometry. He is a professor at Brown University, where he has taught since 1967. He is known for his research in differential geometry in three and four dimensions, for his efforts to develop methods of computer graphics in the early 1990s, and most recently for his pioneering work in methods of undergraduate education utilizing online resources.

Banchoff attended the University of Notre Dame and received his Ph.D. from UC Berkeley in 1964, where he was a student of Shiing-Shen Chern.[1] Before going to Brown he taught at Harvard University and the University of Amsterdam. In 2012 he became a fellow of the American Mathematical Society.[2] He was a president of the Mathematical Association of America.[3]

## Selected works

• with Stephen Lovett: Differential Geometry of Curves and Surfaces, A. K. Peters 2010
• with Terence Gaffney, Clint McCrory: Cusps of Gauss Mappings, Pitman 1982
• with John Wermer: Linear Algebra through Geometry, Springer Verlag 1983
• Beyond the third dimension: geometry, computer graphics, and higher dimensions, Scientific American Library, Freeman 1990
• Triple points and surgery of immersed surfaces. Proc. Amer. Math. Soc. 46 (1974), 407–413. (concerning the number of triple points of immersed surfaces in $\mathbb{R}^3$)
• Critical points and curvature for embedded polyhedra. J. Differential Geometry 1 (1967), 245–256. (Gauss–Bonnet theorem for polyhedra)
https://www.mathematik.uni-ulm.de/numerik/hpc/ws18/session07/page07.html
# Access methods for individual elements of a matrix

We have the freedom to access the contents of the data array directly, but it appears more comfortable to provide access methods that permit us to read or update an element of a matrix. This allows us to keep the address arithmetic within the struct, and we no longer need to clutter our code with constructs like i*incRow + j*incCol whenever we access an element. Such methods could be implemented as follows:

```cpp
struct Matrix {
   /* ... */
   double get(std::size_t i, std::size_t j) const {
      return data[i*incRow + j*incCol];
   }
   void set(std::size_t i, std::size_t j, double value) {
      data[i*incRow + j*incCol] = value;
   }
   /* ... */
};
```

The const keyword following the parameter list of get tells that the method supposedly does not change the state of the matrix. This means that this method may be invoked even if we have just read-only access to a Matrix object, such as a Matrix that has been passed as const reference, i.e. as const Matrix&. These access methods can be used as in the following example:

```cpp
if (A.get(2, 3) > 0) {
   A.set(2, 3, -1);
}
```

This, however, can be done far more elegantly in C++ using references. Not just parameters can be passed per reference; return types can be reference types as well. This has the consequence that the result of such a method call can be used on the left side or the right side of an assignment, i.e. as a so-called lvalue or rvalue. When an lvalue is needed, the actual address is taken, and when an rvalue is required, that means the actual value is needed, the pointer of the reference is implicitly dereferenced. The good thing about references as return values is that we can use them in any way we like; the compiler does the right thing depending on how we use them.

```cpp
struct Matrix {
   /* ... */
   double& access(std::size_t i, std::size_t j) {
      return data[i*incRow + j*incCol];
   }
   /* ... */
};
```

Example for using the access method:

```cpp
if (A.access(2, 3) > 0) {    /* using the returned value as rvalue */
   A.access(2, 3) = -1;      /* using the returned value as lvalue */
}
```

This allows us to use A.access(i, j) like any other variable. In C++, this is usually further simplified by avoiding the method name access and writing A(i, j) instead of A.access(i, j). This is possible by overloading the function call operator (). Overloading in C++ means that the same name or operator is used for different types of operands. We are used to the fact that + can be used for int and double, and we know that the operator works differently for these two types. In C++ you have the freedom to extend this overloading to self-defined types like Matrix. All operators in C++ can be overloaded, including the function call operator (). Overloaded operators can be defined like normal functions or methods; just instead of a method name we write operator followed by the symbol of the operator. The following example shows how to replace the access method by an overloaded function call operator:

```cpp
struct Matrix {
   /* ... */
   double& operator()(std::size_t i, std::size_t j) {
      return data[i*incRow + j*incCol];
   }
   /* ... */
};
```

This operator can be used as follows:

```cpp
if (A(2, 3) > 0) {
   A(2, 3) = -1;
}
```

If we want to support matrices with read-only access as well, we have to provide this operator in two variants:

```cpp
struct Matrix {
   /* ... */
   const double& operator()(std::size_t i, std::size_t j) const {
      return data[i*incRow + j*incCol];
   }
   double& operator()(std::size_t i, std::size_t j) {
      return data[i*incRow + j*incCol];
   }
   /* ... */
};
```

The compiler automatically selects the first method if the matrix is read-only, and the second method otherwise. The first method returns a const double& and thereby inhibits any modifications of the returned matrix element.

Neither C nor C++ check array indices automatically. Accesses through out-of-range array indices can cause very hard to track errors.
These access methods, however, allow us to add such a range check at a central location without cluttering the rest of the application with such checks. For this we use the assert macro, which is available through #include <cassert>:

```cpp
#include <cassert>

/* ... */

struct Matrix {
   /* ... */
   const double& operator()(std::size_t i, std::size_t j) const {
      assert(i < m && j < n);
      return data[i*incRow + j*incCol];
   }
   double& operator()(std::size_t i, std::size_t j) {
      assert(i < m && j < n);
      return data[i*incRow + j*incCol];
   }
   /* ... */
};
```

## Exercise

Extend the following example by adding a copy function with the following signature. This copy function shall use the above defined access methods:

```cpp
void copy_matrix(const Matrix& A, Matrix& B) {
   /* copy A to B */
}
```

Consider and check which of the access methods is invoked for which matrix.

## Example

```cpp
#include <cassert>    /* needed for assert */
#include <cstddef>    /* needed for std::size_t and std::ptrdiff_t */
#include <printf.hpp> /* needed for fmt::printf */

enum class StorageOrder {ColMajor, RowMajor};

struct Matrix {
   const std::size_t m; /* number of rows */
   const std::size_t n; /* number of columns */
   const std::ptrdiff_t incRow;
   const std::ptrdiff_t incCol;
   double* data;

   Matrix(std::size_t m, std::size_t n, StorageOrder order) :
         m(m), n(n),
         incRow(order == StorageOrder::ColMajor? 1: n),
         incCol(order == StorageOrder::RowMajor? 1: m),
         data(new double[m*n]) {
   }
   const double& operator()(std::size_t i, std::size_t j) const {
      assert(i < m && j < n);
      return data[i*incRow + j*incCol];
   }
   double& operator()(std::size_t i, std::size_t j) {
      assert(i < m && j < n);
      return data[i*incRow + j*incCol];
   }
   void init() {
      for (std::size_t i = 0; i < m; ++i) {
         for (std::size_t j = 0; j < n; ++j) {
            data[i*incRow + j*incCol] = j * m + i + 1;
         }
      }
   }
   void print() {
      for (std::size_t i = 0; i < m; ++i) {
         fmt::printf("  ");
         for (std::size_t j = 0; j < n; ++j) {
            fmt::printf(" %4.1lf", data[i*incRow + j*incCol]);
         }
         fmt::printf("\n");
      }
   }
};

int main() {
   Matrix A(7, 8, StorageOrder::ColMajor);
   A.init();
   A(2, 3) = -1;
   fmt::printf("A =\n");
   A.print();
   delete[] A.data;
}
```
https://testbook.com/question-answer/a-simply-supported-beam-of-span-length-l-and-flexu--60424fcb776e066c327ed6b7
# A simply supported beam of span length L and flexure stiffness EI has another spring support at the centre span of stiffness K as shown in figure. The central deflection of the beam due to a central concentrated load of P would be

This question was previously asked in GPSC AE CE 2017 Official Paper (Part B - Civil)

1. $$\frac{{P{L^3}}}{{48EI}} +\frac{P}{K}$$
2. $$\frac{{P{L^3}}}{{48EI + K{L^3}}}$$
3. $$\frac{{P{L^3}}}{{48EI}} \times \frac{P}{K}$$
4. $$\frac{{P{L^3}}}{{48EI}} + K$$

Option 2 : $$\frac{{P{L^3}}}{{48EI + K{L^3}}}$$

## Detailed Solution

Concept:

For a simply-supported beam with central load (P), the deflection at the centre is:

$$\delta_{centre}=\frac{Pl^3}{48EI}$$

This deflection due to the central load will be resisted by the spring through its stiffness. Compatibility at the centre requires:

Net deflection of spring = Net deflection on beam

Calculation:

Let the reaction due to the spring be R.
Now, Net deflection on beam will be: (Deflection due to central load P) - (Deflection due to spring force R) $$\Delta_{beam}=\frac{{{\rm{P}}{{\rm{L}}^3}}}{{48{\rm{EI}}}} - \frac{{{\rm{R}}{{\rm{L}}^3}}}{{48{\rm{EI}}}}$$ Deflection of spring will be: $$\Delta_{spring}=\frac{R}{K}$$ Net deflection of spring = Net deflection on beam $$\begin{array}{l} \therefore \frac{{{\rm{P}}{{\rm{L}}^3}}}{{48{\rm{EI}}}} - \frac{{{\rm{R}}{{\rm{L}}^3}}}{{48{\rm{EI}}}} = {{\rm{\Delta }}_{spring}}\\ \Rightarrow \frac{{{\rm{P}}{{\rm{L}}^3}}}{{48{\rm{EI}}}} - \frac{{{\rm{R}}{{\rm{L}}^3}}}{{48{\rm{EI}}}} = \frac{{\rm{R}}}{{\rm{K}}}\\ \Rightarrow {\rm{R}}\left( {\frac{1}{{\rm{K}}} + \frac{{{{\rm{L}}^3}}}{{48{\rm{EI}}}}} \right) = \frac{{{\rm{P}}{{\rm{L}}^3}}}{{48{\rm{EI}}}}\\ \Rightarrow {\rm{R}}\left( {\frac{{48{\rm{EI}} + {\rm{K}}{{\rm{L}}^3}}}{{48{\rm{EIK}}}}} \right) = \frac{{{\rm{P}}{{\rm{L}}^3}}}{{48{\rm{EI}}}}\\ {\rm{R}} = \frac{{{\rm{P}}{{\rm{L}}^3}{\rm{K}}}}{{48{\rm{EI}} + {\rm{K}}{{\rm{L}}^3}}}\\ \therefore {{\rm{\Delta }}_{spring}} = \frac{R}{K} = \frac{{P{L^3}}}{{48EI + K{L^3}}} \end{array}$$
https://eprint.iacr.org/2018/991
## Cryptology ePrint Archive: Report 2018/991

Reconsidering Generic Composition: the Tag-then-Encrypt case

Francesco Berti and Olivier Pereira and Thomas Peters

Abstract: Authenticated Encryption ($\mathsf{AE}$) achieves confidentiality and authenticity, the two most fundamental goals of cryptography, in a single scheme. A common strategy to obtain $\mathsf{AE}$ is to combine a Message Authentication Code ($\mathsf{MAC}$) and an encryption scheme, either nonce-based or $\mathsf{iv}$-based. Out of the 180 possible combinations, Namprempre et al. [25] proved that 12 were secure, 164 insecure and 4 were left unresolved: A10, A11 and A12, which use an $\mathsf{iv}$-based encryption scheme, and N4, which uses a nonce-based one. The question of the security of these composition modes is particularly intriguing as N4, A11, and A12 are more efficient than the 12 composition modes that are known to be provably secure.

We prove that: $(i)$ N4 is not secure in general, $(ii)$ A10, A11 and A12 have equivalent security, $(iii)$ A10, A11, A12 and N4 are secure if the underlying encryption scheme is either misuse-resistant or "message malleable", a property that is satisfied by many classical encryption modes, $(iv)$ A10, A11 and A12 are insecure if the underlying encryption scheme is stateful or untidy.

All the results are quantitative.

Category / Keywords: secret-key cryptography / Authenticated Encryption, generic composition, tag-then-encrypt, attacks and proofs

Date: received 15 Oct 2018

Contact author: francesco berti at uclouvain be

Available format(s): PDF | BibTeX Citation

Short URL: ia.cr/2018/991

[ Cryptology ePrint archive ]
https://www.nature.com/articles/s41598-020-73583-2?error=cookies_not_supported
# How drain flies manage to almost never get washed away

An Author Correction to this article was published on 23 March 2021

## Abstract

Drain flies, Psychodidae spp. (Order Diptera, Family Psychodidae), commonly reside in our homes, annoying us in our bathrooms, kitchens, and laundry rooms. They like to stay near drains where they lay their eggs and feed on microorganisms and liquid carbohydrates found in the slime that builds up over time. Though they generally behave very sedately, they react quite quickly when threatened with water. A squirt from the sink induces them to fly away, seemingly unaffected, and flushing the toilet with flies inside does not necessarily whisk them down. We find that drain flies’ remarkable ability to evade such potentially lethal threats does not stem primarily from an evolved behavioral response, but rather from a unique hair covering with a hierarchical roughness. This covering, which has not been previously explored, imparts superhydrophobicity against large droplets and pools and antiwetting properties against micron-sized droplets and condensation. We examine how this hair covering equips them to take advantage of the relevant fluid dynamics and flee water threats in domestic and natural environments including: millimetric-sized droplets, mist, waves, and pools of water. Our findings elucidate drain flies’ astounding ability to cope with a wide range of water threats and almost never get washed down the drain.

## Introduction

Water provides amazing opportunities for life at the interface, but can also pose potentially lethal threats to insects.
The danger stems from the physics at fluid interfaces. The small length scale of most insects (described by some characteristic length $$\ell$$) causes the force of surface tension ($$\propto \ell$$) to exceed the force the organism can exert by either its strength ($$\propto \ell ^2$$)1 or weight ($$\propto \ell ^3$$). This means that if an insect’s appendage gets stuck in water, it lacks the strength or weight to extricate itself. The ratio of these forces demonstrates the dominant influence of surface tension as $$\ell$$ becomes small. To mitigate the risk of surface tension, many insects and plants rely on specialized surface features that decrease water’s ability to adhere to or wet them; i.e. they become more hydrophobic so they do not get stuck in water2,3,4,5. Appropriate surface chemistry increases hydrophobicity, but alone is limited to producing a maximum solid-liquid contact angle $$\theta _o$$ of $$120^{\circ }$$6, which quantifies hydrophobicity7. Augmenting chemistry with roughness can increase the apparent contact angle8 $$\theta _a$$, and superhydrophobicity ($$\theta _a > 150^{\circ }$$) occurs when air becomes entrapped in the valleys of roughness elements (Cassie wetting) with increasing air fraction raising $$\theta _a$$9. Many insects take advantage of this wetting physics and attain superhydrophobicity by both coating their bodies with oil or wax to optimize their surface chemistry3,10,11, and using hierarchical roughness structures to maximize the air fraction (minimize the solid-liquid contact) between the liquid and their bodies4. The roughness appears in many forms in insects and plants alike, including: nanopillars on drone fly12 and dragonfly wings13, micropapillae on lotus leaves14, micropapillae with nanofolds on rose petals15, needle shaped setae with nanogrooves on water strider11,16 and crane fly legs17, and arrays of hair with star-shaped cuticular projections on termite wings18. 
Insect surface chemistry and roughness help them to stay dry, but their interactions with water bring other challenges as well. Surface tension and superhydrophobicity also allow many insects and spiders to stand and move on the surface of ponds and other bodies of water by supplying them with most of the static and dynamic weight support required19,20,21,22. If insects move too quickly when they are in search of food or to evade predators23,24, the dynamic pressure they generate overcomes the interfacial pressure and they pierce the interface with potentially hazardous results25,26. To deal with this and other challenges, small water walkers have developed several methods of locomotion at the water interface including: rowing26,27,28,29, galloping25,30, sailing31, jumping23, meniscus climbing32, and Marangoni propulsion33,34,35. A challenge with fog arises because fog droplets are approximately the same size as the insect’s roughness elements36. So, water accumulates in the valleys of the roughness37, leading to Wenzel wetting8 (no air trapped in the valleys), which results in lower contact angles $$\theta _a$$ and greater adhesion7. This adds weight and increases energy expenditure38. As the small fog droplets coalesce, capillary forces can fold thin wing membranes rendering them ineffective for flight39. Some fliers, such as mosquitoes, have developed high-acceleration flapping motions that fling droplets from their wings at takeoff to mitigate buildup40. Fog can also disturb an insect’s gyroscopic sensors, causing fliers to lose control and fall to the ground41.
The force increases as insect size increases reaching a maximum of $$5\times 10^4$$ dyn, the force on an unyielding surface43. Even avoiding direct impact, passing droplets generate hazardous disturbances in the air that fliers must mitigate44,45. Given all the threats that insects must deal with to live around water, we find that drain flies are particularly well adapted to handle them. Of the nearly 2900 described species of drain flies worldwide46,47, most live in aquatic or semi-aquatic habitats including: in wet woodland patches48, near leaking septic lines, in wet shaded areas where mold and algae grow49, and in our homes. In this study we report on our investigations into the micro- and nano-structures found on the dense array of hairs that cover drain fly bodies and endow them with superhydrophobicity. These specialized hair structures play a central role in their ability to live in damp conditions and escape from the water hazards that confront them daily. ## Results and discussion We first examine the drain fly’s morphology. We then theoretically and experimentally study how this morphology decreases wetting in various wetting circumstances. Finally, we study the interaction of live drain flies with water as we confront them with various water threats that approximately simulate real life encounters. ### Body morphology A drain fly’s body divides into the head region, thorax, and abdomen and is approximately $$2.61 \pm 0.36$$ mm long and $$1.14 \pm 0.19$$ mm wide with a mass of $$1.9 \pm 0.4$$ mg (mean ± standard deviation) (Fig. 1a–c). The head comprises compound eyes, a pair of antennae, and maxillary palps near the mouth (Fig. 1a–c). The thorax bears a single pair of membranous wings (up to 2.9 mm long, 1.5 mm wide, and varying between 0.8 and 8.8 $$\upmu \hbox {m}$$ thick) that are held flat, covering the thorax and abdomen at rest. Six legs extend from the bottom of the thorax. 
Three different types of hair densely cover the fly’s entire body and appendages. The macrotrichia are large hairs that predominantly populate the wings, extending to $$104 \pm 20\,\upmu \hbox {m}$$ in length with a diameter of $$2.7 \pm 0.4\,\upmu \hbox {m}$$. They protrude upward at an angle from sockets on the veins with approximately $$14 \,\upmu \hbox {m}$$ spacing, and overlap to form a crisscross pattern that protects the wing membrane as shown in Fig. 1d,g. Along the edge of the wing, the macrotrichia become longer and much denser with approximately $$7.9 \pm 1.4\,\upmu \hbox {m}$$ spacing (Fig. 1e). The legs also possess sparse scatterings of hairs very similar to the macrotrichia. A second type of large hair is similar to the macrotrichia, but oblong in shape with a flattened, leaf-like appearance ($$84.4 \pm 8.9\,\upmu \hbox {m}$$ long and $$9.2 \pm 1.1\,\upmu \hbox {m}$$ wide) (Fig. 1f,i). These hairs are most prominent on the legs (Fig. 1f), the antennae, and in patches along the widest region of the wing. The smallest hairs are the microtrichia, which protrude from the wing membrane, legs, and antennae and lie underneath the larger macrotrichia and leaf-like hairs (Fig. 1j,i). Their conical shape protrudes $$4.28 \pm 0.24\,\upmu \hbox {m}$$ in length with a diameter of $$0.39 \pm 0.02\,\upmu \hbox {m}$$ at the base and $$0.14 \pm 0.01\,\upmu \hbox {m}$$ at the tip (Fig. 1j). Spaced 3–4 $$\upmu \hbox {m}$$ apart, they bend with random orientations but preferentially toward the distal region of the wing. As the macrotrichia act as the first and most abundant layer of defense on the wings, which comprise most of the upper surface area of the fly, we investigate their microstructure in more detail. Each hair has approximately ten ridges that start small near the base and grow to a height of approximately 790 nm as they run along the axis. The top of each ridge serrates into conical barbs pointing toward the tip of the macrotrichium (Fig. 1e,h).
Shallow nanogrooves decorate the valleys and ridge walls with $$98 \pm 10$$ nm spacing (Fig. 1h). Small holes ($$101 \pm 22$$ nm diameter) spaced $$780 \pm 183$$ nm apart perforate the valleys connecting to the interior of the hollow shaft (dark spots visible in Fig. 1e,h). The leaf-like hairs also possess all the same microstructures, but have many more rows of ridges due to their larger width (Fig. 1i,k). The morphology of the hair covering forms an excellent protective layer that helps to keep drain flies dry. In the following three sections we theoretically and experimentally examine the wetting of the wings as an example, but expect similar results on other portions of the body as well due to their similar hair coverings.

### Wetting

#### Millimetric-sized droplets and pools

The drain fly’s hair covering acts as a hierarchical roughness that produces superhydrophobicity. We take the wings as an example and theoretically examine the effect of the drain fly’s hierarchical roughness when it comes in contact with a millimetric-sized droplet or pool of water. The individual macrotrichia act as the peaks of the largest roughness elements, and their tight spacing allows water to form capillary bridges between hairs. The macrotrichia cover approximately 70% of the projected surface area of the wing, providing a solid-liquid contact fraction of $$f_{macro} = 0.7$$. The steepness of the valley walls on each macrotrichium also likely enables capillary bridges to form between the ridges. The peaks of these ridges cover approximately 10% of the outer circumference of the hair; $$f_{ridge} = 0.1$$. Finally, the barbs that serrate the ridges reduce the contact area on the ridges to approximately 25–75% ($$f_{barb} = 0.25 - 0.75$$), with lower values occurring as the barbs become pointier at the hair tip. Hence, the total solid–liquid contact fraction is $$f_s = f_{macro} f_{ridge} f_{barb} = 0.0175 - 0.0525$$.
The Cassie–Baxter equation9 for calculating the apparent contact angle $$\theta _a$$ is

$$\begin{aligned} \cos \theta _a = f_s \cos \theta _o - f_g, \end{aligned}$$ (1)

where $$f_g$$ is the liquid–gas contact fraction beneath the drop, and $$\theta _o$$ is the chemical contact angle. Making the simplifying assumption that $$f_g = 1 - f_s$$, this reduces to

$$\begin{aligned} \cos \theta _a = f_s(1 + \cos \theta _o) - 1. \end{aligned}$$ (2)

Using the chemical contact angle of chitin ($$\theta _o= 105^{\circ }$$), which commonly composes insect cuticles2,50,51, the apparent contact angle evaluates to $$\theta _a = 164^{\circ } - 171^{\circ }$$. Further assuming that a droplet only contacts the upper one third of each macrotrichium reduces $$f_s$$ to approximately 0.0058, which yields $$\theta _a = 175^{\circ }$$. When we place a droplet of water on a fly’s wing, as shown in Fig. 2a, we see that the droplet maintains a spherical shape and that $$\theta _a$$ lies near 180$$^{\circ }$$. This shows that a Cassie–Baxter wetting state must exist and that the hierarchical roughness of the drain fly helps it stay dry by minimizing its solid–liquid contact fraction. If the droplet detaches from the syringe, it quickly slides to the side and falls off the wing. We see similar superhydrophobicity on the fly’s body and legs as well, verifying that the covering of leaf-like hairs on the legs follows similar physics resulting in high $$\theta _a$$. As drain flies often encounter impure water, we also test their contact angle when a surfactant (sodium dodecyl sulfate, SDS) is present. At low surfactant concentrations ($$1.00 \times 10^{-3}$$ mol/L and $$4.00 \times 10^{-3}$$ mol/L) below the critical micelle concentration (CMC, $$9.97 \times 10^{-3}$$ mol/L52), the surface tension of the solutions decreases below the surface tension of water (to $$\sigma = 63$$ mN/m and 46 mN/m, respectively52), but $$\theta _a$$ is still near 180$$^{\circ }$$.
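As a numerical check of the Cassie–Baxter estimate in Eqs. (1)–(2), the following short script (ours, not from the original study) recovers the quoted apparent contact angles from the contact fractions estimated above:

```python
import math

def cassie_baxter_angle(f_s, theta_o_deg=105.0):
    """Apparent contact angle from Eq. (2): cos(theta_a) = f_s * (1 + cos(theta_o)) - 1."""
    cos_a = f_s * (1.0 + math.cos(math.radians(theta_o_deg))) - 1.0
    return math.degrees(math.acos(cos_a))

# Total solid-liquid contact fractions estimated from the hair morphology
for f_s in (0.0525, 0.0175, 0.0058):
    print(f"f_s = {f_s:.4f} -> theta_a = {cassie_baxter_angle(f_s):.0f} deg")
# f_s = 0.0525 -> theta_a = 164 deg
# f_s = 0.0175 -> theta_a = 171 deg
# f_s = 0.0058 -> theta_a = 175 deg
```

The steep rise of $$\theta _a$$ toward 180$$^{\circ }$$ as $$f_s$$ shrinks illustrates why the three-fold hierarchy (hairs, ridges, barbs) is so effective: each level multiplies down the contact fraction.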
At 150% of the CMC ($$\sigma \approx 39$$ mN/m) we see the contact angle decrease to 140$$^{\circ }$$. Similarly, dish soap, which contains surfactants and other compounds, does not decrease $$\theta _a$$ at low concentrations (1 drop per 100 mL), but at higher concentrations (6 drops per 100 mL) the wing wets and $$\theta _a$$ decreases to approximately 45$$^{\circ }$$. Hence, we see that drain flies’ hair covering enables them to stay dry even in the presence of low concentrations of surfactants. Drain flies’ liquid repellency appears to be specific to water, with lower surface tension liquids causing wetting. When a droplet of 5 cSt silicone oil ($$\sigma = 20$$ mN/m) comes in contact with the wing, it quickly penetrates the hairs (no entrapped air, i.e. Wenzel wetting), spreading horizontally through them until it covers the entire wing surface and drains over the edges as shown in Fig. 2b. Similar results occur for olive oil ($$\sigma = 36$$ mN/m), ethanol ($$\sigma = 22$$ mN/m), and PP1 (perfluoro-2-methylpentane, $$\sigma = 11$$ mN/m), showing that although the special morphology of drain flies induces superhydrophobicity it does not induce omniphobicity.

#### Micron-sized droplets

As droplets become sufficiently small they can pass between the hairs and reach the main body and appendages. Once again, we take the upper surface of the wings as an example. Figure 3 shows that the small spacing between the macrotrichia (Fig. 1d,g) inhibits droplets with diameters $$d_d$$ larger than $$25 \,\upmu \hbox {m}$$ from passing. As the droplet size decreases towards zero the probability of passing the macrotrichia layer increases to 30% $$(1 - f_{macro})$$ (Fig. 3, solid black line; calculation explained in caption). Having passed the macrotrichia layer, a droplet must also pass the smaller but more tightly packed microtrichia (Fig. 1j) to wet the wing membrane.
The microtrichia inhibit all droplets larger than about $$4.5 \,\upmu \hbox {m}$$ from passing, with the probability that smaller droplets can pass and wet the membrane increasing sharply to 91% ($$1 - f_{micro}$$, where $$f_{micro} = 0.09$$ is the projected surface area fraction of the microtrichia covering) (Fig. 3, dashed blue line). The combined probability of the water droplets passing both hair layers is shown in Fig. 3 by the dash-dotted magenta line, which indicates that only droplets smaller than $$4.5 \,\upmu \hbox {m}$$ have any chance of wetting the wing membrane, with a maximum chance of 27% ($$(1 - f_{macro})(1 - f_{micro})$$) for the smallest droplets. Droplets that contact the wing membrane (or other body parts), as opposed to the hairs, wet more surface area (shown in the next section), making liquid removal more difficult.

#### Condensation

One final way for water to collect on a drain fly is by condensation at the dew point. Figure 4 shows the process of condensation of water on various wing surfaces inside an environmental scanning electron microscope. On the macrotrichia (Fig. 1h), condensation initiates in the valleys (Fig. 4a), likely due to the lower energy requirement in nanoscale V-shaped structures53. In the early stages, water collects as spherical sections and elongated filaments, as indicated in Fig. 4a by the green and blue arrows respectively, as previously studied for synthetic structures54. As water continues to condense, the volume increases so much that a groove cannot confine a growing droplet, which bulges out, forming a more spherical shape (Fig. 4b). The nanogrooves inside the valleys (Fig. 1h) and the barbs on the ridges pin the contact line in both the axial and circumferential directions, allowing droplets to exhibit a range of local contact angles as seen on the left and right of the droplet in Fig. 4b.
Further growth causes the droplets to step from one sharp edge to another as indicated by the red arrows in the image sequence shown in Fig. 4d–g. Water also collects as small droplets on the microtrichia and as droplets that grow and combine to form pools on the wing membrane (Fig. 4c). The collection of water on the macrotrichia and microtrichia results in more spherical droplets than collection on the wing membrane. This decreases water’s contact area, allowing for easier removal when the fly accelerates.

### Water threats and encounters with live drain flies

To observe the wetting properties of drain flies in practice, we investigate the fluid dynamics as water comes in contact with live drain flies and comment on the flies’ typical reaction to various threats, as summarized in Table 1. We first look at millimetric-sized droplets, approximating the fly’s interaction with rain, dripping liquid, and sprays from faucets or shower heads. Next, we study mists to approximate natural fog and steamy bathrooms. Finally, we examine the fly’s interaction with pools and small waves to simulate ponds, puddles, streams, toilets and water in the P-trap of drains.

#### Droplets

When droplets of water fall towards a fly, we observe different droplet-fly interactions depending on the droplet size, velocity, and quantity. We first examine a single droplet falling towards a fly from above. As the fly in Fig. 5a sits on the floor, a 2.2 mm diameter droplet falls toward it at 0.95 m/s. The fly senses its approach, either visually55 or with sensilla56 that feel the air flow, and begins to raise its wings preparing for takeoff ($$t = 1$$ ms). The droplet impacts the floor behind the fly as it leaps into the air for a successful escape. In Fig. 5b, a similar set of circumstances occurs except this time the droplet falls from directly above the fly with twice the velocity.
The fly does not respond to the threat before impact and the droplet smashes the fly into the ground, spreading over the upper surfaces of the fly ($$t = 0.6$$–1.8 ms). The fly’s superhydrophobic hair covering causes the droplet to quickly glide off the wing onto the floor and the fly leaps into flight, which is the typical response (Table 1), with the only visible damage being the loss of some hair ($$t = 2.4$$–4.2 ms). A spray of droplets can be more or less harmful to a fly than a single droplet because of the various droplet sizes and velocities present. In Fig. 5c, a spray of numerous droplets approaches a fly standing on the floor. Three small droplets (0.13–0.20 mm diameters) with relatively low velocities (1.87–2.91 m/s) lead the group and impact first, rebounding off the fly ($$t = 0$$–5.0 ms). These impacts do not harm the fly but alert it to potential danger and induce it to flee ($$t = 4.0$$–7.0 ms). Table 1 shows that although it is possible for the flies to sense incoming threats (Fig. 5a), they typically do not react until after droplets impact them (Fig. 5b,c). Once droplet impact occurs the flies react quickly. We measure their reaction time, from impact to the moment they begin to raise their wings to flee, as $$4.4 \pm 1.3$$ ms (mean ± standard deviation, from 10 trials). Sometimes, however, the flies’ reaction time is not fast enough to avoid additional impacts. In Fig. 5d, a spray of droplets approaches a fly standing on the wall. This time the first impact occurs on the fly’s antenna with a 0.5 mm droplet at 7.70 m/s ($$t = 0$$–0.2 ms, blue arrow). A cluster of three similar sized droplets quickly follows ($$t = 0.2$$–0.4 ms, red arrows), impacting the fly on the head, splashing ($$t = 0.8$$ ms) and knocking it off the wall ($$t = 3.4$$–10.6 ms). The fly endures multiple additional impacts (Supplementary video 4).
When the spray terminates, the fly gradually works its way out of the newly formed puddle ($$t = 321$$ ms) and walks away without the appearance of major damage, but with a much slower gait than normal. We find that flies can typically recover from this type of attack (Table 1). From the multiple videos of water droplet impacts taken, we see that the drain flies always stay dry (i.e., water does not adhere to them) and can flee and recover from the first few impacts. After several repeated droplet impacts, the flies begin to move slower and often incur damage to their thin appendages, such as torn wings, damaged legs, or broken antennae. In one case we subjected a fly to repeated large droplet impacts (about six), similar to Fig. 5b, until it became unresponsive to additional impacts and eventually died. We expect the threshold for the number of impacts to cause injury or death to vary with droplet size, velocity, and impact location and orientation. If the fly comes in contact with a droplet of another liquid, the results differ greatly. Figure 5e shows a 1.7 mm droplet of 5 cSt silicone oil impacting a fly at a low velocity of 0.48 m/s. As the fly stands against the tank wall ($$t = 0$$ ms), the silicone oil impacts its head ($$t = 3.0$$ ms), wetting the fly ($$t = 6.0$$ ms). The fly jumps, turns in the air, and lands on its head ($$t = 14.0$$–66.2 ms). The droplet sticks the fly to the floor and the fly jerks repeatedly, trying to escape without success (Supplementary video 5). The fly died. Similar experiments were performed with olive oil, ethanol, and PP1 with the same results for each (Table 1), except that the fly did not die when impacted with PP1. This likely occurs because the high volatility of PP1 causes the droplet to evaporate quickly, freeing the fly.
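One way to compare the gentle rebounds in Fig. 5c with the splash in Fig. 5d is through the Weber number, $$We = \rho d v^2/\sigma$$, a standard dimensionless measure of droplet inertia relative to surface tension. The paper reports only the raw diameters and velocities; the comparison below is our own illustrative estimate:

```python
def weber_number(d_m, v_ms, rho=1000.0, sigma=0.072):
    """We = rho * d * v^2 / sigma for a water droplet of diameter d (m) at speed v (m/s)."""
    return rho * d_m * v_ms**2 / sigma

# Single droplet from Fig. 5a: 2.2 mm at 0.95 m/s (modest inertia; spreads and glides off)
print(round(weber_number(2.2e-3, 0.95)))   # 28
# Spray droplet from Fig. 5d: 0.5 mm at 7.70 m/s (inertia-dominated; splashes on impact)
print(round(weber_number(0.5e-3, 7.70)))   # 412
```

The order-of-magnitude jump in $$We$$ between the two cases is consistent with the observed transition from clean rebound and runoff to splashing that knocks the fly from the wall.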
#### Mist

When a drain fly sits in mist, water gradually collects on its hair (through droplet impacts and condensation) and droplets smaller than $$4.5 \,\upmu \hbox {m}$$ have a small chance of passing through the hair covering to wet the wing membrane (discussed in the “Micron-sized droplets” section above). Figure 6a shows a drain fly that has been sitting in mist with a mean droplet diameter of approximately $$10 \,\upmu \hbox {m}$$ for several minutes. Hence, there should be minimal wetting of the wing membrane from micro droplets. This mist approximates natural fog, which has a mean droplet diameter between 8 and $$24 \,\upmu \hbox {m}$$36. Small droplets collect on the macrotrichia at the edge of the wings (Fig. 6a, $$t=0$$ ms). As is typical of our observations, the mist eventually induces the fly to move and it takes flight (Table 1). The flapping of its wings generates accelerations, a, up to 400 g at the wing tips. A simple force balance, $$m_d a = \sigma L$$ (where $$m_d$$ is the mass of a spherical droplet, $$\sigma$$ is the surface tension coefficient, and L is the droplet-hair contact length, which we approximate as the macrotrichium diameter) shows that such high accelerations should remove droplets larger than approximately $$45 \,\upmu \hbox {m}$$. Smaller contact lengths would allow the removal of even smaller droplets, further drying the fly and minimizing its effective mass even more. The collection of mist droplets on the hair and the tendency to occasionally take flight help to keep the drain fly dry. The negative effects of wing folding and loss of flight control that mosquitoes experience39,41 are not seen in drain flies. Mist may also aid drain flies in water intake, as it is the only circumstance in which we frequently observed drain flies urinating (caught on video four times, and observed several more times). The flies manage to stay dry during urination due to a conical spike and bulge that protrude from the anus.
After extending beyond the hair covering as shown in Fig. 6b ($$t = 0$$–1000 ms), the fly excretes a 0.2 mm droplet while it retracts the spike ($$t = 1200$$–1856 ms). The droplet launches away from the fly, with sufficient velocity to avoid further contact and self-wetting ($$t = 1858$$–1860 ms). As the bulge retracts, the spike reemerges, followed by the full retraction of both (Supplementary video 7).

#### Pools and waves

Drain flies’ erratic manner of flight can lead to collisions that knock them out of the air and onto the surface of a pool. Figure 7a shows the typical situation following such an impact. As the fly drifts away from the collision site, it rolls on the pool making several attempts to stand ($$t = 0$$–400 ms). The fly manages to stand in less than a second and quickly leaps into flight ($$t = 500$$–600 ms) by moving its four hind legs in a rowing-like jump, similar to that of a water strider11,28 (Supplementary video 8; the jumping motion is smoother in some other observations). In all observations, the flies never stay on the pool surface for more than a couple of seconds (Table 1) and do not exhibit any kind of water walking motion, seeming to prefer to locomote in the air and rest on solid surfaces. A drain fly’s ability to support its weight on and leap from the pool surface comes from its small size. The Baudoin number is the ratio between the force of gravity and the maximum surface tension force (when the force vector points directly up) and is defined as $$Ba=m_f g/\sigma P$$, where $$m_f$$ is the fly’s mass, g is the acceleration of gravity, $$\sigma$$ is the surface tension coefficient, and P is the perimeter of the depression in the pool surface. With $$m_f = 1.9$$ mg and $$P = 7.7$$ mm for the contacting portions of all six legs, $$Ba = 0.034$$ for a fly in the standing position. This means that surface tension can exert a force up to 30 times (1/Ba) the weight of the fly, explaining its ability to stand on the surface.
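The Baudoin-number estimate above is straightforward to reproduce; this minimal sketch (ours) uses the quoted mass, surface tension of water, and the two contact perimeters reported for different fly orientations:

```python
def baudoin(m_kg, P_m, sigma=0.072, g=9.81):
    """Ba = m * g / (sigma * P): weight relative to the maximum surface-tension force."""
    return m_kg * g / (sigma * P_m)

m_f = 1.9e-6                        # fly mass, kg
Ba_stand = baudoin(m_f, 7.7e-3)     # six legs contacting the surface
Ba_flat = baudoin(m_f, 22.2e-3)     # lying on wings, back, and antennae
print(f"standing: Ba = {Ba_stand:.3f}, safety factor 1/Ba = {1/Ba_stand:.0f}")
print(f"flat:     Ba = {Ba_flat:.3f}, safety factor 1/Ba = {1/Ba_flat:.0f}")
# standing: Ba = 0.034, safety factor 1/Ba = 30
# flat:     Ba = 0.012, safety factor 1/Ba = 86
```

Since $$Ba \ll 1$$ in either orientation, surface tension alone comfortably supports the fly; the margin is what permits the rowing-like jump off the surface.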
The perimeter P increases for other fly orientations on the surface, increasing the safety margin (e.g., on its wings, back, and antennae $$P = 22.2$$ mm and $$1/Ba = 86$$). As a drain fly stands near a lapping pool of water, oncoming waves can threaten to maul or submerge it if it does not fly away to safety. In Fig. 7b the fly does not move as the wave impacts its right legs, pins them to the floor, and passes over the top ($$t = 0$$–100 ms). The wave continues over the body and wings, only contacting the tops of the hairs, and entrapping a thin air layer, known as a plastron, which appears like a bubble surrounding the fly ($$t = 150$$–300 ms). The plastron allows the fly to breathe while submerged57,58, but also pins the fly to the floor or wall of the container, thwarting its escape. If the drain flies remain undisturbed in water, they do not appear to attempt to escape and eventually die (Table 1). For an insect to survive submerged indefinitely, the surface area of its plastron must be large enough that the rate of $$\hbox {O}_2$$ and $$\hbox {CO}_2$$ exchange with the water suffices to meet the insect’s metabolic needs59. We found that the survival rate for the drain flies decreases with submergence time. All drain flies submerged for less than 5 h lived: 3 survived 3 h of submergence, and 4 others survived shorter test durations (100% survival). Following approximately 5 h of submergence, 2 of 4 flies died and 2 lived at least 24 h after release (50% survival rate). Two flies left submerged overnight died. We observed that the drain flies occasionally move their appendages and deform the plastron walls while submerged. Previous researchers60,61 explain that this behavior can increase fluid flow over the plastron, and hence $$\hbox {O}_2$$ and $$\hbox {CO}_2$$ exchange, helping insects survive longer submergence. Although submergence by a wave can kill drain flies, it does not necessarily constitute a death sentence. In Fig.
7c we see another fly encapsulated in a plastron, submerged in a pool, and pinned to the container wall. To see if the flies have the ability to escape, in this case we stimulate action by tapping lightly on the container, encouraging the fly to move. The fly gradually travels diagonally upwards over a distance of approximately 10–20 mm until it arrives in the position seen at $$t = 0$$ ms. One final exertion by the fly at $$t = 16$$ ms dislodges it from the wall, allowing it to rise to the surface ($$t = 55$$–155 ms). Upon contacting the surface, the plastron pops, surface tension launches the fly upwards into the air ($$t = 255$$–270 ms), and the fly emerges unharmed. From this event we see that a drain fly with a plastron is buoyant, and simple calculations show that the ratio of the buoyant force to the fly’s weight is $$F_b/m_f g \approx 1.5$$–2.7 (assuming axisymmetry and an equilateral triangle cross section for volume calculations of the submerged fly; e.g., Fig. 7c, $$t = 55$$ ms). A drain fly’s plastron gradually dissolves, and when fully dissolved, the fly sinks. This shows that the plastron not only allows the fly to breathe while submerged, but enables it to rise to the pool’s surface as long as it does not pin the fly to a solid surface.

## Conclusions

We have studied how drain flies survive in wet conditions and manage many water related threats. As with many other insects, superhydrophobicity is key to drain flies’ survival in their preferred aquatic or semi-aquatic habitats. If their hydrophobicity were removed, they would easily wet, water’s surface tension would hold them fast, and they would die, as demonstrated herein with other wetting liquids. Drain flies’ superhydrophobicity comes from a combination of their sufficiently high chemical contact angle and the three-fold hierarchical roughness found in their unique hair covering that produces an apparent contact angle near $$180^{\circ }$$.
The roughness of the hair covering ranges in scale from the micron-sized arrays of the macrotrichia and leaf-like hairs, down to the nanoscale ridges, valleys, barbs, and grooves found on each hair. The microtrichia provide a secondary layer of defense that helps the flies stay dry when they lose some of the larger hairs or are confronted with condensation or the micron-sized droplets found in mist. Live drain flies prove to be very resilient to the various water threats they encounter. Droplets impacting drain flies exhibit rapid lateral spreading, retraction, and even rebound, similar to impact on other superhydrophobic surfaces. If large impacts prove unavoidable, the drain flies may sustain minor injuries to their thin appendages and even die. Mist gradually accumulates into larger droplets on the flies, but the hair covering traps the majority of the water. It adheres to the hairs in a spherical shape that more easily detaches when subjected to sufficient accelerations, such as the flapping of the drain fly’s wings. When needed, drain flies can also stand on and leap from the surface of a pool of water. If they become submerged, their hair covering entraps a thin air layer (a plastron) over the surface of their entire body. This air layer decreases their effective density, enabling them to float to the surface of the pool, where they can escape. If they become pinned to a solid beneath the water surface, the air layer, combined with the exchange of $$\hbox {CO}_2$$ and $$\hbox {O}_2$$ with the water, enables the flies to survive submerged for up to approximately 5 h. We see that drain flies’ specialized hair covering combined with their rapid flight from threats enables them to stay dry and safe in both the wet environments of our homes and in nature. This new understanding of the unique morphology of drain flies’ hair structure may aid in the design of future superhydrophobic surfaces or the development of appropriate wetting pesticides.
## Materials and methods

### Insect collection and sample preparation

Adult drain flies were collected from houses (bathrooms, kitchens, or other damp locations) at KAUST using narrow mouth plastic bottles. They were transported to the laboratory and maintained at $$20^{\circ }\hbox {C}$$. Live insects were used for water threat and encounter studies within 48 h of capture. To collect wing samples for imaging purposes, specimens were sedated using formaldehyde fumes. The samples were then dried at room temperature inside a laminar flow cabinet and stored in paraffin-sealed Petri plates for later use. Wings were then carefully dissected using a surgical scalpel and attached to glass slides using double-sided tape. To remove hair from the wing surface, the wing was sprayed with ample amounts of Milli-Q water.

### Photography and optical microscopy

Wings flattened on microscope slides were imaged with a Zeiss Axioscope microscope with a Zeiss AxioCam ERc 5s camera attached. For higher magnification images of the body surface and hair, we used a Nikon SMZ25 Stereomicroscope. Images were also taken with a Nikon D3X SLR camera and Leica Z16 APO lens. Wing dimensions and other body measurements were estimated using the built-in scale, ImageJ, or MATLAB, with the mean and standard deviation of several measurements reported herein. High speed photography was accomplished with a color Phantom V710 camera. Focus stacking for the images in Fig. 1a,c,d was accomplished in Adobe Photoshop. The histograms of many of the SLR and high-speed photography images were adjusted in Adobe Lightroom to correct the white balance and provide minor brightening.

### Scanning electron microscopy and environmental scanning electron microscopy

Body surfaces and hair structures of the drain fly were examined with scanning electron microscopy (SEM).
For the analysis, body parts (head, antennae, legs and wing) were dissected carefully from dead individuals, mounted on aluminum stubs using carbon tape, and then sputter-coated with a 4 nm Au layer. The images were obtained on Quanta 600 microscopes. Measurements were made from digital images using the ImageJ software. The wetting phenomena and behavior of condensed water droplets on the wing surface were observed in detail using a Quanta 600 SEM (Thermo Fisher Scientific) operated in Environmental SEM (ESEM) mode, with a built-in cooling stage and a gaseous secondary electron detector. The wing samples were attached to aluminum stubs using double-sided copper tape. We initiated water droplet formation on the wing surface by gradually increasing the water vapor pressure inside the SEM chamber from 600 Pa to 820 Pa at a stage temperature of $$2^{\circ }\hbox {C}$$. Images were captured at 2 s intervals at an accelerating voltage of 7 kV.

### Measurement of the contact angle

The advancing apparent contact angle $$\theta _a$$ of several liquids was measured on drain flies’ wings using the sessile drop method. Water ($$\sigma = 72$$ mN/m), 5 cSt silicone oil (from Clearco Products, $$\sigma = 20$$ mN/m), olive oil ($$\sigma = 36$$ mN/m), ethanol ($$\sigma = 22$$ mN/m), and PP1 (perfluoro-2-methylpentane, $$\hbox {C}_6\hbox {F}_{14}$$, from FLUTEX, $$\sigma = 11$$ mN/m) were used. Contact angles of solutions of water with SDS (sodium dodecyl sulfate) and water with Pril dish soap (by Henkel, contains anionic and amphoteric surfactants) were also measured. The measurement was performed as follows. A droplet of liquid was formed just above the drain fly and expanded until it came in contact with the fly. Images of the contact angle with water were taken with a Nikon D3X SLR camera with a Leica Z16 APO microscope lens.
Images of the contact angle for 5 cSt silicone oil, olive oil, PP1, ethanol, and water-surfactant mixtures were taken with a color Phantom V710 high-speed camera.

### Simulating water encounters and threats

To investigate the dynamic wetting behavior of the liquid and the flies’ response to various water threats, we contained each fly in a glass or acrylic tank ranging in size from $$3.3 \times 3.5 \times 3.5\;\hbox { cm}^3$$ to $$11.6 \times 11.6 \times 21.6\;\hbox { cm}^3$$, or in a Petri dish of 8.7 cm diameter and 1.5 cm height. After giving the fly sufficient time to acclimate to the surroundings, we confronted the fly with one of various water threats and filmed the encounter with a color Phantom V710 high-speed camera at up to 5000 frames per second. From these videos, we observed the flies’ responses and wetting properties, and measured droplet diameters and velocities. Multiple drain flies were used for the various tests and several repetitions of each of the tests were performed, as summarized in Table 1.

#### Experiments on fly interactions with droplets and mist

We studied how the flies interact with droplets in three different ways. First, we introduced single-droplet threats by inserting a syringe of liquid inside the tank above the fly and squeezing out a droplet that landed on or near the fly. Second, we shot a spray of droplets at the fly using a household spray bottle that formed a range of droplet diameters and velocities. Finally, to generate fog we used a Proton Ultrasonic household humidifier to make micrometer-sized droplets that flowed into the tank through tubing, filling the entire volume of the tank with a cloud.

#### Experiments on fly interactions with pools and waves

We also investigated how drain flies interact with a pool in three different ways. First, we filled the bottom of the tank with water. As the flies often land on the sides of the tank and sit for long periods of time, we lightly tapped the tank walls to induce them to fly.
Due to their erratic flying behavior they would often land, fall, or crash into the pool surface. Second, after placing a fly in a Petri dish, we inclined it and filled the lower section with water. When the fly landed on the dry ground, we laid the dish down flat, forming a wave directed at the fly. Finally, after submerging a fly in the pool we observed its behavior.

## References

1. McMahon, T. & Bonner, J. T. On Size and Life 1st edn. (Scientific American Books - W. H. Freeman & Co., San Francisco, 1983).
2. Holdgate, M. W. The wetting of insect cuticles by water. J. Exp. Biol. 32, 591–617 (1955).
3. Bush, J. W., Hu, D. L. & Prakash, M. The integument of water-walking arthropods: Form and function. In Insect Mechanics and Control, vol. 34 of Advances in Insect Physiology (eds. Casas, J. & Simpson, S.) 117–192. https://doi.org/10.1016/S0065-2806(07)34003-4 (Academic Press, Cambridge, 2007).
4. Byun, D. et al. Wetting characteristics of insect wing surfaces. J. Bionic Eng. 6, 63–70. https://doi.org/10.1016/S1672-6529(08)60092-X (2009).
5. Darmanin, T. & Guittard, F. Superhydrophobic and superoleophobic properties in nature. Mater. Today 18, 273–285. https://doi.org/10.1016/j.mattod.2015.01.001 (2015).
6. Shafrin, E. G. & Zisman, W. A. Upper Limits to the Contact Angles of Liquids on Solids Vol. 43, 145–157 (American Chemical Society, Washington, DC, 1964).
7. Quéré, D. Wetting and roughness. Annu. Rev. Mater. Res. 38, 71–99. https://doi.org/10.1146/annurev.matsci.38.060407.132434 (2008).
8. Wenzel, R. N. Resistance of solid surfaces to wetting by water. Ind. Eng. Chem. 28, 988–994 (1936).
9. Cassie, A. B. D. & Baxter, S. Wettability of porous surfaces. Trans. Faraday Soc. 40, 546–551 (1944).
10. Beament, J. W. L. The role of wax layers in the waterproofing of insect cuticle and egg-shell. Discuss. Faraday Soc. 3, 177–182 (1948).
11. Mahadik, G. A. et al.
Superhydrophobicity and size reduction enabled Halobates (insecta: Heteroptera, Gerridae) to colonize the open ocean. Sci. Rep. 10, 7785. https://doi.org/10.1038/s41598-020-64563-7 (2020). 12. Hayes, M. J., Levine, T. P. & Wilson, R. H. Identification of nanopillars on the cuticle of the aquatic larvae of the drone fly (Diptera: Syrphidae). J. Insect Sci. 16, 36. https://doi.org/10.1093/jisesa/iew019 (2016). 13. Bandara, C. D. et al. Bactericidal effects of natural nanotopography of dragonfly wing on Escherichia coli. ACS Appl. Mater. Interfaces 9, 6746–6760. https://doi.org/10.1021/acsami.6b13666 (2017). 14. Barthlott, W. & Neinhuis, C. Purity of the sacred lotus, or escape from contamination in biological surfaces. Planta 202, 1–8. https://doi.org/10.1007/s004250050096 (1997). 15. Feng, L. et al. Petal effect: A superhydrophobic state with high adhesive force. Langmuir 24, 4114–4119. https://doi.org/10.1021/la703821h (2008). 16. Gao, X. & Jiang, L. Water-repellent legs of water striders. Nature 432, 36–36. https://doi.org/10.1038/432036a (2004). 17. Hu, H.-M.S., Watson, G. S., Cribb, B. W. & Watson, J. A. Non-wetting wings and legs of the cranefly aided by fine structures of the cuticle. J. Exp. Biol. 214, 915–920. https://doi.org/10.1242/jeb.051128 (2011). 18. Watson, G. S., Cribb, B. W. & Watson, J. A. How micro/nanoarchitecture facilitates anti-wetting: An elegant hierarchical design on the termite wing. ACS Nano 4, 129–136. https://doi.org/10.1021/nn900869b (2010). 19. Mansfield, E. H., Sepangi, H. R. & Eastwood, E. A. Equilibrium and mutual attraction or repulsion of objects supported by surface tension. Philos. Trans. R. Soc. Math. Phys. Eng. Sci. 355, 869–919 (1997). 20. Keller, J. B. Surface tension force on a partly submerged body. Phys. Fluids 10, 3009–3010. https://doi.org/10.1063/1.869820 (1998). 21. Vella, D., Metcalfe, P. D. & Whittaker, R. J. Equilibrium conditions for the floating of multiple interfacial objects. J. Fluid Mech. 549, 215–224. 
https://doi.org/10.1017/S0022112005008013 (2006). 22. Bush, J. W. M. & Hu, D. L. Walking on water: Biolocomotion at the interface. Annu. Rev. Fluid Mech. 38, 339–369. https://doi.org/10.1146/annurev.fluid.38.050304.092157 (2006). 23. Suter, R. B. & Gruenwald, J. Predator avoidance on the water surface? Kinematics and efficacy of vertical jumping by Dolomedes (Araneae, Pisauridae). J. Arachnol. 28, 201–210 (2000). 24. Suter, R. B. Trichobothrial mediation of an aquatic escape response: Directional jumps by the fishing spider, Dolomedes triton, foil frog attack. J. Insect Sci. 3, 19. https://doi.org/10.1093/jis/3.1.19 (2003). 25. Suter, R. & Wildman, H. Locomotion on the water surface: Hydrodynamic constraints on rowing velocity require a gait change. J. Exp. Biol. 202, 2771–2785 (1999). 26. Hu, D. L. & Bush, J. W. M. The hydrodynamics of water-walking arthropods. J. Fluid Mech. 644, 5–33. https://doi.org/10.1017/S0022112009992205 (2010). 27. Suter, R., Rosenberg, O., Loeb, S., Wildman, H. & Long, J. Locomotion on the water surface: Propulsive mechanisms of the fisher spider. J. Exp. Biol. 200, 2523–2538 (1997). 28. Hu, D. L., Chan, B. & Bush, J. W. M. The hydrodynamics of water strider locomotion. Nature 424, 663–666. https://doi.org/10.1038/nature01793 (2003). 29. Denny, M. W. Paradox lost: Answers and questions about walking on water. J. Exp. Biol. 207, 1601–1606. https://doi.org/10.1242/jeb.00908 (2004). 30. Suter, R. B., Stratton, G. & Miller, P. R. Water surface locomotion by spiders: Distinct gaits in diverse families. J. Arachnol. 31, 428–432. https://doi.org/10.1636/m02-22 (2003). 31. Suter, R. B. Cheap transport for fishing spiders (Araneae, Pisauridae): The physics of sailing on the water surface. J. Arachnol. 27, 489–496 (1999). 32. Hu, D. L. & Bush, J. W. M. Meniscus-climbing insects. Nature 437, 733–736. https://doi.org/10.1038/nature03995 (2005). 33. Andersen, N. M. 
A comparative study of locomotion on the water surface in semi-aquatic bugs (Insects, Hemiptera, Gerromorpha). Vidensk Medd fra Dansk naturh Foren. 139, 337–396 (1976). 34. Andersen, N. M. The Semiaquatic Bugs (Hemiptera, Gerromorpha): Phylogeny, Adaptations, Biogeography and Classification (Klampenborg, Den.: Scand. Sci., 1982). 35. Betz, O. Performance and adaptive value of tarsal morphology in rove beetles of the genus Stenus (Coleoptera, Staphylinidae). J. Exp. Biol. 205, 1097–1113 (2002). 36. Piliè, R., Eadie, W., Mack, E., Rogers, C. & Kocmond, W. Project fog drops, Part I: Investigations of warm fog properties. Contractor report CR-2078, NASA (1972). 37. Watson, G. S. et al. A gecko skin micro/nano structure: A low adhesion, superhydrophobic, anti-wetting, self-cleaning, biocompatible, antibacterial surface. Acta Biomater. 21, 109–122. https://doi.org/10.1016/j.actbio.2015.03.007 (2015). 38. Voigt, C. C., Schneeberger, K., Voigt-Heucke, S. L. & Lewanzik, D. Rain increases the energy cost of bat flight. Biol. Lett. 7, 793–795. https://doi.org/10.1098/rsbl.2011.0313 (2011). 39. Dickerson, A. K., Liu, X., Zhu, T. & Hu, D. L. Fog spontaneously folds mosquito wings. Phys. Fluids 27, 021901. https://doi.org/10.1063/1.4908261 (2015). 40. Dickerson, A. K. & Hu, D. L. Mosquitoes actively remove drops deposited by fog and dew. Integr. Comp. Biol. 54, 1008–1013 (2014). 41. Dickerson, A. K., Shankles, P. G., Berry, B. E. & Hu, D. L. Fog and dense gas disrupt mosquito flight due to increased aerodynamic drag on halteres. J. Fluids Struct. 55, 451–462 (2015). 42. Dickerson, A. K., Shankles, P. G., Madhavan, N. M. & Hu, D. L. Mosquitoes survive raindrop collisions by virtue of their low mass. Proc. Nat. Acad. Sci. 109, 9822–9827. https://doi.org/10.1073/pnas.1205446109 (2012). 43. Dickerson, A. K., Shankles, P. G. & Hu, D. L. Raindrops push and splash flying insects. Phys. Fluids 26, 027104.
(2014). 44. Combes, S. A. & Dudley, R. Turbulence-driven instabilities limit insect flight performance. Proc. Nat. Acad. Sci. 106, 9105–9108 (2009). 45. Ristroph, L. et al. Discovering the flight autostabilizer of fruit flies by inducing aerial stumbles. Proc. Nat. Acad. Sci. 107, 4820–4824 (2010). 46. Kvifte, G. M. & Andersen, T. Moth flies (Diptera, Psychodidae) from Finnmark, northern Norway. Norwegian J. Entomol. 59, 108–119 (2012). 47. Curler, G. R. & Courtney, G. W. A revision of the world species of the genus Neotelmatoscopus Tonnoir (Diptera: Psychodidae). Syst. Entomol. 34, 63–92. https://doi.org/10.1111/j.1365-3113.2008.00439.x (2009). 48. Crisp, G. & Lloyd, L. The community of insects in a patch of woodland mud. Trans. R. Entomol. Soc. Lond. 105, 269–313. https://doi.org/10.1111/j.1365-2311.1954.tb00766.x (1954). 49. Sansone, C., Minzenmayer, R. & Drees, B. M. Drain flies. Tech. Rep. E-184, Texas A&M University System (2018). 50. Deparis, O., Mouchet, S. R., Dellieu, L., Colomer, J. F. & Sarrazin, M. Nanostructured surfaces: bioinspiration for transparency, coloration and wettability. Mater. Today Proc. 1, 122–129 (2014). 51. Dellieu, L., Sarrazin, M., Simonis, P., Deparis, O. & Vigneron, J. P. A two-in-one superhydrophobic and anti-reflective nanodevice in the grey cicada Cicada orni (Hemiptera). J. Appl. Phys. 116, 024701. https://doi.org/10.1063/1.4889849 (2014). 52. Lin, S.-Y., Lin, Y.-Y., Chen, E.-M., Hsu, C.-T. & Kwan, C.-C. A study of the equilibrium surface tension and the critical micelle concentration of mixed surfactant solutions. Langmuir 15, 4370–4376. https://doi.org/10.1021/la981149f (1999). 53. Pan, Z. et al. The upside-down water collection system of Syntrichia caninervis. Nat. Plants 2, 16076. https://doi.org/10.1038/nplants.2016.76 (2016). 54. Herminghaus, S., Brinkmann, M. & Seemann, R.
Wetting and dewetting of complex surface geometries. Annu. Rev. Mater. Res. 38, 101–121. https://doi.org/10.1146/annurev.matsci.38.060407.130335 (2008). 55. Jia, L.-P. & Liang, A.-P. An apposition compound eye adapted for nocturnal vision in the moth midge Clogmia albipunctata (Williston) (Diptera: Psychodidae). J. Insect Physiol. 98, 188–198. https://doi.org/10.1016/j.jinsphys.2017.01.006 (2017). 56. Gaino, E. & Rebora, M. Larval antennal sensilla in water-living insects. Microsc. Res. Technol. 47, 440–457 (1999). 57. Thorpe, W. H. & Crisp, D. J. Studies on plastron respiration. Part 2. J. Exp. Biol. 24, 270–303 (1947). 58. Thorpe, W. H. Plastron respiration in aquatic insects. Biol. Rev. 25, 344–390. https://doi.org/10.1111/j.1469-185X.1950.tb01590.x (1950). 59. Flynn, M. R. & Bush, J. W. M. Underwater breathing: The mechanics of plastron respiration. J. Fluid Mech. 608, 275–296. https://doi.org/10.1017/S0022112008002048 (2008). 60. de Ruiter, L., Wolvekamp, H. P., van Tooren, A. J. & Vlasblom, A. Experiments on the efficiency of the physical gill (Hydrous piceus L., Naucoris cimicoides L., and Notonecta glauca L.). Acta Physiol. Pharmacol. Neerl. 2, 180 (1951). 61. Gittelman, S. H. Physical gill efficiency and winter dormancy in the pigmy backswimmer, Neoplea striola (Hemiptera: Pleidae). Ann. Entomol. Soc. Am. 68, 1011–1017. https://doi.org/10.1093/aesa/68.6.1011 (1975).

## Acknowledgements

This work was supported by King Abdullah University of Science and Technology (KAUST) under Grant URF/1/3727-01-01.

## Author information

### Contributions

S.T.T. conceived the experiments. N.B.S. and G.A.M. conducted the experiments. N.B.S. and G.A.M. analysed the results and wrote the manuscript. All authors reviewed the manuscript.

### Corresponding author

Correspondence to Nathan B. Speirs.

## Ethics declarations

### Competing interests

The authors declare no competing interests.
### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary information

Supplementary Videos 1–10.

## Rights and permissions

Speirs, N.B., Mahadik, G.A. & Thoroddsen, S.T. How drain flies manage to almost never get washed away. Sci Rep 10, 17829 (2020). https://doi.org/10.1038/s41598-020-73583-2
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.18/share/doc/Macaulay2/Macaulay2Doc/html/___Coherent__Sheaf_sp%5E_sp__Z__Z.html
CoherentSheaf ^ ZZ -- direct sum

Synopsis

• Operator: ^
• Usage: F^n
• Inputs: F, a coherent sheaf; n, an integer
• Outputs: F^n, the direct sum of n copies of F

Description

``````
i1 : R = QQ[a..d]/(a*d-b*c)

o1 = R

o1 : QuotientRing

i2 : Q = Proj R

o2 = Q

o2 : ProjectiveVariety

i3 : OO_Q^5

       5
o3 = OO
       Q

o3 : coherent sheaf on Q, free

i4 : IL = sheaf module ideal(a,b)

o4 = image | a b |

                                         1
o4 : coherent sheaf on Q, subsheaf of OO
                                         Q

i5 : IL^3

o5 = image | a b 0 0 0 0 |
           | 0 0 a b 0 0 |
           | 0 0 0 0 a b |

                                         3
o5 : coherent sheaf on Q, subsheaf of OO
                                         Q
``````

See also

• Proj -- make a projective variety
• sheaf -- make a coherent sheaf
https://www.darwinproject.ac.uk/letter/?docId=letters/DCP-LETT-7066.xml;query=darwin;brand=default;hit.rank=5
# From E. A. Darwin   19 [December 1870]1 19th Dear Charles I have received the enclosed which I suppose is all right tho’ I never read the resolution referred to. I see I have 2 packets of Certificates both £12-10-0 shares and an awful heap they are. Please to return Secy. letter & let me know if it is all right as far as you know.2 What an escape George has had and it is very well if the instruments have escaped uninjured3 ED ## Footnotes The month and year are established by the reference to George Howard Darwin’s escape (see n. 3, below). The Maryport and Carlisle Railway had passed a resolution to convert their 4 per cent and 4$\frac{1}{2}$ per cent shares into stock (The Times, 4 November 1870, p. 7). The ‘two packets of certificates’ were part of Emma Darwin’s trust (Erasmus was a trustee); see letter from E. A. Darwin, 21 [December 1870], and CD’s Investment book (Down House MS), p. 124. The secretary was John Addison (Bradshaw’s Railway Manual, Shareholders’ Guide and Official Directory 1870). George was a member of the expedition to observe the solar eclipse from Sicily (The Times, 9 December 1870, p. 7). Their ship, Psyche, struck a rock off Catania, Italy, on 15 December 1870; the instruments and the expedition party were saved, but their personal effects lost (see The Times, 19 December 1870, p. 5, and 28 December 1870, p. 3). There is a letter from Henrietta Emma Darwin to George, written shortly after she received news of the wreck, in DAR 245: 297. ## Summary Has received a letter, and two packets of securities. ## Letter details Letter no. DCP-LETT-7066 From Erasmus Alvey Darwin To Charles Robert Darwin Sent from unstated Source of text DAR 105: B69–70 Physical description 3pp
https://shanzi.gitbooks.io/algorithm-notes/content/problem_solutions/new_years_eye.html
# New Years Eye

This is an interesting problem. Instead of calculating the amount of wine a bottle holds at the end, we calculate the total amount of wine that flows over each bottle and distribute the overflow of the current bottle at this level to the next level. At the end, if a bottle receives a flow greater than or equal to 250.0 ml, the amount it holds is 250.0 ml; otherwise it holds the amount of the flow itself.

```java
import java.io.*;
import java.util.*;

public class NewYearsEye {

    private static double solve(int B, int L, int N) {
        double[][] dp = new double[L][L];
        double[][] last = new double[L][L];
        // All B bottles (750.0 ml each) are poured into the top position.
        last[0][0] = 750.0 * B;
        for (int l = 1; l < L; l++) {
            for (int i = 0; i < l; i++) {
                for (int j = 0; j <= i; j++) {
                    if (last[i][j] > 250.0) {
                        // Everything above 250.0 ml overflows; a third stays in
                        // the same position and a third goes to each of the two
                        // positions below on the next level.
                        double share = (last[i][j] - 250.0) / 3.0;
                        dp[i][j] += share;
                        dp[i + 1][j] += share;
                        dp[i + 1][j + 1] += share;
                    }
                }
            }
            // Swap the buffers and zero out the scratch array for the next pass.
            double[][] tmp = last;
            last = dp;
            dp = tmp;
            for (int i = 0; i <= l; i++) {
                for (int j = 0; j <= i; j++) {
                    dp[i][j] = 0.0;
                }
            }
        }
        // Positions are numbered row by row; find the N-th one.
        int id = 0;
        for (int i = 0; i < L; i++) {
            for (int j = 0; j <= i; j++) {
                if ((++id) == N) {
                    return Math.min(250.0, last[i][j]);
                }
            }
        }
        return 0.0;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int T = in.nextInt();
        for (int t = 0; t < T; t++) {
            int B = in.nextInt(), L = in.nextInt(), N = in.nextInt();
            System.out.printf("Case #%d: %.7f\n", t + 1, solve(B, L, N));
        }
    }
}
```
https://wumbo.net/symbols/left-parenthesis/
# Left Parenthesis Symbol (

| Format | Data |
| --- | --- |
| Symbol | ( |
| Code Point | U+0028 |
| TeX | ( |
| SVG | |

## Usage

The left parenthesis symbol is used in math to group expressions together, group function arguments, and to implicitly represent the multiplication operator. Typically, the symbol appears together with the right parenthesis symbol and is used in an expression like this:

$$5 \times (2 + 3)$$

By including the parentheses in the expression above, the order of operations changes so that the addition is performed before the multiplication.

## Related

The right parenthesis is used to mark the end of a group. The left square bracket symbol is used in math to delineate the start of a matrix. The right square bracket symbol is used in math to delineate the end of a matrix. Left curly brace symbol. Right curly brace.
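The precedence change described above is easy to verify in any programming language; here is a quick sketch in Python (chosen purely as an executable example, not something from the page itself):

```python
# Without parentheses, multiplication binds tighter than addition.
without_parens = 2 + 3 * 4      # evaluated as 2 + (3 * 4)

# Wrapping the addition in parentheses forces it to run first.
with_parens = (2 + 3) * 4       # evaluated as 5 * 4

print(without_parens)  # → 14
print(with_parens)     # → 20
```

The same grouping role applies to function arguments: in `max(1, 2 + 3)` the parentheses delimit the argument list, and the inner sum is evaluated to `5` before the call.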
https://proxies-free.com/tag/obtain/
pseudocode – Use magic-pivot as a black box to obtain a deterministic quick-sort algorithm with worst-case running time of O(n log n)

Suppose we have an array A(1:n) of n distinct numbers. For any element A(i), we define the rank of A(i), denoted by rank(A(i)), as the number of elements in A that are strictly smaller than A(i) plus one; so rank(A(i)) is also the correct position of A(i) in the sorted order of A. Suppose we have an algorithm magic-pivot that, given any array B(1:m) (for any m > 0), returns an element B(i) such that m/3 ≤ rank(B(i)) ≤ 2m/3, and has worst-case runtime O(n). Example: if B = (1,7,6,2,13,3,5,11,8), then magic-pivot(B) will return one arbitrary number among {3,5,6,7} (since the sorted order of B is (1,2,3,5,6,7,8,11,13)).

r – How to obtain a plot from dataframe

Good afternoon ! Assume we have the following dataframe :

``````
[1,] 1 0.2943036 0.5019925 0.6485579 0.7519886 0.7719700 0.8390800
[2,] 2 0.2943036 0.5019925 0.6485579 0.7519886 0.8002251 0.8390800
[3,] 3 0.2943036 0.5019925 0.6485579 0.7519886 0.7719700 0.8163203
[4,] 4 0.2943036 0.5019925 0.6485579 0.7519886 0.8002251 0.8390800
[5,] 5 0.2943036 0.5019925 0.6485579 0.7519886 0.8002251 0.8390800
[6,] 6 0.2943036 0.5019925 0.6485579 0.7169111 0.7719700 0.8163203
[1,] 0.8423178 0.8593898
[2,] 0.8703779 0.8955885
[3,] 0.8703779 0.8593898
[4,] 0.8703779 0.8955885
[5,] 0.8703779 0.8955885
[6,] 0.8423178 0.8593898

res_final=structure(c(1, 2, 3, 4, 5, 6, 0.294303552937154, 0.294303552937154, 0.294303552937154, 0.294303552937154, 0.294303552937154, 0.294303552937154, 0.501992524602876, 0.501992524602876, 0.501992524602876, 0.501992524602876, 0.501992524602876, 0.501992524602876, 0.648557894001512, 0.648557894001512, 0.648557894001512, 0.648557894001512, 0.648557894001512, 0.648557894001512, 0.751988554448582, 0.751988554448582, 0.751988554448582, 0.751988554448582, 0.751988554448582, 0.71691105972394, 0.771969986695426, 0.800225140644398, 0.771969986695426, 0.800225140644398, 0.800225140644398,
0.771969986695426, 0.839080029787269, 0.839080029787269, 0.816320316445505, 0.839080029787269, 0.839080029787269, 0.816320316445505, 0.842317781397926, 0.870377899917965, 0.870377899917965, 0.870377899917965, 0.870377899917965, 0.842317781397926, 0.859389755570637, 0.895588541263924, 0.859389755570637, 0.895588541263924, 0.895588541263924, 0.859389755570637), .Dim = c(6L, 9L), .Dimnames = list( NULL, c("res_final", "", "", "", "", "", "", "", "")))
``````

I'm wanting to plot each row as a continuous curve, I tried:

``````
apply(res_final, 1, function(x) plot(res_final[x, ], xlab = "Number of sensors ", ylab = "Cumulative detection probability", main = "Evolution of cumulative probabilities for 6-targets based on the Number of sensors"))
``````

However, this doesn't allow me to plot all of the 6 continuous curves within the same plot. I'm also looking to distinguish between the curves using different colors. I hope my question is clear. Thank you a lot for your help !

Is it possible to obtain the historical exchange rate time series of trading pair such as BTC-USD on exchanges that do not provide this trading pair?

On exchanges such as Poloniex and Binance, BTC-USD is not offered as trading pair. However, one can buy BTC with USD on some of such platforms (e.g. one can buy BTC with USD on Poloniex and Binance). Are there some ways of obtaining the historical exchange rate time series between BTC and USD on such exchanges?

india – Is there a simple procedure or what are the procedures to obtain a permit to visit Lakshadweep islands?

I wanted to know why Indian citizens need a permit to visit Lakshadweep, and an answer on Quora says:

95% of whom are Scheduled Tribes, the Administrator has, with previous approval of Central Govt. and in exercise of powers vested under the Laccadive, Minicoy and Amindivi Islands (Restriction on Entry and Residence) Rules, 1967.
As per these rules, every person, who is not a native of these islands, shall have to obtain a permit in the Prescribed Form from the Competent authority, for entering into and residing in these islands”. Lakshadweep also has a full-fledged naval base.

Cited from: Wayback Machine, 2015 archive of the department of revenue and the naval base establishment.

Since the permit requirement has existed since 2004, and the naval base came up in 2012, it left me wondering if things could have changed as of 2019, that such strict restrictions and permits would not be necessary for a simple vacation to relax, snorkel, kayak and scuba dive? Some websites say that a police clearance is required from near the residence of the visitor and then another clearance is required from a government department in Kochi. Some people write on their blogs that it’s not that much of a hassle to visit. Some recommend the government tour packages, some recommend the private packages, but there’s a lack of a clear, step-by-step explanation on how to get the permit and book a vacation to the islands. Could anyone help with providing these steps? For example:

• Where are the offices located, from where I can obtain the permits? What precautions should I take and what insider information do I need to know to avoid hassles, scams and delays? Both for people who know the local language and for those who don’t.
• Is a police verification from a police station close to my house required (if Indian citizen)? What are the procedures followed for this? Hassles, scams, delays, precautions, necessary documents?
• If a resort or agent can arrange for the permits, how do I know which ones can be trusted and where are they located? Precautions, scams, delays, reliability and extra charges?
• If I book one of the government’s vacation packages like “Samudram”, do they arrange for the permits and police verification or am I expected to do it?
• Any other info that would be helpful for someone who would prefer to package offers?

A wish I hope could be conveyed to organizers of Lakshadweep tours: To have a simple, reliable online procedure to get the permits and tickets booked, without having to visit any offices and without having to go through any agents. To be able to decide the duration of the stay and activities without having to follow an itinerary decided by someone else.

Update: A person I know went to Lakshadweep for official work. He asked around a lot, but even he couldn’t get a clear step-by-step procedure of how to go there on vacation. Somebody there told him that tourism is restricted so that they can preserve the local population’s simple, humble culture and also preserve the pristine natural beauty of the place (which is a sensible thing to do). Also, apparently since the islands are small, they cannot support the load and resources required for a large number of tourists. Perhaps the Lakshadweep admin office or the ticket counter at Kochi could provide more info about procedures, if someone visits and asks for details.

Why would one obtain an SSL certificate for SEO purposes?

copyright – How do you obtain official brand logos for design prototypes?

I need to use company logos as a part of my designs. These companies are institutions that are stakeholders for my client. Some sites have SVGs you can grab from the website but how does one go about getting those that are not available on the site in a useable format like a PNG or SVG? Should I ask my client to ask these companies for design assets? Does my client need permission for these logos as well?

How to obtain an analytical solution of the following system of Ordinary Differential Equations?
The system of differential equations is of the form:

$$v(y) + C_{1} v(y)^3 = C_2 \dfrac{du(y)}{dy}$$

$$\dfrac{dv(y)}{dy} + C_3 u(y) = C_4$$

The boundary and symmetry conditions are:

$$v(y=0)=0$$

$$u(y=\pm H)=0$$

fourier analysis – How to analyse raw data to obtain frequency and phase from FFT?

I have some data from an experiment which has multiple “nodes” (dominant sine wave components) and some noise. I want to use FFT to find the amplitude and phase of these dominant components. Currently, by editing the output of the Fourier function, I have managed to make a function to get the dominant frequencies. I was wondering if there is a way to find the phase as well? (As well as sort it to only see the phase for the dominant components.) This is the code for the FFT function:

``````
FFT[list_, tstep_, \[Omega]max_, n_] :=
 Module[{FTlist = Abs[Fourier[list]], totaltime = Length[list] tstep,
   \[CapitalDelta]\[Omega], completeFFTlist, plot, peaks, \[CapitalOmega]peak},
  \[CapitalDelta]\[Omega] = N[(2 \[Pi])/totaltime];
  completeFFTlist = Delete[Flatten[
     Last[Reap[
       For[i = 1, (2 \[Pi] (i - 1))/totaltime > \[Omega]max \[Nor] i > Length[FTlist], i++,
        Sow[{N[(2 \[Pi] (i - 1))/totaltime], FTlist[[i]]/Max[FTlist]}]]]], 1], 1];
  plot = ListLinePlot[completeFFTlist, PlotRange -> {{0, \[Omega]max}, All},
    Frame -> True, PlotStyle -> {SBlue, Thickness[0.005]},
    "Power (Arbitrary units)"}, LabelStyle -> Directive[50], PlotTheme -> "Classic"];
  peaks = Sort[FindPeaks[completeFFTlist[[All, 2]]], #1[[2]] > #2[[2]] &];
  peaks = If[peaks[[1, 1]] == 1, Delete[peaks, 1], peaks];
  \[CapitalOmega]peak = Table[completeFFTlist[[peaks[[i, 1]], 1]], {i, 1, n}];
  Return[{\[CapitalOmega]peak, \[CapitalDelta]\[Omega], completeFFTlist, plot}]];
``````

This just makes a nice plot and tells me the n frequencies with the highest intensity/amplitude.

logging – Obtain log of when audio starts and stops

I want to get a list of timestamped events for when the audio player starts and stops.
By “audio player” I mean the one that is common for all apps that you see when you pull down the notification bar in Android 11. So when audio that is not system notifications starts or stops. Any way of doing it would be okay. Either an app or through adb and system logs, or something else. As far as I can tell, Android destroys logs from before the last reboot, but I might be wrong about this. If I can access old logs, that would be a solution.

Can I obtain my bitcoin wallet files back from my HDD from the old OS?

So I guess now it’s a good time again to give my lost wallet files another attempt. I’m an idiot for doing such a thing but then I heard perhaps I can unformat my HDD and retrieve my files back. I definitely have an idea of the passwords I used and that won’t be the issue. I was buying BTC around 2014-2015. I then decided to install a new mobo and I believe I wasn’t able to unless I reinstalled Windows. I used the Windows USB to re-install everything but I’m not sure. It was a very long time ago. I also don’t know if it was Bitcoin Core or MultiBit app that I was using on the desktop. Either way, I believe the files should be in the same place i.e. APP DATA > ROAMING > etc etc. Is it possible to use a program like EaseUs to recover these files? Is anybody willing to help me figure out if the files are completely gone forever or if they are still hidden under the old partition (which hopefully isn’t overwritten). Would it be perhaps under the Windows.old folder? (I’m not sure if I have that folder, I need to double check back at work). I was using WINDOWS 10 and nothing else I think. If anyone can help me retrieve, I will personally send 0.3 BTC to you from that wallet. 🙂
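Circling back to the Fourier-analysis question above: the magnitude spectrum alone discards phase, but the complex FFT keeps it, and the phase can be read off at the dominant bins. A minimal NumPy sketch (function names and the test signal are my own, not taken from the question's Mathematica code):

```python
import numpy as np

def dominant_components(signal, tstep, n=1):
    """Return (angular_freq, amplitude, phase) of the n strongest bins."""
    m = len(signal)
    spec = np.fft.rfft(signal)                        # complex one-sided spectrum
    omega = 2 * np.pi * np.fft.rfftfreq(m, d=tstep)   # angular frequency per bin
    mags = np.abs(spec)
    mags[0] = 0.0                                     # ignore the DC component
    idx = np.argsort(mags)[::-1][:n]                  # n largest peaks
    amplitude = 2.0 * np.abs(spec[idx]) / m           # rescale to cosine amplitude
    phase = np.angle(spec[idx])                       # phase at each peak
    return omega[idx], amplitude, phase

# Example: 3*cos(2*pi*5*t + 0.7), sampled for 1 s at 1 kHz.
t = np.arange(0, 1.0, 0.001)
w, a, p = dominant_components(3 * np.cos(2 * np.pi * 5 * t + 0.7), 0.001)
# w[0] ≈ 2*pi*5, a[0] ≈ 3, p[0] ≈ 0.7
```

Sorting the bins by magnitude before reading the phase mirrors what the posted `FFT` routine already does for frequencies; the only change is keeping the complex spectrum around instead of taking `Abs` immediately.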
https://zbmath.org/?q=an:1265.08002&format=complete
# zbMATH — the first resource for mathematics

Abelian differential modes are quasi-affine. (English) Zbl 1265.08002

The subject of the paper is differential modes, i.e., modes with a single $$n$$-ary operation, possessing a congruence such that all its blocks and the factor are left projection algebras. The main result says that abelian differential modes are quasi-affine, hence embed into a reduct of a module. The proof is based on the axiomatization of quasi-affine algebras developed in [M. M. Stronkowski and the author, Proc. Am. Math. Soc. 138, No. 8, 2687–2699 (2010; Zbl 1206.08002)].

##### MSC:

08A05 Structure theory of algebraic structures
15A78 Other algebras built from modules
https://codereview.stackexchange.com/questions/5115/using-a-function-to-emulate-f-match-in-c
# Using a Function to emulate F# Match in C#

So this was inspired by this question, but doesn't actually answer it. What do you think of:

``````public class Case<TRes>
{
    public Case(bool condition, Func<TRes> result)
    {
        Condition = condition;
        Result = result;
    }

    public bool Condition { get; private set; }
    public Func<TRes> Result { get; private set; }

    public static Case<TRes> Default(Func<TRes> value)
    {
        return new Case<TRes>(true, value);
    }
}

public static class Logic
{
    public static T Match<T>(params Case<T>[] cases)
    {
        return cases.First(c => c.Condition).Result();
    }
}
``````

Which you could invoke:

``````int i = Logic.Match(
    new Case<int>(IsCloudy(skyImage), () => {
        ChangeWeatherIcon(Icons.Clouds);
        return 13;
    }),
    new Case<int>(a < b, () => a),
    Case<int>.Default(() => { throw new Exception(); })
);
``````

Its purpose isn't quite to emulate F#'s match (because F#'s match doesn't actually take conditions; it is a structural/pattern match). So really it's kind of like the ternary operator meets a non-ordinal Select statement. Anyone got any ideas on how to make it work cleaner? Maybe a way to get rid of the news and the brackets.

• If we swap Func<T> for T, we get rid of the brackets etc., at the expense of no longer being able to specify things to be done (so it would be more like the ternary operator than an If). – Lyndon White Oct 2 '11 at 14:04
• You might be interested in reading this blog post, which has been doing something similar. It uses expressions and analyzes them to perform the matching. It's more or less the approach I had in mind. Take a look. – Jeff Mercado Oct 3 '11 at 18:46

To be honest, it's pretty much impossible to do elegant pattern matching in a language that doesn't support it (e.g., C#).
The problem with what you've written there is that it's more complicated than the equivalent if-then-else chain:

``````if (IsCloudy(skyImage)) {
    ChangeWeatherIcon(Icons.Clouds);
    return 13;
}
if (a < b) {
    throw new Exception();
}
``````

Now, I would dearly love to see some later version of C# adopt algebraic data types (a la F#) with pattern matching, but until then I very much doubt you'll be able to do better than if-then-else chains.

• I'm not sure if you realise, but your if/else chain is not (at all) equivalent. Equivalent is: int i; if (IsCloudy(skyImage)) { ChangeWeatherIcon(Icons.Clouds); i = 13; } else if (a < b) i = a; else throw new Exception(); It would be a lot nicer spread over multiple lines. – Lyndon White Oct 3 '11 at 11:15
• You're right, but - meh - that's beside my point, which is that in attempting to do something more elegant than if-then-elses, you've made your code harder to read. – Rafe Oct 3 '11 at 23:12

I generally agree with Rafe: it's not worth trying to do this in C# for the general case. Using expressions, etc., simply doesn't cut it; IMHO pattern matching is something that must be built into the language (or use a malleable language like Lisp). The main point of pattern matching is that it's exhaustive (or at least you get a warning if it's not), and therefore total. If you can't guarantee that, then you don't gain anything. However, I do think it's fruitful to use pattern matching for specific cases. For example, in FSharpx we have pattern matching for C# on options, lists and choices (i.e. anonymous discriminated unions).
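For comparison, the first-match-wins semantics of the Case/Match design above can be sketched in a few lines of Python. This is only an illustrative analogue of the idea (condition, thunk) pairs scanned in order, not the author's C#:

```python
# Each case is a (condition, thunk) pair, mirroring Case<TRes>/Logic.Match:
# the first case whose condition is true supplies the result lazily.
def match(*cases):
    """Return the result of the first case whose condition is True."""
    for condition, result in cases:
        if condition:
            return result()
    raise ValueError("no case matched")  # analogue of First() throwing

a, b = 3, 5
i = match(
    (False, lambda: 13),   # e.g. IsCloudy(skyImage) came back False
    (a < b, lambda: a),    # this case fires
    (True,  lambda: 0),    # stand-in for Case.Default
)
print(i)  # -> 3
```

Wrapping results in thunks (rather than plain values) preserves the laziness of the C# version: side effects and exceptions in later cases never run.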
http://mathhelpforum.com/algebra/28163-polynomial-f-x-print.html
# polynomial f(x) • Feb 13th 2008, 06:03 AM ihmth polynomial f(x) Need help pls a.) Show that for any polynomial f(x) there exists a polynomial g(x) such that f(x) = x*g(x) + c, where c is a constant. b.) Use the result in letter a.) and the remainder theorem to prove the factor theorem. --tnx • Feb 13th 2008, 06:13 AM ThePerfectHacker Quote: Originally Posted by ihmth Need help pls a.) Show that for any polynomial f(x) there exists a polynomial g(x) such that f(x) = x*g(x) + c, where c is a constant. b.) Use the result in letter a.) and the remainder theorem to prove the factor theorem. Let $f(x) = a_nx^n+...+a_1x+a_0$ then $f(x) = xg(x)+a_0$ where $g(x) = a_nx^{n-1}+...+a_1$.
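The thread answers part (a) but leaves part (b) open. One standard route from (a) to the factor theorem (my sketch, not from the thread): shift the variable so that (a) applies at the point of interest.

```latex
% Let h(x) = f(x + a). By part (a), h(x) = x\,g(x) + c with
% c = h(0) = f(a). Undoing the shift (x \mapsto x - a):
f(x) = (x-a)\,g(x-a) + f(a)
% By the remainder theorem this remainder is f(a), so
% (x - a) \mid f(x) \iff f(a) = 0,
% which is the factor theorem.
```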
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/11%3A_Liquids_and_Intermolecular_Forces/11.2%3A_Intermolecular_Forces
# 11.2: Intermolecular Forces

Skills to Develop

• To describe the intermolecular forces in liquids.

The properties of liquids are intermediate between those of gases and solids, but are more similar to solids. In contrast to intramolecular forces, such as the covalent bonds that hold atoms together in molecules and polyatomic ions, intermolecular forces hold molecules together in a liquid or solid. Intermolecular forces are generally much weaker than covalent bonds. For example, it requires 927 kJ to overcome the intramolecular forces and break both O–H bonds in 1 mol of water, but it takes only about 41 kJ to overcome the intermolecular attractions and convert 1 mol of liquid water to water vapor at 100°C. (Despite this seemingly low value, the intermolecular forces in liquid water are among the strongest such forces known!) Given the large difference in the strengths of intra- and intermolecular forces, changes between the solid, liquid, and gaseous states almost invariably occur for molecular substances without breaking covalent bonds.

Intermolecular forces determine bulk properties such as the melting points of solids and the boiling points of liquids. Liquids boil when the molecules have enough thermal energy to overcome the intermolecular attractive forces that hold them together, thereby forming bubbles of vapor within the liquid. Similarly, solids melt when the molecules acquire enough thermal energy to overcome the intermolecular forces that lock them into place in the solid.

Intermolecular forces are electrostatic in nature; that is, they arise from the interaction between positively and negatively charged species. Like covalent and ionic bonds, intermolecular interactions are the sum of both attractive and repulsive components.
Because electrostatic interactions fall off rapidly with increasing distance between molecules, intermolecular interactions are most important for solids and liquids, where the molecules are close together. These interactions become important for gases only at very high pressures, where they are responsible for the observed deviations from the ideal gas law.

In this section, we explicitly consider three kinds of intermolecular interactions: dipole–dipole interactions, London dispersion forces, and hydrogen bonds. The first two are often described collectively as van der Waals forces. There are two additional types of electrostatic interaction that you are already familiar with: the ion–ion interactions that are responsible for ionic bonding and the ion–dipole interactions that occur when ionic substances dissolve in a polar substance such as water.

### Dipole–Dipole Interactions

Polar covalent bonds behave as if the bonded atoms have localized fractional charges that are equal but opposite (i.e., the two bonded atoms generate a dipole). If the structure of a molecule is such that the individual bond dipoles do not cancel one another, then the molecule has a net dipole moment. Molecules with net dipole moments tend to align themselves so that the positive end of one dipole is near the negative end of another and vice versa, as shown in Figure $$\PageIndex{1a}$$.

Figure $$\PageIndex{1}$$: Attractive and Repulsive Dipole–Dipole Interactions. (a and b) Molecular orientations in which the positive end of one dipole (δ+) is near the negative end of another (δ−) (and vice versa) produce attractive interactions. (c and d) Molecular orientations that juxtapose the positive or negative ends of the dipoles on adjacent molecules produce repulsive interactions.

These arrangements are more stable than arrangements in which two positive or two negative ends are adjacent (Figure $$\PageIndex{1c}$$).
Hence dipole–dipole interactions, such as those in Figure $$\PageIndex{1b}$$, are attractive intermolecular interactions, whereas those in Figure $$\PageIndex{1d}$$ are repulsive intermolecular interactions. Because molecules in a liquid move freely and continuously, molecules always experience both attractive and repulsive dipole–dipole interactions simultaneously, as shown in Figure $$\PageIndex{2}$$. On average, however, the attractive interactions dominate.

Figure $$\PageIndex{2}$$: Both Attractive and Repulsive Dipole–Dipole Interactions Occur in a Liquid Sample with Many Molecules

Because each end of a dipole possesses only a fraction of the charge of an electron, dipole–dipole interactions are substantially weaker than the interactions between two ions, each of which has a charge of at least ±1, or between a dipole and an ion, in which one of the species has at least a full positive or negative charge. In addition, the attractive interaction between dipoles falls off much more rapidly with increasing distance than do the ion–ion interactions. Recall that the attractive energy between two ions is proportional to 1/r, where r is the distance between the ions. Doubling the distance (r → 2r) decreases the attractive energy by one-half. In contrast, the energy of the interaction of two dipoles is proportional to 1/r³, so doubling the distance between the dipoles decreases the strength of the interaction by 2³, or 8-fold. Thus a substance such as $$\ce{HCl}$$, which is partially held together by dipole–dipole interactions, is a gas at room temperature and 1 atm pressure, whereas $$\ce{NaCl}$$, which is held together by interionic interactions, is a high-melting-point solid. Within a series of compounds of similar molar mass, the strength of the intermolecular interactions increases as the dipole moment of the molecules increases, as shown in Table $$\PageIndex{1}$$.
Table $$\PageIndex{1}$$: Relationships between the Dipole Moment and the Boiling Point for Organic Compounds of Similar Molar Mass

| Compound | Molar Mass (g/mol) | Dipole Moment (D) | Boiling Point (K) |
| --- | --- | --- | --- |
| C3H6 (cyclopropane) | 42 | 0 | 240 |
| CH3OCH3 (dimethyl ether) | 46 | 1.30 | 248 |
| CH3CN (acetonitrile) | 41 | 3.9 | 355 |

The attractive energy between two ions is proportional to 1/r, whereas the attractive energy between two dipoles is proportional to 1/r³.

Example $$\PageIndex{1}$$

Arrange ethyl methyl ether (CH3OCH2CH3), 2-methylpropane [isobutane, (CH3)2CHCH3], and acetone (CH3COCH3) in order of increasing boiling points. Their structures are as follows:

Given: compounds

Asked for: order of increasing boiling points

Strategy: Compare the molar masses and the polarities of the compounds. Compounds with higher molar masses and that are polar will have the highest boiling points.

Solution: The three compounds have essentially the same molar mass (58–60 g/mol), so we must look at differences in polarity to predict the strength of the intermolecular dipole–dipole interactions and thus the boiling points of the compounds. The first compound, 2-methylpropane, contains only C–H bonds, which are not very polar because C and H have similar electronegativities. It should therefore have a very small (but nonzero) dipole moment and a very low boiling point. Ethyl methyl ether has a structure similar to H2O; it contains two polar C–O single bonds oriented at about a 109° angle to each other, in addition to relatively nonpolar C–H bonds. As a result, the C–O bond dipoles partially reinforce one another and generate a significant dipole moment that should give a moderately high boiling point. Acetone contains a polar C=O double bond oriented at about 120° to two methyl groups with nonpolar C–H bonds. The C–O bond dipole therefore corresponds to the molecular dipole, which should result in both a rather large dipole moment and a high boiling point.
Thus we predict the following order of boiling points: 2-methylpropane < ethyl methyl ether < acetone. This result is in good agreement with the actual data: 2-methylpropane, boiling point = −11.7°C, and the dipole moment (μ) = 0.13 D; methyl ethyl ether, boiling point = 7.4°C and μ = 1.17 D; acetone, boiling point = 56.1°C and μ = 2.88 D. Exercise $$\PageIndex{1}$$ Arrange carbon tetrafluoride (CF4), ethyl methyl sulfide (CH3SC2H5), dimethyl sulfoxide [(CH3)2S=O], and 2-methylbutane [isopentane, (CH3)2CHCH2CH3] in order of decreasing boiling points. dimethyl sulfoxide (boiling point = 189.9°C) > ethyl methyl sulfide (boiling point = 67°C) > 2-methylbutane (boiling point = 27.8°C) > carbon tetrafluoride (boiling point = −128°C) ### London Dispersion Forces Thus far we have considered only interactions between polar molecules, but other factors must be considered to explain why many nonpolar molecules, such as bromine, benzene, and hexane, are liquids at room temperature, and others, such as iodine and naphthalene, are solids. Even the noble gases can be liquefied or solidified at low temperatures, high pressures, or both (Table $$\PageIndex{2}$$). What kind of attractive forces can exist between nonpolar molecules or atoms? This question was answered by Fritz London (1900–1954), a German physicist who later worked in the United States. In 1930, London proposed that temporary fluctuations in the electron distributions within atoms and nonpolar molecules could result in the formation of short-lived instantaneous dipole moments, which produce attractive forces called London dispersion forces between otherwise nonpolar substances. 
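The distance dependences discussed in this section can be checked with quick arithmetic: an energy that scales as 1/rⁿ shrinks by a factor of 2ⁿ when the separation doubles.

```python
# How much does the attractive energy E ~ 1/r**n shrink when r doubles?
def energy_ratio(n, factor=2):
    """Factor by which E = 1/r**n decreases when r grows by `factor`."""
    return factor ** n

print(energy_ratio(1))  # ion-ion (1/r):          2  (energy halves)
print(energy_ratio(3))  # dipole-dipole (1/r^3):  8
print(energy_ratio(6))  # dispersion (1/r^6):    64
```

These are the 2-fold, 8-fold, and 64-fold decreases quoted in the text for ion–ion, dipole–dipole, and dispersion interactions, respectively.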
Table $$\PageIndex{2}$$: Normal Melting and Boiling Points of Some Elements and Nonpolar Compounds

| Substance | Molar Mass (g/mol) | Melting Point (°C) | Boiling Point (°C) |
| --- | --- | --- | --- |
| Ar | 40 | −189.4 | −185.9 |
| Xe | 131 | −111.8 | −108.1 |
| N2 | 28 | −210 | −195.8 |
| O2 | 32 | −218.8 | −183.0 |
| F2 | 38 | −219.7 | −188.1 |
| I2 | 254 | 113.7 | 184.4 |
| CH4 | 16 | −182.5 | −161.5 |

Consider a pair of adjacent He atoms, for example. On average, the two electrons in each He atom are uniformly distributed around the nucleus. Because the electrons are in constant motion, however, their distribution in one atom is likely to be asymmetrical at any given instant, resulting in an instantaneous dipole moment. As shown in part (a) in Figure $$\PageIndex{3}$$, the instantaneous dipole moment on one atom can interact with the electrons in an adjacent atom, pulling them toward the positive end of the instantaneous dipole or repelling them from the negative end. The net effect is that the first atom causes the temporary formation of a dipole, called an induced dipole, in the second. Interactions between these temporary dipoles cause atoms to be attracted to one another. These attractive interactions are weak and fall off rapidly with increasing distance. London was able to show with quantum mechanics that the attractive energy between molecules due to temporary dipole–induced dipole interactions falls off as 1/r⁶. Doubling the distance therefore decreases the attractive energy by 2⁶, or 64-fold.

Figure $$\PageIndex{3}$$: Instantaneous Dipole Moments. The formation of an instantaneous dipole moment on one He atom (a) or an H2 molecule (b) results in the formation of an induced dipole on an adjacent atom or molecule. Instantaneous dipole–induced dipole interactions between nonpolar molecules can produce intermolecular attractions just as they produce interatomic attractions in monatomic substances like Xe.
This effect, illustrated for two H2 molecules in part (b) in Figure $$\PageIndex{3}$$, tends to become more pronounced as atomic and molecular masses increase (Table $$\PageIndex{2}$$). For example, Xe boils at −108.1°C, whereas He boils at −269°C. The reason for this trend is that the strength of London dispersion forces is related to the ease with which the electron distribution in a given atom can be perturbed. In small atoms such as He, the two 1s electrons are held close to the nucleus in a very small volume, and electron–electron repulsions are strong enough to prevent significant asymmetry in their distribution. In larger atoms such as Xe, however, the outer electrons are much less strongly attracted to the nucleus because of filled intervening shells. As a result, it is relatively easy to temporarily deform the electron distribution to generate an instantaneous or induced dipole. The ease of deformation of the electron distribution in an atom or molecule is called its polarizability. Because the electron distribution is more easily perturbed in large, heavy species than in small, light species, we say that heavier substances tend to be much more polarizable than lighter ones. For similar substances, London dispersion forces get stronger with increasing molecular size. The polarizability of a substance also determines how it interacts with ions and species that possess permanent dipoles. Thus London dispersion forces are responsible for the general trend toward higher boiling points with increased molecular mass and greater surface area in a homologous series of compounds, such as the alkanes (part (a) in Figure $$\PageIndex{4}$$). The strengths of London dispersion forces also depend significantly on molecular shape because shape determines how much of one molecule can interact with its neighboring molecules at any given time. 
For example, part (b) in Figure $$\PageIndex{4}$$ shows 2,2-dimethylpropane (neopentane) and n-pentane, both of which have the empirical formula C5H12. Neopentane is almost spherical, with a small surface area for intermolecular interactions, whereas n-pentane has an extended conformation that enables it to come into close contact with other n-pentane molecules. As a result, the boiling point of neopentane (9.5°C) is more than 25°C lower than the boiling point of n-pentane (36.1°C).

Figure $$\PageIndex{4}$$: Mass and Surface Area Affect the Strength of London Dispersion Forces. (a) In this series of four simple alkanes, larger molecules have stronger London forces between them than smaller molecules and consequently higher boiling points. (b) Linear n-pentane molecules have a larger surface area and stronger intermolecular forces than spherical neopentane molecules. As a result, neopentane is a gas at room temperature, whereas n-pentane is a volatile liquid.

All molecules, whether polar or nonpolar, are attracted to one another by London dispersion forces in addition to any other attractive forces that may be present. In general, however, dipole–dipole interactions in small polar molecules are significantly stronger than London dispersion forces, so the former predominate.

Example $$\PageIndex{2}$$

Arrange n-butane, propane, 2-methylpropane [isobutane, (CH3)2CHCH3], and n-pentane in order of increasing boiling points.

Given: compounds

Asked for: order of increasing boiling points

Strategy: Determine the intermolecular forces in the compounds and then arrange the compounds according to the strength of those forces. The substance with the weakest forces will have the lowest boiling point.

Solution: The four compounds are alkanes and nonpolar, so London dispersion forces are the only important intermolecular forces.
These forces are generally stronger with increasing molecular mass, so propane should have the lowest boiling point and n-pentane should have the highest, with the two butane isomers falling in between. Of the two butane isomers, 2-methylpropane is more compact, and n-butane has the more extended shape. Consequently, we expect intermolecular interactions for n-butane to be stronger due to its larger surface area, resulting in a higher boiling point. The overall order is thus as follows, with actual boiling points in parentheses: propane (−42.1°C) < 2-methylpropane (−11.7°C) < n-butane (−0.5°C) < n-pentane (36.1°C). Exercise $$\PageIndex{2}$$ Arrange GeH4, SiCl4, SiH4, CH4, and GeCl4 in order of decreasing boiling points. GeCl4 (87°C) > SiCl4 (57.6°C) > GeH4 (−88.5°C) > SiH4 (−111.8°C) > CH4 (−161°C) ### Hydrogen Bonds Molecules with hydrogen atoms bonded to electronegative atoms such as O, N, and F (and to a much lesser extent Cl and S) tend to exhibit unusually strong intermolecular interactions. These result in much higher boiling points than are observed for substances in which London dispersion forces dominate, as illustrated for the covalent hydrides of elements of groups 14–17 in Figure $$\PageIndex{5}$$. Methane and its heavier congeners in group 14 form a series whose boiling points increase smoothly with increasing molar mass. This is the expected trend in nonpolar molecules, for which London dispersion forces are the exclusive intermolecular forces. In contrast, the hydrides of the lightest members of groups 15–17 have boiling points that are more than 100°C greater than predicted on the basis of their molar masses. The effect is most dramatic for water: if we extend the straight line connecting the points for H2Te and H2Se to the line for period 2, we obtain an estimated boiling point of −130°C for water! Imagine the implications for life on Earth if water boiled at −130°C rather than 100°C. 
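The contrast between the two group trends can be illustrated numerically. The group 14 boiling points below appear in this section's exercises and tables; the group 16 values other than water's are standard literature figures, not taken from this section:

```python
# Boiling points in degrees Celsius. The group 14 hydrides (no hydrogen
# bonding) rise smoothly with molar mass; in group 16 the lightest member,
# H2O, is anomalously high because of hydrogen bonding.
group14 = {"CH4": -161.5, "SiH4": -111.8, "GeH4": -88.5}
group16 = {"H2O": 100.0, "H2S": -60.0, "H2Se": -41.3, "H2Te": -2.0}

# Group 14: boiling point increases monotonically down the group.
bps14 = list(group14.values())
print(bps14 == sorted(bps14))  # -> True

# Group 16: the trend among the heavier hydrides would put the lightest
# member far below 0 degrees C, yet H2O boils at 100 degrees C.
heavier = [bp for name, bp in group16.items() if name != "H2O"]
print(group16["H2O"] - max(heavier))  # H2O sits ~100 degrees above the trend
```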
Figure $$\PageIndex{5}$$: The Effects of Hydrogen Bonding on Boiling Points. These plots of the boiling points of the covalent hydrides of the elements of groups 14–17 show that the boiling points of the lightest members of each series for which hydrogen bonding is possible (HF, NH3, and H2O) are anomalously high for compounds with such low molecular masses. Why do strong intermolecular forces produce such anomalously high boiling points and other unusual properties, such as high enthalpies of vaporization and high melting points? The answer lies in the highly polar nature of the bonds between hydrogen and very electronegative elements such as O, N, and F. The large difference in electronegativity results in a large partial positive charge on hydrogen and a correspondingly large partial negative charge on the O, N, or F atom. Consequently, H–O, H–N, and H–F bonds have very large bond dipoles that can interact strongly with one another. Because a hydrogen atom is so small, these dipoles can also approach one another more closely than most other dipoles. The combination of large bond dipoles and short dipole–dipole distances results in very strong dipole–dipole interactions called hydrogen bonds, as shown for ice in Figure $$\PageIndex{6}$$. A hydrogen bond is usually indicated by a dotted line between the hydrogen atom attached to O, N, or F (the hydrogen bond donor) and the atom that has the lone pair of electrons (the hydrogen bond acceptor). Because each water molecule contains two hydrogen atoms and two lone pairs, a tetrahedral arrangement maximizes the number of hydrogen bonds that can be formed. In the structure of ice, each oxygen atom is surrounded by a distorted tetrahedron of hydrogen atoms that form bridges to the oxygen atoms of adjacent water molecules. The bridging hydrogen atoms are not equidistant from the two oxygen atoms they connect, however. Instead, each hydrogen atom is 101 pm from one oxygen and 174 pm from the other. 
In contrast, each oxygen atom is bonded to two H atoms at the shorter distance and two at the longer distance, corresponding to two O–H covalent bonds and two O⋅⋅⋅H hydrogen bonds from adjacent water molecules, respectively. The resulting open, cagelike structure of ice means that the solid is actually slightly less dense than the liquid, which explains why ice floats on water rather than sinks. Figure $$\PageIndex{6}$$: The Hydrogen-Bonded Structure of Ice. Each water molecule accepts two hydrogen bonds from two other water molecules and donates two hydrogen atoms to form hydrogen bonds with two more water molecules, producing an open, cagelike structure. The structure of liquid water is very similar, but in the liquid, the hydrogen bonds are continually broken and formed because of rapid molecular motion. Hydrogen bond formation requires both a hydrogen bond donor and a hydrogen bond acceptor. Because ice is less dense than liquid water, rivers, lakes, and oceans freeze from the top down. In fact, the ice forms a protective surface layer that insulates the rest of the water, allowing fish and other organisms to survive in the lower levels of a frozen lake or sea. If ice were denser than the liquid, the ice formed at the surface in cold weather would sink as fast as it formed. Bodies of water would freeze from the bottom up, which would be lethal for most aquatic creatures. The expansion of water when freezing also explains why automobile or boat engines must be protected by “antifreeze” and why unprotected pipes in houses break if they are allowed to freeze. Example $$\PageIndex{3}$$ Considering CH3OH, C2H6, Xe, and (CH3)3N, which can form hydrogen bonds with themselves? Draw the hydrogen-bonded structures. Given: compounds Asked for: formation of hydrogen bonds and structure Strategy: 1. Identify the compounds with a hydrogen atom attached to O, N, or F. These are likely to be able to act as hydrogen bond donors. 2. 
Of the compounds that can act as hydrogen bond donors, identify those that also contain lone pairs of electrons, which allow them to be hydrogen bond acceptors. If a substance is both a hydrogen donor and a hydrogen bond acceptor, draw a structure showing the hydrogen bonding. Solution: A Of the species listed, xenon (Xe), ethane (C2H6), and trimethylamine [(CH3)3N] do not contain a hydrogen atom attached to O, N, or F; hence they cannot act as hydrogen bond donors. B The one compound that can act as a hydrogen bond donor, methanol (CH3OH), contains both a hydrogen atom attached to O (making it a hydrogen bond donor) and two lone pairs of electrons on O (making it a hydrogen bond acceptor); methanol can thus form hydrogen bonds by acting as either a hydrogen bond donor or a hydrogen bond acceptor. The hydrogen-bonded structure of methanol is as follows: Exercise $$\PageIndex{3}$$ Considering CH3CO2H, (CH3)3N, NH3, and CH3F, which can form hydrogen bonds with themselves? Draw the hydrogen-bonded structures. CH3CO2H and NH3; Although hydrogen bonds are significantly weaker than covalent bonds, with typical dissociation energies of only 15–25 kJ/mol, they have a significant influence on the physical properties of a compound. Compounds such as HF can form only two hydrogen bonds at a time as can, on average, pure liquid NH3. Consequently, even though their molecular masses are similar to that of water, their boiling points are significantly lower than the boiling point of water, which forms four hydrogen bonds at a time. Example $$\PageIndex{4}$$: Buckyballs Arrange C60 (buckminsterfullerene, which has a cage structure), NaCl, He, Ar, and N2O in order of increasing boiling points. Given: compounds Asked for: order of increasing boiling points Strategy: Identify the intermolecular forces in each compound and then arrange the compounds according to the strength of those forces. The substance with the weakest forces will have the lowest boiling point. 
Solution:

Electrostatic interactions are strongest for an ionic compound, so we expect NaCl to have the highest boiling point. To predict the relative boiling points of the other compounds, we must consider their polarity (for dipole–dipole interactions), their ability to form hydrogen bonds, and their molar mass (for London dispersion forces). Helium is nonpolar and by far the lightest, so it should have the lowest boiling point. Argon and N2O have very similar molar masses (40 and 44 g/mol, respectively), but N2O is polar while Ar is not. Consequently, N2O should have a higher boiling point. A C60 molecule is nonpolar, but its molar mass is 720 g/mol, much greater than that of Ar or N2O. Because the boiling points of nonpolar substances increase rapidly with molecular mass, C60 should boil at a higher temperature than the other nonionic substances. The predicted order is thus as follows, with actual boiling points in parentheses: He (−269°C) < Ar (−185.7°C) < N2O (−88.5°C) < C60 (>280°C) < NaCl (1465°C).

Exercise $$\PageIndex{4}$$

Arrange 2,4-dimethylheptane, Ne, CS2, Cl2, and KBr in order of decreasing boiling points.

Answer: KBr (1435°C) > 2,4-dimethylheptane (132.9°C) > CS2 (46.6°C) > Cl2 (−34.6°C) > Ne (−246°C)

Example $$\PageIndex{5}$$

Identify the most significant intermolecular force in each substance.

1. C3H8
2. CH3OH
3. H2S

Solution

a. Although C–H bonds are polar, they are only minimally polar. The most significant intermolecular force for this substance would be dispersion forces.

b. This molecule has an H atom bonded to an O atom, so it will experience hydrogen bonding.

c. Although this molecule does not experience hydrogen bonding, the Lewis electron dot diagram and VSEPR indicate that it is bent, so it has a permanent dipole. The most significant force in this substance is dipole–dipole interaction.

Exercise $$\PageIndex{6}$$

Identify the most significant intermolecular force in each substance.

1. HF
2. HCl
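The ranking strategy in Example 4 can be sketched as a tiny program. The force categories and molar masses below simply restate the worked solution; the key is a toy model (ionic compounds highest, then molar mass as a stand-in for dispersion strength), not a general boiling-point predictor:

```python
# Toy version of the Example 4 strategy: ionic compounds boil highest;
# among molecular species this sketch orders by molar mass (a stand-in
# for dispersion strength). The worked solution ranks N2O above Ar by
# polarity; the toy happens to agree because N2O is also slightly heavier.
compounds = {
    "He":   ("nonpolar", 4.0),
    "Ar":   ("nonpolar", 39.9),
    "N2O":  ("polar",    44.0),
    "C60":  ("nonpolar", 720.0),
    "NaCl": ("ionic",    58.4),
}

# Sort key: ionic last (highest boiling point), then by molar mass.
order = sorted(compounds, key=lambda n: (compounds[n][0] == "ionic",
                                         compounds[n][1]))
print(" < ".join(order))  # He < Ar < N2O < C60 < NaCl
```

This reproduces the predicted order from the worked solution.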
https://vlsi.jp/OpenMPWSRAM_eng.html
# Using SRAM with OpenMPW

If you don’t have time to read this, we have prepared a repository that serves as an example, so refer to it as needed and do your best.

https://github.com/Cra2yPierr0t/sky130_sram_test

I had a really bad time when I used self-created memory in OpenMPW. Apparently, life is much easier if you use OpenRAM-generated memory, so here are the results of my various trial-and-error attempts.

OpenRAM is an OSS tool that can generate SRAM of any configuration.

https://openram.org

It is a nice piece of OSS, but I am not going to use it directly this time. Or rather, I don’t know how to use it. Fortunately, 1kB and 2kB SRAMs built with OpenRAM are installed along with the SKY130 PDK when using Caravel, so we will use those this time. The Verilog and GDSII of the SRAMs are provided in the following directory.

$PDK_ROOT/sky130B/libs.ref/sky130_sram_macros/

This directory provides 32x256 and 8x1024 1kB memories and a 32x512 2kB memory. The steps for using this memory are as follows:

1. Place memory in user_project_wrapper.v
2. Edit config.tcl
3. Specify location in macro.cfg
4. Build

The following is an example using the 1kB 32x256 memory. Other memories can basically be used in the same way, but there are some points to keep in mind for the 8x1024 1kB memory, which will be explained later.

## 1. Place memory in user_project_wrapper.v

First, instantiate the memory in user_project_wrapper.v as follows. As with other modules, you can name the instances as you like, and you can instantiate multiple instances. You should also include the part enclosed by the Verilog `` `ifdef USE_POWER_PINS `` ... `` `endif `` directives.

// write : when web0 = 0, csb0 = 0
// read : when web0 = 1, csb0 = 0, maybe 3 clock delay...?
// read : when csb1 = 0, maybe 3 clock delay...?
sky130_sram_1kbyte_1rw1r_32x256_8 rx_mem(
`ifdef USE_POWER_PINS
    .vccd1  (vccd1 ),
    .vssd1  (vssd1 ),
`endif
    // RW
    .clk0   (clk0),   // clock
    .csb0   (csb0),   // active low chip select
    .web0   (web0),   // active low write control
    .wmask0 (wmask0), // write mask (4 bit)
    .addr0  (addr0),  // addr (8 bit)
    .din0   (din0),   // data in (32 bit)
    .dout0  (dout0),  // data out (32 bit)
    // R
    .clk1   (clk1),   // clock
    .csb1   (csb1),   // active low chip select
    .addr1  (addr1),  // addr (8 bit)
    .dout1  (dout1)   // data out (32 bit)
);

The role of each IO in the SRAM is described below. The trailing numbers are omitted.

| Name  | I/O | Width             | Role                                            |
|-------|-----|-------------------|-------------------------------------------------|
| clk   | I   | 1 bit             | Clock input; writes and reads on falling edge   |
| csb   | I   | 1 bit             | Chip select bar; 0 to select, 1 to deselect     |
| web   | I   | 1 bit             | Write enable bar; 0 to write, 1 to read         |
| wmask | I   | 4 bit (per SRAM)  | Byte enable, in 8-bit units                     |
| addr  | I   | 8 bit (per SRAM)  | Address                                         |
| din   | I   | 32 bit (per SRAM) | Data input                                      |
| dout  | O   | 32 bit (per SRAM) | Data output                                     |

Data writes and reads follow the waveform below, and there is a delay on readout. Also, the period of clk needs to be 20 ns or longer. As for the delay in data readout, it is better to wait a little, because the description that causes the delay is written in the SRAM code. I am not familiar with semiconductor design, so I can only guess, but in OpenMPW, megacells like SRAM can only be instantiated in user_project_wrapper.v.

## 2. Edit config.tcl

Once you have connected your module and the SRAM, the next step is to edit the config.tcl of the user_project_wrapper. An example of an edited config.tcl can be found here.

https://github.com/Cra2yPierr0t/Vthernet-SoC/blob/main/openlane/user_project_wrapper/config.tcl

The four variables to be edited are FP_PDN_MACRO_HOOKS, VERILOG_FILES_BLACKBOX, EXTRA_LEFS, and EXTRA_GDS_FILES.

### FP_PDN_MACRO_HOOKS

Set `<instance name> <vdd_net> <gnd_net> <vdd_pin> <gnd_pin>` for FP_PDN_MACRO_HOOKS. Basically, write `instance name vccd1 vssd1 vccd1 vssd1`.
set ::env(FP_PDN_MACRO_HOOKS) "\
    rx_mem vccd1 vssd1 vccd1 vssd1"

If there are multiple instances, separate them with a comma and a space as follows. If a space is not placed after the comma, an error will occur.

set ::env(FP_PDN_MACRO_HOOKS) "\
    rx_mem1 vccd1 vssd1 vccd1 vssd1, \
    rx_mem2 vccd1 vssd1 vccd1 vssd1"

### VERILOG_FILES_BLACKBOX

VERILOG_FILES_BLACKBOX is set to the path of the SRAM's Verilog file. This file is not used for synthesis, but is referenced for simulation. Below is an example using the 1kB 32x256 memory.

$::env(PDK_ROOT)/sky130B/libs.ref/sky130_sram_macros/verilog/sky130_sram_1kbyte_1rw1r_32x256_8.v \

### EXTRA_LEFS

EXTRA_LEFS should be set to the path of the LEF file of the SRAM to be used. The following is an example using the 1kB 32x256 memory.

$::env(PDK_ROOT)/sky130B/libs.ref/sky130_sram_macros/lef/sky130_sram_1kbyte_1rw1r_32x256_8.lef \

### EXTRA_GDS_FILES

EXTRA_GDS_FILES should be set to the path of the GDSII file of the SRAM to be used. The following is an example using the 1kB 32x256 memory.

$::env(PDK_ROOT)/sky130B/libs.ref/sky130_sram_macros/gds/sky130_sram_1kbyte_1rw1r_32x256_8.gds \

### DRC error avoidance

Added by suggestion. Add the following three lines to config.tcl to avoid DRC errors caused by magic.

set ::env(MAGIC_DRC_USE_GDS) 0
set ::env(RUN_MAGIC_DRC) 0
set ::env(QUIT_ON_MAGIC_DRC) 0

If you use OpenRAM SRAMs, magic will inevitably generate DRC errors. With these settings magic will not run DRC, but the design will still pass the precheck (in most cases). This completes the minimal editing of config.tcl.

## 3. Specify location in macro.cfg

Next, use macro.cfg to specify the location on the chip where the memory will be placed. The format is `instance name <X coordinate> <Y coordinate> <Orientation>`. I don’t know what Orientation means, but basically it is OK to set it to N. The unit of the coordinates is μm, and they specify the position of the lower-left corner of the megacell. Set the path of macro.cfg to MACRO_PLACEMENT_CFG.
This probably already exists in config.tcl. The following is an example with multiple instances.

rx_mem1 100 100 N
rx_mem2 700 100 N

The following is the generated layout, with the lower-left corner of the memory at (100, 100) as specified. The manual says that MACRO_PLACEMENT_CFG does not need to be set, but it did not work as far as the author tried.

## 4. Build

Just run the normal Caravel build of user_project_wrapper:

make user_project_wrapper

If there are no problems, the memory should appear in the layout.

## Notes on using sky130_sram_1kbyte_1rw1r_8x1024_8

At the time of writing (mpw-7a), the value of NUM_WMASKS must be set to 2, otherwise an error will occur. The OpenRAM developer said he fixed it, so you may no longer need this, but try it if the error occurs when using sky130_sram_1kbyte_1rw1r_8x1024_8. If you want to be safe, you should use the 32x256 variant. Here is an example; I changed NUM_WMASKS and wmask0.

sky130_sram_1kbyte_1rw1r_8x1024_8 #(.NUM_WMASKS(2)) rx_mem(
    ...
    .web0 (rx_data_vb ), // active low write control
    ...

## If you want a bigger memory

I want it too. I think you can generate one with OpenRAM, or you can make a module that instantiates a lot of 1kB SRAMs and decodes addresses nicely. I want to learn to use OpenRAM properly, because how to create memory in OpenMPW feels like a matter of life and death.
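For the "instantiate a lot of 1kB SRAMs and decode addresses" idea, a rough Verilog sketch might look like the following. This is untested; all names except the macro name are mine, and the one-cycle bank register is only a guess at lining up with the macro's read latency, so treat it purely as a starting point:

```verilog
// Untested sketch: 4 KiB from four 1 kB 32x256 macros, with the upper
// address bits selecting the bank. All signal names are hypothetical.
module sram_4kb (
    input         clk,
    input         csb,        // active low
    input         web,        // active low
    input  [3:0]  wmask,
    input  [9:0]  addr,       // [9:8] bank select, [7:0] word address
    input  [31:0] din,
    output [31:0] dout
);
    wire [1:0]  bank = addr[9:8];
    wire [31:0] dout_bank [3:0];
    reg  [1:0]  bank_q;
    // Register the bank select to roughly match one cycle of read
    // latency; the real macro has more delay than this.
    always @(negedge clk) bank_q <= bank;

    genvar i;
    generate
        for (i = 0; i < 4; i = i + 1) begin : banks
            sky130_sram_1kbyte_1rw1r_32x256_8 mem (
                .clk0   (clk),
                .csb0   (csb | (bank != i)),   // enable only one bank
                .web0   (web),
                .wmask0 (wmask),
                .addr0  (addr[7:0]),
                .din0   (din),
                .dout0  (dout_bank[i]),
                // unused read-only port, tied off
                .clk1   (1'b0),
                .csb1   (1'b1),
                .addr1  (8'h00),
                .dout1  ()
            );
        end
    endgenerate

    assign dout = dout_bank[bank_q];
endmodule
```

Remember that each instance still needs its own FP_PDN_MACRO_HOOKS entry and macro.cfg placement as described above.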
https://dmoj.ca/problem/sac22cc2p5
## SAC '22 Code Challenge 2 P5 - Even Colouring

Points: 10 (partial)
Time limit: 1.0s (Java 1.5s, Python 2.0s)
Memory limit: 256M

Ever since Mr. DeMello forced you to clean the campus, he has been feeling remorseful, so he only asks you to maintain his new array. Initially, he has an array of elements and will make queries of two types:

1 i j: Update the element at index i to have a value of j.
2 L R: Output the sum of every second element starting at L (and including L) between L and R.

#### Input Specification

The first line will contain two integers: the number of elements and the number of queries. The next line will contain the space-separated integers of the array. The following lines will each contain one of the queries listed above.

Note: Fast I/O might be required to fully solve this problem (e.g., BufferedReader for Java).

#### Output Specification

For every type 2 query, output the sum as specified.

#### Sample Input

5 5
1 5 6 9 3
2 1 5
2 2 5
1 2 -4
2 1 5
2 2 5

#### Sample Output

10
14
10
5
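One standard approach (a sketch of my own, not an official editorial) is to keep two Fenwick trees (binary indexed trees), one over odd indices and one over even indices, so that "sum of every second element from L to R" becomes a parity-restricted range sum. All names below are mine:

```python
# Two Fenwick trees, split by index parity: a type 2 query sums the
# positions in [L, R] that share L's parity; a type 1 query is a point
# assignment turned into a delta update on the matching tree.
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def add(self, i, delta):      # a[i] += delta, 1-indexed
        while i <= self.n:
            self.t[i] += delta
            i += i & -i

    def prefix(self, i):          # sum of a[1..i]
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def process(a, queries):
    """a: 1-indexed values as a plain list (a[0] unused)."""
    n = len(a) - 1
    trees = [Fenwick(n), Fenwick(n)]   # trees[p] holds indices i with i % 2 == p
    for i in range(1, n + 1):
        trees[i % 2].add(i, a[i])
    out = []
    for q, x, y in queries:
        if q == 1:                      # "1 i j": set a[x] = y
            trees[x % 2].add(x, y - a[x])
            a[x] = y
        else:                           # "2 L R": parity-of-L range sum
            t = trees[x % 2]
            out.append(t.prefix(y) - t.prefix(x - 1))
    return out
```

With the sample input, `process([0, 1, 5, 6, 9, 3], [(2, 1, 5), (2, 2, 5), (1, 2, -4), (2, 1, 5), (2, 2, 5)])` returns `[10, 14, 10, 5]`, matching the sample output. Both operations are O(log n), which fits the limits (with fast I/O where noted).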
http://www.sevenforums.com/general-discussion/222669-there-way-hide-files-only-one-particular-extension-type.html
# Windows 7: Is there a way to hide files of only one particular extension/type?

03 Apr 2012 #1 (Microsoft Windows 7 Home Premium 64-bit, 149 posts)

Is there a way to hide files of only one particular extension/type?

I use a program that saves a data file for every image that I process. It's called DxO Optics. This is unlike Photoshop Lightroom 4, which stores that info in a database. It's inconvenient to scroll thru these files, which show up as icons. Is there a way to make files invisible by type? These ones have the extension .dop and just show a white page. So far all I see is hide/show in Tools/View, but I hope there's a way to just hide those files in Explorer. Any chance of that?

03 Apr 2012 #2 (Windows 7 Home Premium 64 Bit, Colorado, 11,308 posts)

You can create a batch script (call it hide.bat for instance) in the folder and put in the batch file:

attrib +h *.dop

Then just run the batch file when you want to hide the files. If you need step by step instructions for how to create a batch file:

- Open the folder in question
- Click on Organize (upper left corner of the window under the back and forward buttons)
- Folder and search options
- View tab
- Uncheck "Hide extensions for known file types"
- Click OK
- Right click in an empty area of the folder (careful not to right click on a file)
- Click New -> Text Document
- Delete all the text in the text document name, including the .txt part
- Name the file hide.bat
- Right click hide.bat and click Edit
- Type in the command I gave above: attrib +h *.dop
- Save, and exit.
That's it; you can hide extensions to known file types again if you want.

03 Apr 2012 #3 (Microsoft Windows 7 Home Premium 64-bit, 149 posts)

Quote: Originally Posted by writhziden (the batch-file instructions above)

Wow! That's great, and yes, I did need step-by-step instructions. Two questions:

1) Do I have to leave the spaces in attrib +h *.dop between attrib and + and h and *?
2) Will it work in all subfolders (I have a LOT of folders in which these pesky files are visible)?

Oh, and 3) If I need to see them again, just restart the computer?

03 Apr 2012 #4 (Microsoft Windows 7 Home Premium 64-bit, 149 posts)

Oops! I just realized that the file extension is .CR2.dop and I'm worried that I'd hide all the .CR2 files....

03 Apr 2012 #5 (Windows 7 Ultimate x64 | Ubuntu 12.04 x64 LTS, 1,088 posts)

Quote: 1) Do I have to leave the spaces in attrib +h *.dop between attrib and + and h and *?

Yes, there is indeed a space between them.

Code: attrib +h *.dop

Quote: 2) Will it work in all subfolders (I have a LOT of folders in which these pesky files are visible)?
You'll have to edit the bat file to change to the directory first, i.e. add

Code: cd \path\to\directory

before the attrib line. Someone correct me if I'm wrong.

Quote: 3) If I need to see them again, just restart the computer?

Restarting won't do, as the 'hidden' attribute has been added to the files. You'll have to either select 'Show hidden files and folders' from Tools > Folder Options, or edit the bat file w/ the +h replaced w/ -h.

Quote: Oops! I just realized that the file extension is .CR2.dop and I'm worried that I'd hide all the .CR2 files....

The extension of this type of file would still be .dop. Windows treats the text after the last dot as the extension.

03 Apr 2012 #6 (Windows 7 Home Premium 64 Bit, Colorado, 11,308 posts)

Another option is to put the batch file in the top level directory. You could make two batch files, one with a +h called hide.bat and one with a -h called unhide.bat, for instance. In the top level, create the hide.bat file with this command:

attrib /s /d +h *.dop

and the unhide.bat file with:

attrib /s /d -h *.dop

These will hide/unhide all files in the current directory and every subdirectory within that directory.

04 Apr 2012 #7 (Microsoft Windows 7 Home Premium 64-bit, 149 posts)

Quote: Originally Posted by EzioAuditore (the answers above)

Quote: Oops!
I just realized that the file extension is .CR2.dop and I'm worried that I'd hide all the .CR2 files....

The extension of this type of file would still be .dop. Windows treats the text after the last dot as the extension.

Thanks not only for an answer but a great deal of insight into how these things work.

04 Apr 2012 #8 (Microsoft Windows 7 Home Premium 64-bit, 149 posts)

Quote: Originally Posted by writhziden (the hide.bat/unhide.bat suggestion above)

Thanks for a very elegant and simple solution. I take it s / d means subdirectory... very nice!

04 Apr 2012 #9 (Windows 7 Home Premium 64 Bit, Colorado, 11,308 posts)

Quote: Originally Posted by pxfragonard: Thanks for a very elegant and simple solution. I take it s / d means subdirectory... very nice!

The /s means all files within the directory containing that extension will have the attribute applied. /d, as you guessed, means all directories and subdirectories will be checked for files with that extension. The directories themselves will not be hidden, but the files with that extension contained in the directories will have the hidden attribute applied or removed.

04 Apr 2012 #10 (Windows 7 Ultimate x64 | Ubuntu 12.04 x64 LTS, 1,088 posts)

Quote: Originally Posted by pxfragonard, Quote: Originally Posted by writhziden: Another option is to put the batch file in the top level directory. You could make two batch files, one with a +h called hide.bat and one with a -h called unhide.bat for instance.
In the top level, create the hide.bat file with this command:

attrib /s /d +h *.dop

and the unhide.bat file with:

attrib /s /d -h *.dop

These will hide/unhide all files in the current directory and every subdirectory within that directory.

Thanks for a very elegant and simple solution. I take it s / d means subdirectory... very nice!

Here's what they mean:

/S Processes files in all directories in the specified path.
/D Processes folders as well.

You can type attrib /? in a command window to see what switches it supports and what they do. If you just type /? you'll get a list of all the commands with their respective brief explanations... You can do wonders (as I like to say) by mastering the command line.. :P
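Putting the thread's suggestions together, a complete hide.bat might look like this. The cd line using %~dp0 (which expands to the drive and path of the batch file itself) is my addition, so the script works no matter where it is run from; swap +h for -h to make the matching unhide.bat:

```batch
@echo off
rem Change to the directory this batch file lives in, so it can be
rem run from anywhere (double-clicked, shortcut, or command window).
cd /d "%~dp0"
rem Hide every .dop sidecar file here and in all subdirectories.
attrib /s /d +h *.dop
```

As noted above, this only hides the .dop files; the .CR2 images are untouched because Windows treats only the text after the last dot as the extension.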
https://proofwiki.org/wiki/Mathematician:George_David_Birkhoff
# Mathematician:George David Birkhoff

## Mathematician

American mathematician best known for what is now known as the Ergodic Theorem. The father of Garrett Birkhoff.

## History

• Born: 21 March 1884, Overisel, Michigan
• Died: 12 November 1944, Cambridge, Massachusetts

## Theorems and Definitions

Axioms named for George David Birkhoff can be found here.

## Publications

• July 1904: On the Integral Divisors of $a^n-b^n$ (Ann. Math. Vol. 5: 173 – 180) (with Harry Schultz Vandiver) www.jstor.org/stable/2007263
• 1906: General mean value and remainder theorems with applications to mechanical differentiation and quadrature
• 1907: Asymptotic Properties of Certain Ordinary Differential Equations with Applications to Boundary Value and Expansion Problems (PhD thesis)
• 1913: Proof of Poincaré's geometric theorem
• 1917: Dynamical Systems with Two Degrees of Freedom
• 1923: Relativity and Modern Physics (with R. E. Langer)
• 1927: Dynamical Systems
• 1931: Proof of the ergodic theorem
• 1932: A Set of Postulates for Plane Geometry (Based on Scale and Protractors)
• 1933: Aesthetic Measure
• 1937: Representability of Lie algebras and Lie groups by matrices
• 1938: Electricity as a Fluid
• 1941: Basic Geometry (which contains Birkhoff's Axioms)
https://www.physicsforums.com/threads/why-absolute-space-cannot-be-represented-as-fibers-on-the-axis-the-absolute-time.975388/
# Why absolute space cannot be represented as fibers on the axis of absolute time?

#### Thytanium

Summary: The relative space of Galileo and Newton can be considered as fibers of the axis of absolute time, but the absolute space-time of Aristotle cannot.

Aristotle's absolute space and time can be represented as ordered pairs (s, t), but not as fibers π(s) = t of time, as is the case for Galileo and Newton. That is to say, the space of Galileo and Newton is the projection π(s) = t onto the time axis. The space-time of Aristotle cannot be represented this way, only as ordered pairs. How is that understood?

#### fresh_42 (Mentor)

Can you give us any references for what you mean? And what are fibers of an axis?

#### martinbn

Penrose makes a distinction between Aristotelian and Galilean space-times (and Newtonian), for example in "Structure of Space-time". Maybe in "The Road to Reality" as well, but if you didn't understand it, you probably need some more background in mathematics. I don't see how discussing it here could help you. In any case, it would help if you cite your sources and ask more specific questions than "How is that understood?".

#### Thytanium

Thanks for answering, friends of the forum fresh_42 and martinbn. My question arises from Bernard Schutz's book "Geometrical Methods of Mathematical Physics," Chapter 2, Section 2.11, which deals with fiber spaces. There is a figure 2.13 that shows space as fibers of time. I have also read an article entitled "Fibrated spaces and connections in relativity" that appears on the Web, by the physicist Williams Pitter of the University of Zulia, written in Spanish, which refers to the Cartesian product of the absolute space and time of Aristotle and also mentions the space-time of Galileo and Newton, among other examples.
But in reality what I do not understand is that absolute time space cannot be represented as fibers, and relative time spaces have geometric structure of fibers, that is what I do not understand. Thank you friends for your help and for answering my questions. Grateful to you. #### fresh_42 Mentor 2018 Award It would help a lot if we had a description of the spacetime models of Aristoteles and Newton. AFAIK they are all classical Euclidean spaces, i.e. trivial fiber bundles, and then we have absolute spacetime and time as a fiber of a spatial point. In general relativity there is no distinction between time and space anymore, except that they are different coordinates. We do not have a global coordinate system anymore, but that was not what you asked. Here is a description of what we are talking about mathematically: Since you have posted this under differential geometry, the question is: What is $(E,B,p)$ in your various models? #### Thytanium Thanks friend fresh_42. Grateful to you. #### Thytanium Hello friends of the Forum. I have continued researching on this issue of the space time of Aristotle, Galileo and Newton and I have achieved some results to know what you think about that. For Aristotle, space was absolute because he considered that the earth was motionless and the center of the universe and space outside the earth or cosmic was motionless and subject to the earth, so the mainland was the only valid reference system, and excludes the mobile reference systems and therefore the space referred to the mainland was absolute. 
With Galileo and Newton things changed, because their transformations for inertial systems leave the acceleration invariant in all systems moving at constant speed in a straight line. So there are infinitely many moving inertial reference systems in addition to the mainland, and therefore the relative space of Galileo and Newton can be considered as fibres of the time axis, as shown in the figure in the attached file, in which an object at rest would be represented as a line parallel to the time axis and a moving object as a diagonal line. This fibre structure cannot be used for Aristotle's absolute space-time, because the fibre picture presupposes infinitely many inertial reference systems, while for Aristotle there is only one: the mainland. Aristotle's absolute space-time events can then only be represented as the Cartesian product of two different sets, space and time. Excuse me, because this isn't really a differential geometry topic. Thanks to all.
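For what it's worth, the triple $(E, B, p)$ that fresh_42 asks about can be written down explicitly for the Galilean case. This is a sketch of the standard construction, not taken from the thread:

```latex
% Galilean space-time as a fiber bundle over absolute time:
%   total space  E = \mathbb{R}^4   (events)
%   base         B = \mathbb{R}     (absolute time)
%   projection   \pi : E \to B ,
E \xrightarrow{\;\pi\;} B, \qquad \pi(t, x, y, z) = t,
\qquad \pi^{-1}(t) \cong \mathbb{R}^3 .
% Each fiber \pi^{-1}(t) is "space at time t". There is no canonical
% splitting E = B \times \mathbb{R}^3: every inertial frame gives a
% different trivialization, which is why Galilean space is "fibers
% over time" rather than a fixed product. Aristotelian space-time,
% with its single preferred rest frame, is the product S \times T
% of ordered pairs (s, t).
```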
http://tug.org/pipermail/texhax/2011-March/017146.html
# [texhax] Placing unknown images arbitrarily by lower left corner

Thu Mar 31 15:44:35 CEST 2011

On Mar 31, 2011, at 9:25 AM, Peter Davis wrote:
> Hmmm. I tried this with XeLaTeX, but the images are still coming out at the bottom of the page, rather than on the 396bp line.

I believe you need to move the \raisebox call to where the box is used. If you want to store additional spacing information around the box you could do this by placing invisible rules in the box.

William
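A minimal sketch of William's suggestion (my own illustration, not from the original thread; the file name `example-image` is an assumption — it ships with the `mwe` package): store the image in a box register without raising it, and apply `\raisebox` only at the point where the box is used, so the vertical shift takes effect there.

```latex
\documentclass{article}
\usepackage{graphicx}
\newsavebox{\mypic}

\begin{document}
% Store the image in a box register, with no raising applied yet.
\sbox{\mypic}{\includegraphics[width=3cm]{example-image}}

% Raise the box at the point of use; the two optional [0pt] arguments
% make the raised box occupy no extra vertical space in the paragraph.
\noindent\raisebox{396bp}[0pt][0pt]{\usebox{\mypic}}
\end{document}
```

Invisible struts (`\rule{0pt}{...}`) can be added inside the `\sbox` argument if extra spacing information should travel with the box.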
https://www.jobilize.com/trigonometry/test/evaluating-logarithms-logarithmic-functions-by-openstax
6.3 Logarithmic functions (Page 3/9)

Write the following exponential equations in logarithmic form.

1. $3^2 = 9$
2. $5^3 = 125$
3. $2^{-1} = \frac{1}{2}$

1. $3^2 = 9$ is equivalent to $\log_3(9) = 2$
2. $5^3 = 125$ is equivalent to $\log_5(125) = 3$
3. $2^{-1} = \frac{1}{2}$ is equivalent to $\log_2\left(\frac{1}{2}\right) = -1$

Evaluating logarithms

Knowing the squares, cubes, and roots of numbers allows us to evaluate many logarithms mentally. For example, consider $\log_2 8$. We ask, "To what exponent must $2$ be raised in order to get 8?" Because we already know $2^3 = 8$, it follows that $\log_2 8 = 3$.

Now consider solving $\log_7 49$ and $\log_3 27$ mentally.

• We ask, "To what exponent must 7 be raised in order to get 49?" We know $7^2 = 49$. Therefore, $\log_7 49 = 2$.
• We ask, "To what exponent must 3 be raised in order to get 27?" We know $3^3 = 27$. Therefore, $\log_3 27 = 3$.

Even some seemingly more complicated logarithms can be evaluated without a calculator. For example, let's evaluate $\log_{2/3}\frac{4}{9}$ mentally.
• We ask, "To what exponent must $\frac{2}{3}$ be raised in order to get $\frac{4}{9}$?" We know $2^2 = 4$ and $3^2 = 9$, so $\left(\frac{2}{3}\right)^2 = \frac{4}{9}$. Therefore, $\log_{2/3}\left(\frac{4}{9}\right) = 2$.

Given a logarithm of the form $y = \log_b(x)$, evaluate it mentally.

1. Rewrite the argument $x$ as a power of $b$: $b^y = x$.
2. Use previous knowledge of powers of $b$ to identify $y$ by asking, "To what exponent should $b$ be raised in order to get $x$?"

Solving logarithms mentally

Solve $y = \log_4(64)$ without using a calculator.

First we rewrite the logarithm in exponential form: $4^y = 64$. Next, we ask, "To what exponent must 4 be raised in order to get 64?" We know $4^3 = 64$. Therefore, $\log_4(64) = 3$.

Solve $y = \log_{121}(11)$ without using a calculator.

$\log_{121}(11) = \frac{1}{2}$ (recalling that $\sqrt{121} = 121^{1/2} = 11$).

Evaluating the logarithm of a reciprocal

Evaluate $y = \log_3\left(\frac{1}{27}\right)$ without using a calculator.
First we rewrite the logarithm in exponential form: $3^y = \frac{1}{27}$. Next, we ask, "To what exponent must 3 be raised in order to get $\frac{1}{27}$?" We know $3^3 = 27$, but what must we do to get the reciprocal, $\frac{1}{27}$? Recall from working with exponents that $b^{-a} = \frac{1}{b^a}$. We use this information to write

$3^{-3} = \frac{1}{3^3} = \frac{1}{27}$

Therefore, $\log_3\left(\frac{1}{27}\right) = -3$.

Evaluate $y = \log_2\left(\frac{1}{32}\right)$ without using a calculator.

$\log_2\left(\frac{1}{32}\right) = -5$

Using common logarithms

Sometimes we may see a logarithm written without a base. In this case, we assume that the base is 10. In other words, the expression $\log(x)$ means $\log_{10}(x)$. We call a base-10 logarithm a common logarithm. Common logarithms are used to measure the Richter scale mentioned at the beginning of the section. Scales for measuring the brightness of stars and the pH of acids and bases also use common logarithms.
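The "rewrite and ask" procedure lends itself to a short program. The sketch below (my illustration, not part of the original lesson) searches for an integer exponent $y$ with $b^y = x$, using exact rational arithmetic so that reciprocals and fractional bases work without rounding error:

```python
from fractions import Fraction

def eval_log(b, x, max_exp=20):
    """Evaluate log_b(x) by asking: to what integer exponent must b
    be raised to get x?  Returns None when no integer exponent works
    (e.g. log_121(11) = 1/2 is outside this function's reach)."""
    b, x = Fraction(b), Fraction(x)
    for y in range(-max_exp, max_exp + 1):
        if b ** y == x:        # exact comparison, no floating point
            return y
    return None

# The worked examples from the text:
print(eval_log(2, 8))                             # log_2 8 = 3
print(eval_log(7, 49))                            # log_7 49 = 2
print(eval_log(Fraction(2, 3), Fraction(4, 9)))   # log_{2/3}(4/9) = 2
print(eval_log(3, Fraction(1, 27)))               # log_3(1/27) = -3
```

Note that the search only finds integer exponents; a half-integer answer such as $\log_{121}(11) = \frac{1}{2}$ makes the function return `None`, mirroring the point in the text that such cases need the extra "square root" observation.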
Definition of the common logarithm

A common logarithm is a logarithm with base $10$. We write $\log_{10}(x)$ simply as $\log(x)$. The common logarithm of a positive number $x$ satisfies the following definition.

For $x > 0$, $y = \log(x)$ is equivalent to $10^y = x$.

We read $\log(x)$ as "the logarithm with base $10$ of $x$" or "log base 10 of $x$." The logarithm $y$ is the exponent to which $10$ must be raised to get $x$.

Given a common logarithm of the form $y = \log(x)$, evaluate it mentally.

1. Rewrite the argument $x$ as a power of $10$: $10^y = x$.
2. Use previous knowledge of powers of $10$ to identify $y$ by asking, "To what exponent must $10$ be raised in order to get $x$?"
http://codeforces.com/problemset/problem/659/A
A. Round House

time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

Vasya lives in a round building, whose entrances are numbered sequentially by integers from 1 to n. Entrance n and entrance 1 are adjacent.

Today Vasya got bored and decided to take a walk in the yard. Vasya lives in entrance a and he decided that during his walk he will move around the house b entrances in the direction of increasing numbers (in this order entrance n should be followed by entrance 1). The negative value of b corresponds to moving |b| entrances in the order of decreasing numbers (in this order entrance 1 is followed by entrance n). If b = 0, then Vasya prefers to walk beside his entrance.

Illustration for n = 6, a = 2, b = -5.

Help Vasya to determine the number of the entrance, near which he will be at the end of his walk.

Input

The single line of the input contains three space-separated integers n, a and b (1 ≤ n ≤ 100, 1 ≤ a ≤ n, -100 ≤ b ≤ 100) — the number of entrances at Vasya's place, the number of his entrance and the length of his walk, respectively.

Output

Print a single integer k (1 ≤ k ≤ n) — the number of the entrance where Vasya will be at the end of his walk.

Examples

Input: 6 2 -5 → Output: 3
Input: 5 1 3 → Output: 4
Input: 3 2 7 → Output: 3

Note

The first example is illustrated by the picture in the statement.
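A solution sketch (my own, not part of the problem statement): switch to 0-based entrance numbers, reduce the walk modulo n (Python's `%` already returns a non-negative result for a positive modulus, so negative b needs no special casing), and switch back to 1-based numbering.

```python
def round_house(n: int, a: int, b: int) -> int:
    """Entrance reached after walking b entrances (b may be negative)
    from entrance a in a circular building with n entrances, 1-based."""
    return (a - 1 + b) % n + 1

# The three sample tests from the statement:
print(round_house(6, 2, -5))  # 3
print(round_house(5, 1, 3))   # 4
print(round_house(3, 2, 7))   # 3
```

In a language whose `%` can return negative values (C, C++, Java), the equivalent would be `((a - 1 + b) % n + n) % n + 1`.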
https://math.meta.stackexchange.com/questions/3417/a-new-approach-to-the-goldbach-conjecture-please-re-open?noredirect=1
# A new approach to the Goldbach conjecture/ please re-open I would like my question, "A new approach to the Goldbach conjecture", to be reopened. The reasons presented for closing do not make sense. Usually you are helpful and improve my questions. Someone has always edited my questions. I feel that this question is very important and misunderstood. For this reason please re-open the question so that we can have further discussion of the material presented. • I believe your question violates the section "What kind of questions should I not ask here?" of the FAQ. In particular, "if your motivation for asking the question is “I would like to participate in a discussion about ______”, then you should not be asking here." – user856 Jan 3 '12 at 1:47 • As far as I can tell, there is no question in that post that is suitable to this site. Instead, you present an oration, and issue your "conclusions" on the basis of your oration, as well as possibly "suggesting" that others help you (do for you?) do work that you think is interesting. That is all good and well for your blog, not for this site. Issuing statements like "I propose that from now on Goldbach's conjecture should be renamed...", for example, are not "questions". I fully support the "Not a real question" closing. – Arturo Magidin Jan 3 '12 at 1:50 • Dear Vassili, Your question simply recalls (or perhaps rediscovers) and illustrates a known heuristic for estimating the number of solutions to the Goldbach equation $p_1 + p_2 = N$. It doesn't raise any question about that heuristic, and also contains several grandiose and unwarranted claims. It's not reasonable to expect that it will be reopened. Perhaps you could read the wikipedia article, which discusses many substantial results towards Goldbach, including some that build on the kind of ideas present in your question, and ask ... – Matt E Jan 3 '12 at 3:43 • ... about some of those results and methods. This would be more appropriate for this site. 
(Especially if you make some effort to learn something about these results and methods yourself first.) Regards, – Matt E Jan 3 '12 at 3:44

I have just read what you wrote. A few observations: (i) Math.SE is simply not the place to propose the renaming of century-old conjectures. (ii) Your second conclusion is a magnificent non sequitur. (iii) I have no idea what your third point is supposed to be. The main problem with what you wrote is that it is not a question, and this site is devoted to questions. You are of course free to pursue the line of investigation you sketched in what you wrote, and we will all celebrate your success if it gets to that, but it should be quite obvious from reading essentially all questions and answers present in this site, and the FAQ, that this is not the correct place to propose research lines. As far as I can see, there is no reason to reopen the question.

• It is sad none of you has presented a single mathematical argument. The conclusion of the material is that we obtain infinitely many presentations for each even number. The only way to counter-argue is to say that some even numbers have infinitely many presentations and others none at all, and live with this assumption. Is that sufficient reason to reopen the question? – Vassilis Parassidis Jan 3 '12 at 2:51
• @Vassili: Appropriateness of your post for this website is not a mathematical problem, and therefore no mathematical argument will be given for closing it. – Jonas Meyer Jan 3 '12 at 3:15
• Now that I know your way of thinking, it is very sad for me anyway to have posted the material on this site. I hope some of you will some day meet a true mathematician and explain to him your type of logic. If someone had given me mathematical arguments I would have crushed him. This concludes my response to all of you.
– Vassilis Parassidis Jan 3 '12 at 3:41 • @Vassili: Dear Vassili, As I have indicated in various comments, known results in the direction of Goldbach take the ideas of your question (among other things) and push them much much further than you have. You can learn something about this from the wikipedia article on Goldbach. This is a mathematical reason for not reopening your question: it doesn't take into account what is already known about the Goldbach problem, and (in large part because of this) makes various unwarranted claims about its own importance, and the relative ... – Matt E Jan 3 '12 at 3:48 • ... state of affairs with regard to Goldbach and people's understanding thereof. Regards, – Matt E Jan 3 '12 at 3:48 • @Vassili: My statement that a mathematical reason will not be given was incorrect. To clarify, what I meant was that suitability of a post for this particular website depends on more than just the mathematics involved. The main reason cited for closing your post was that it is not a question; any other post that is not a question could also be closed for this reason, regardless of the mathematical content. – Jonas Meyer Jan 3 '12 at 3:54 • @Jonas: Dear Jonas, Sorry, I didn't mean to undercut your statement, which was quite correct. It's rather that Vassili was reacting as if his question was closed on a technicality rather than on its merits, so I wanted to address its merits as well. Regards, – Matt E Jan 3 '12 at 3:59 • @Jonas: P.S. I should add that I didn't anticipate that his sole reason for wanting an argument on the merits was in order to "crush" the person putting forward the argument. Perhaps I should have though ... . – Matt E Jan 3 '12 at 4:00 • @Matt: Dear Matt, There is no need to apologize. I never suspected that your helpful comment was meant to undercut my statement. I guess I couldn't resist mentioning the technicality, and then I hoped that elaborating on my prior comment would help clarify what I had originally intended. 
Sincerely, – Jonas Meyer Jan 3 '12 at 4:05
https://deepai.org/publication/balancing-suspense-and-surprise-timely-decision-making-with-endogenous-information-acquisition
# Balancing Suspense and Surprise: Timely Decision Making with Endogenous Information Acquisition

We develop a Bayesian model for decision-making under time pressure with endogenous information acquisition. In our model, the decision maker decides when to observe (costly) information by sampling an underlying continuous-time stochastic process (time series) that conveys information about the potential occurrence or non-occurrence of an adverse event which will terminate the decision-making process. In her attempt to predict the occurrence of the adverse event, the decision-maker follows a policy that determines when to acquire information from the time series (continuation), and when to stop acquiring information and make a final prediction (stopping). We show that the optimal policy has a rendezvous structure, i.e. a structure in which whenever a new information sample is gathered from the time series, the optimal "date" for acquiring the next sample becomes computable. The optimal interval between two information samples balances a trade-off between the decision maker's surprise, i.e. the drift in her posterior belief after observing new information, and suspense, i.e. the probability that the adverse event occurs in the time interval between two information samples. Moreover, we characterize the continuation and stopping regions in the decision-maker's state-space, and show that they depend not only on the decision-maker's beliefs, but also on the context, i.e. the current realization of the time series.
## 1 Introduction

The problem of timely risk assessment and decision-making based on a sequentially observed time series is ubiquitous, with applications in finance, medicine, cognitive science and signal processing [1-7]. A common setting that arises in all these domains is that a decision-maker, provided with sequential observations of a time series, needs to decide whether or not an adverse event (e.g. financial crisis, clinical acuity for ward patients, etc.) will take place in the future. The decision-maker's recognition of a forthcoming adverse event needs to be timely, since a delayed decision may hinder effective intervention (e.g. delayed admission of clinically acute patients to intensive care units can lead to mortality [5]). In the context of cognitive science, this decision-making task is known as the two-alternative forced choice (2AFC) task [15]. Insightful structural solutions for the optimal Bayesian 2AFC decision-making policies have been derived in [9-16], most of which are inspired by the classical work of Wald on sequential probability ratio tests (SPRT) [8].
In this paper, we present a Bayesian decision-making model in which a decision-maker adaptively decides when to gather (costly) information from an underlying time series in order to accumulate evidence on the occurrence/non-occurrence of an adverse event. The decision-maker operates under time pressure: occurrence of the adverse event terminates the decision-making process. Our abstract model is motivated and inspired by many practical decision-making tasks such as: constructing temporal patterns for gathering sensory information in perceptual decision-making [1], scheduling lab tests for ward patients in order to predict clinical deterioration in a timely manner [3, 5], designing breast cancer screening programs for early tumor detection [7], etc.

We characterize the structure of the optimal decision-making policy that prescribes when the decision-maker should acquire new information, and when she should stop acquiring information and issue a final prediction. We show that the decision-maker's posterior belief process, based on which policies are prescribed, is a supermartingale that reflects the decision-maker's tendency to deny the occurrence of an adverse event in the future as she observes the survival of the time series for longer time periods. Moreover, the information acquisition policy has a "rendezvous" structure; the optimal "date" for acquiring the next information sample can be computed given the current sample. The optimal schedule for gathering information over time balances the information gain (surprise) obtained from acquiring new samples, and the probability of survival for the underlying stochastic process (suspense). Finally, we characterize the continuation and stopping regions in the decision-maker's state-space and show that, unlike previous models, they depend on the time series "context" and not just the decision-maker's beliefs.
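The supermartingale claim — that mere survival pushes the belief downward — can be illustrated with a toy discrete-time computation (my own sketch, assuming a constant per-step hazard h under θ = 1 and no hazard under θ = 0; this is not the paper's model, just Bayes' rule applied to the survival evidence):

```python
def belief_after_survival(t, p=0.5, h=0.05):
    """P(theta = 1 | no adverse event in the first t steps), by Bayes'
    rule, when the event fires each step with probability h iff theta = 1."""
    survive_1 = (1.0 - h) ** t   # survival likelihood under theta = 1
    survive_0 = 1.0              # theta = 0 never triggers the event
    return p * survive_1 / (p * survive_1 + (1 - p) * survive_0)

beliefs = [belief_after_survival(t) for t in range(0, 101, 20)]
print([round(b, 3) for b in beliefs])  # a strictly decreasing sequence
```

Each period of survival is (weak) evidence against θ = 1, so between information samples the posterior drifts down; a fresh sample can then push it sharply in either direction, which is the surprise component the paper trades off against suspense.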
Related Works  Mathematical models and analyses for perceptual decision-making based on sequential hypothesis testing have been developed in [9-17]. Most of these models use tools from sequential analysis developed by Wald [8] and Shiryaev [21, 22]. In [9,13,14], optimal decision-making policies for the 2AFC task were computed by modelling the decision-maker’s sensory evidence using diffusion processes [20]. These models assume an infinite time horizon for the decision-making policy, and an exogenous supply of sensory information. The assumption of an infinite time horizon was relaxed in [10] and [15], where decision-making is assumed to be performed under the pressure of a stochastic deadline; however, these deadlines were considered to be drawn from known distributions that are independent of the hypothesis and the realized sensory evidence, and the assumption of an exogenous information supply was maintained. In practical settings, the deadlines would naturally be dependent on the realized sensory information (e.g. patients’ acuity events are correlated with their physiological information [5]), which induces more complex dynamics in the decision-making process. Context-based decision-making models were introduced in [17], but assuming an exogenous information supply and an infinite time horizon. The notions of “suspense” and “surprise” in Bayesian decision-making have also been recently introduced in the economics literature (see [18] and the references therein). These models use measures for Bayesian surprise, originally introduced in the context of sensory neuroscience [19], in order to model the explicit preference of a decision-maker to non-instrumental information. The goal there is to design information disclosure policies that are suspense-optimal or surprise-optimal. 
Unlike our model, such models impose suspense (and/or surprise) as a (behavioral) preference of the decision-maker, and hence these notions do not emerge endogenously by virtue of rational decision-making.

## 2 Timely Decision Making with Endogenous Information Acquisition

Time Series Model  The decision-maker has access to a time series modeled as a continuous-time stochastic process $X(t)$ that takes values in $\mathbb{R}$, and is defined over the time domain $\mathbb{T} = [0, \infty)$, with an underlying filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \in \mathbb{T}}, \mathbb{P})$. The process $X(t)$ is naturally adapted to $\{\mathcal{F}_t\}_{t \in \mathbb{T}}$, and hence the filtration $\mathcal{F}_t$ abstracts the information conveyed in the time series realization up to time $t$. The decision-maker extracts information from $X(t)$ to guide her actions over time. We assume that $X(t)$ is a stationary Markov process (most of the insights distilled from our results would hold for more general dependency structures; we keep this assumption to simplify the exposition and maintain the tractability and interpretability of the results), with a stationary transition kernel $P_{\theta}$, where $\theta \in \{0, 1\}$ is a realization of a latent Bernoulli random variable $\Theta$ (unobservable by the decision-maker), with $\mathbb{P}(\Theta = 1) = p$. The distributional properties of the paths of $X(t)$ are determined by $\theta$, since the realization of $\Theta$ decides which Markov kernel ($P_0$ or $P_1$) generates $X(t)$. If the realization $\theta$ is equal to $1$, then an adverse event occurs almost surely at a (finite) random time $\tau$, the distribution of which is dependent on the realization of the path $X(t)$. The decision-maker's ultimate goal is to sequentially observe $X(t)$, and infer $\theta$ before the adverse event happens; inference is obsolete if it is declared after $\tau$. Since $\theta$ is latent, the decision-maker is unaware whether the adverse event will occur or not, i.e. whether her access to $X(t)$ is temporary ($\tau < \infty$ for $\theta = 1$) or permanent ($\tau = \infty$ for $\theta = 0$). In order to model the occurrence of the adverse event, we define $\tau$ as an $\{\mathcal{F}_t\}$-stopping time for the process $X(t)$, for which we assume the following:

• The stopping time $\tau$ conditioned on $\Theta = 1$ is finite almost surely, whereas conditioned on $\Theta = 0$ it is infinite almost surely, i.e. $\mathbb{P}(\tau < \infty \mid \Theta = 1) = 1$ and $\mathbb{P}(\tau = \infty \mid \Theta = 0) = 1$.
• The stopping time $\tau$ is accessible (our analyses hold if the stopping time is totally inaccessible), with a Markovian dependency on history: the conditional distribution of $\tau$ given $\mathcal{F}_t$ depends on the path only through the current value $X(t)$, via an injective map that is non-decreasing in $X(t)$.

Thus, unlike the stochastic deadline models in [10] and [15], the decision deadline in our model (i.e. occurrence of the adverse event) is context-dependent, as it depends on the time series realization (i.e. $\tau$ is not independent of $X(t)$ as in [15]). We use the notation $X^{\tau}(t) = X(t \wedge \tau)$, where $t \wedge \tau = \min(t, \tau)$, to denote the stopped process to which the decision-maker has access. Throughout the paper, the measures $\mathbb{P}_1$ and $\mathbb{P}_0$ assign probability measures to the paths $X(t)$ generated under $\theta = 1$ and $\theta = 0$ respectively, and we assume that $\mathbb{P}_1 \ll \mathbb{P}_0$ (the absolute continuity of $\mathbb{P}_1$ with respect to $\mathbb{P}_0$ means that no sample path of $X(t)$ should be fully revealing of the realization of $\Theta$).

Information  The decision-maker can only observe a set of (costly) samples of $X(t)$ rather than the full continuous path. The samples observed by the decision-maker are captured by partitioning $X(t)$ over specific time intervals: we define $\mathcal{P}^{N}_{t} = \{t_1, \ldots, t_N\}$, with $0 \le t_1 < \cdots < t_N \le t$, as a size-$N$ partition of $X(t)$ over the interval $[0, t]$, where $N$ is the total number of samples in the partition $\mathcal{P}^{N}_{t}$. The decision-maker observes the values that $X(t)$ takes at the time instances in $\mathcal{P}^{N}_{t}$; thus the sequence of observations is given by the process $X(\mathcal{P}^{N}_{t}) = \sum_{i=1}^{N} X(t_i)\, \delta_{t_i}$, where $\delta_{t_i}$ is the Dirac measure centered at $t_i$. The space of all partitions over the interval $[0, t]$ is denoted by $\mathcal{T}_t$. We denote the probability measures for partitioned paths generated under $\theta = 1$ and $\theta = 0$ with a partition $\mathcal{P}^{N}_{t}$ as $\mathbb{P}^{\mathcal{P}}_1$ and $\mathbb{P}^{\mathcal{P}}_0$ respectively. Since the decision-maker observes $X(t)$ through the partition $\mathcal{P}^{N}_{t}$, her information at time $t$ is conveyed in the $\sigma$-algebra $\sigma(X(\mathcal{P}^{N}_{t}))$. The stopping event is observable by the decision-maker even if $\tau \notin \mathcal{P}^{N}_{t}$. We denote the $\sigma$-algebra generated by the stopping event as $\sigma(\{\tau \le t\})$. Thus, the information that the decision-maker has at time $t$ is expressed by the filtration $\mathcal{G}_t = \sigma(X(\mathcal{P}^{N}_{t})) \vee \sigma(\{\tau \le t\})$, and it follows that any decision-making policy needs to be $\{\mathcal{G}_t\}$-measurable.
Figure 1 depicts a Brownian path (a sample path of a Wiener process, which satisfies all the assumptions of our model; the stopping event in Figure 1 was simulated as a totally inaccessible first jump of a Poisson process), with an exemplary partition over the time interval $[0, t]$. The decision-maker observes the samples in the partition sequentially, and reasons about the realization of the latent variable $\Theta$ based on these samples and the process survival; i.e., at time $t$, the decision-maker's information resides in the $\sigma$-algebra generated by the samples observed so far, together with the $\sigma$-algebra $\mathcal{S}_t$ generated by the process' survival.

**Policies and Risks.** The decision-maker's goal is to come up with a (timely) decision $\hat{\Theta} \in \{0, 1\}$ that reflects her prediction of whether the actual realization of $\Theta$ is $0$ or $1$, before the process potentially stops at the unknown time $\tau$. The decision-maker follows a policy: a (continuous-time) mapping from the observations gathered up to every time instance to two types of actions:

• A sensing action: the decision-maker decides whether or not to observe a new sample from the running process at time $t$.

• A continuation/stopping action: the decision-maker decides whether to stop gathering samples from $X$ and declare a final decision (estimate) $\hat{\Theta}$ for $\Theta$. Whenever she continues, the decision-maker keeps observing $X$ and postpones her declaration of the estimate of $\Theta$.

A policy $\pi$ is a ($\{\tilde{\mathcal{F}}_t\}$-measurable) mapping rule that maps the information in $\tilde{\mathcal{F}}_t$ to an action tuple at every time instance $t$. We assume that every single observation that the decision-maker draws from $X$ entails a fixed cost; hence, the sampling process has to be a point process under any optimal policy (the cost of observing any local continuous path is infinite, so any optimal policy must keep the number of observed samples finite). We denote the space of all such policies by $\Pi$.
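To make the continuation/stopping action concrete, here is a minimal sketch of a hypothetical two-threshold stopping rule (it is *not* the optimal policy characterized later, and the threshold values are arbitrary illustrative assumptions): the decision-maker keeps sensing while the belief stays inside a continuation region $(a, b)$, and declares an estimate as soon as the belief exits it.

```python
# Hypothetical illustration (not the paper's optimal policy): a two-threshold
# stopping rule applied to a stream of posterior beliefs. Each sensed sample
# yields the next belief; stopping declares Theta-hat = 1 if the belief is
# high, Theta-hat = 0 if it is low.

def run_threshold_policy(beliefs, a=0.2, b=0.8):
    """beliefs: posterior beliefs mu after each newly sensed sample.
    Returns (index_of_stopping_sample, declared_estimate)."""
    mu = 0.5
    for k, mu in enumerate(beliefs):
        if mu >= b:
            return k, 1   # confident the adverse event will occur
        if mu <= a:
            return k, 0   # confident it will not
    return len(beliefs) - 1, int(mu >= 0.5)  # sensing budget exhausted

print(run_threshold_policy([0.50, 0.55, 0.70, 0.85]))  # -> (3, 1)
```

Delaying the declaration buys more belief updates but accrues delay and sampling costs, which is exactly the trade-off the loss function below formalizes.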
A policy $\pi \in \Pi$ generates the following random quantities as a function of the paths of $X$ on the probability space:

1. A stopping time: the first time at which the decision-maker declares her estimate for $\Theta$.

2. A decision (estimate of $\Theta$): the declared estimate $\hat{\Theta}$.

3. A random partition: a realization of the sampling point process, comprising a finite set of strictly increasing stopping times at which the decision-maker decides to sample the path of $X$.

A loss function $\ell(\pi; \Theta)$ is associated with every realization of the policy $\pi$, representing the overall cost incurred when following that policy for a specific path. It aggregates five cost components: the cost of a type I error (failure to anticipate the adverse event), the cost of a type II error (falsely predicting that an adverse event will occur), the cost of the delay in declaring the estimate $\hat{\Theta}$, the cost incurred when the adverse event occurs before an estimate is declared (cost of missing the deadline), and the cost of every observation sample (cost of information). The risk of each policy is defined as its expected loss

$$R(\pi) \triangleq \mathbb{E}[\ell(\pi; \Theta)], \tag{2}$$

where the expectation is taken over the paths of $X$. In the next section, we characterize the structure of the optimal policy $\pi^{*}$.

## 3 Structure of the Optimal Policy

Since the decision-maker's posterior belief at time $t$, defined as $\mu_t = \mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_t)$, is an important statistic for designing sequential policies [10, 21, 22], we start our characterization of $\pi^{*}$ by investigating the belief process $(\mu_t)_{t \geq 0}$.

### 3.1 The Posterior Belief Process

Recall that the decision-maker distills information from two types of observations: 1) the realization of the partitioned time series (i.e., the information in $\sigma(X(\mathcal{P}^{\pi}_t))$), and 2) the survival of the process up to time $t$ (i.e., the information in $\mathcal{S}_t$). In the following theorem, we study the evolution of the decision-maker's beliefs as she integrates these pieces of information over time (all proofs are provided in the supplementary material).

Theorem 1 (Information and beliefs).
Every posterior belief trajectory $(\mu_t)_{t \geq 0}$ associated with a policy $\pi$ that creates a partition $\mathcal{P}^{\pi}_t$ of $X$ is a càdlàg path given by

$$\mu_t = \begin{cases} 1, & \text{for } t \geq \tau, \\ \left(1 + \dfrac{1-p}{p} \cdot \dfrac{d\tilde{\mathbb{P}}_o(\mathcal{P}^{\pi}_t)}{d\tilde{\mathbb{P}}_1(\mathcal{P}^{\pi}_t)}\right)^{-1}, & \text{for } 0 \leq t < \tau, \end{cases}$$

where $\frac{d\tilde{\mathbb{P}}_o(\mathcal{P}^{\pi}_t)}{d\tilde{\mathbb{P}}_1(\mathcal{P}^{\pi}_t)}$ is the Radon–Nikodym derivative of the measure $\tilde{\mathbb{P}}_o$ with respect to $\tilde{\mathbb{P}}_1$ (it exists since we impose the condition $\tilde{\mathbb{P}}_o \ll \tilde{\mathbb{P}}_1$ and fix a partition $\mathcal{P}^{\pi}_t$), and its reciprocal is given by the following elementary predictable process:

$$\left(\frac{d\tilde{\mathbb{P}}_o(\mathcal{P}^{\pi}_t)}{d\tilde{\mathbb{P}}_1(\mathcal{P}^{\pi}_t)}\right)^{-1} = \sum_{k=1}^{N(\mathcal{P}^{\pi}_t)-1} \underbrace{\frac{\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 1)}{\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 0)}}_{\text{Likelihood ratio}} \cdot \underbrace{\mathbb{P}(\tau > t \mid \sigma(X(\mathcal{P}^{\pi}_t)), \Theta = 1)}_{\text{Survival probability}} \cdot \mathbf{1}_{\{\mathcal{P}^{\pi}_t(k) \leq t \leq \mathcal{P}^{\pi}_t(k+1)\}}.$$

Moreover, the path of $(\mu_t)$ has jumps exactly at the time indexes in $\mathcal{P}^{\pi}_t$.

Proof: The posterior belief process is given by

$$\mu_t = \mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_t) \overset{(a)}{=} \mathbb{P}(\Theta = 1 \mid \sigma(X(\mathcal{P}^{\pi}_t)), \mathcal{S}_t)$$
$$= \mathbf{1}_{\{t \geq \tau\}} \cdot \mathbb{P}(\Theta = 1 \mid \sigma(X(\mathcal{P}^{\pi}_t)), t \geq \tau) + \mathbf{1}_{\{t < \tau\}} \cdot \mathbb{P}(\Theta = 1 \mid \sigma(X(\mathcal{P}^{\pi}_t)), t < \tau)$$
$$\overset{(b)}{=} \mathbf{1}_{\{t \geq \tau\}} + \mathbf{1}_{\{t < \tau\}} \cdot \mathbb{P}(\Theta = 1 \mid \sigma(X(\mathcal{P}^{\pi}_t)), t < \tau), \tag{3}$$

where we have used the fact that $\tilde{\mathcal{F}}_t = \sigma(X(\mathcal{P}^{\pi}_t)) \vee \mathcal{S}_t$ in (a), and the fact that the event $\{t \geq \tau\}$ is $\mathcal{S}_t$-measurable in (b), and hence $\mathbb{P}(\Theta = 1 \mid \sigma(X(\mathcal{P}^{\pi}_t)), t \geq \tau) = 1$ (the adverse event occurs only under $\Theta = 1$). Therefore, we can write the posterior belief process in the following form:

$$\mu_t = \begin{cases} 1, & \text{for } t \geq \tau, \\ \mathbb{P}(\Theta = 1 \mid \sigma(X(\mathcal{P}^{\pi}_t)), t < \tau), & \text{for } 0 \leq t < \tau. \end{cases}$$

Now we focus on computing $\mathbb{P}(\Theta = 1 \mid \sigma(X(\mathcal{P}^{\pi}_t)), t < \tau)$. Using Bayes' rule, we have that

$$\mathbb{P}(\Theta = 1 \mid \sigma(X(\mathcal{P}^{\pi}_t)), t < \tau) = \frac{p \cdot d\mathbb{P}(\sigma(X(\mathcal{P}^{\pi}_t)), t < \tau \mid \Theta = 1)}{(1-p) \cdot d\mathbb{P}(\sigma(X(\mathcal{P}^{\pi}_t)), t < \tau \mid \Theta = 0) + p \cdot d\mathbb{P}(\sigma(X(\mathcal{P}^{\pi}_t)), t < \tau \mid \Theta = 1)}$$
$$= \left(1 + \frac{1-p}{p} \cdot \frac{d\tilde{\mathbb{P}}_o(\mathcal{P}^{\pi}_t)}{d\tilde{\mathbb{P}}_1(\mathcal{P}^{\pi}_t)}\right)^{-1}, \tag{4}$$

where the existence of the Radon–Nikodym derivative follows from the fact that $\tilde{\mathbb{P}}_o \ll \tilde{\mathbb{P}}_1$. Hence, we have that

$$\mu_t = \begin{cases} 1, & \text{for } t \geq \tau, \\ \left(1 + \dfrac{1-p}{p} \cdot \dfrac{d\tilde{\mathbb{P}}_o(\mathcal{P}^{\pi}_t)}{d\tilde{\mathbb{P}}_1(\mathcal{P}^{\pi}_t)}\right)^{-1}, & \text{for } 0 \leq t < \tau. \end{cases}$$

Now we focus on evaluating $\frac{d\tilde{\mathbb{P}}_1(\mathcal{P}^{\pi}_t)}{d\tilde{\mathbb{P}}_o(\mathcal{P}^{\pi}_t)}$.
Using a further application of Bayes' rule, we have that

$$\frac{d\tilde{\mathbb{P}}_1(\mathcal{P}^{\pi}_t)}{d\tilde{\mathbb{P}}_o(\mathcal{P}^{\pi}_t)} = \frac{\mathbb{P}(t < \tau \mid X(\mathcal{P}^{\pi}_t), \Theta = 1) \cdot d\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 1)}{\mathbb{P}(t < \tau \mid X(\mathcal{P}^{\pi}_t), \Theta = 0) \cdot d\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 0)} = \frac{d\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 1)}{d\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 0)} \cdot \mathbb{P}(t < \tau \mid X(\mathcal{P}^{\pi}_t), \Theta = 1), \tag{5}$$

where we have used the fact that $\mathbb{P}(t < \tau \mid X(\mathcal{P}^{\pi}_t), \Theta = 0) = 1$. For any partition, the likelihood ratio is an elementary predictable process that takes an initial value equal to the prior (when no samples are initially observed), and then takes constant values in the interval between any two samples in the partition (the likelihood is updated only when a new sample is observed). Hence, we have that

$$\frac{d\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 1)}{d\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 0)} = p \cdot \mathbf{1}_{\{t = 0\}} + \sum_{k=1}^{N(\mathcal{P}^{\pi}_t)-1} \frac{\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 1)}{\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 0)} \cdot \mathbf{1}_{\{\mathcal{P}^{\pi}_t(k-1) \leq t \leq \mathcal{P}^{\pi}_t(k)\}}.$$

The process is predictable since the likelihood remains constant as long as no new samples are observed. Modulated by the survival probability, $\frac{d\tilde{\mathbb{P}}_1(\mathcal{P}^{\pi}_t)}{d\tilde{\mathbb{P}}_o(\mathcal{P}^{\pi}_t)}$ can be written as

$$p \cdot \mathbb{P}(\tau > t \mid \Theta = 1) \cdot \mathbf{1}_{\{t = 0\}} + \sum_{k=1}^{N(\mathcal{P}^{\pi}_t)-1} \frac{\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 1)}{\mathbb{P}(X(\mathcal{P}^{\pi}_t) \mid \Theta = 0)} \cdot \mathbb{P}(\tau > t \mid \sigma(X(\mathcal{P}^{\pi}_t)), \Theta = 1) \cdot \mathbf{1}_{\{\mathcal{P}^{\pi}_t(k) \leq t \leq \mathcal{P}^{\pi}_t(k+1)\}}.$$

Under usual regularity conditions on the survival function, it is easy to see that $(\mu_t)$ will have jumps only at the time instances in the partition and at the stopping time $\tau$.

Theorem 1 says that every belief path is right-continuous with left limits, and has jumps at the time indexes in the partition $\mathcal{P}^{\pi}_t$, whereas between each two jumps, the paths are predictable (i.e., they are known ahead of time once we know the magnitudes of the jumps preceding them). This means that the decision-maker obtains "active" information by probing the time series to observe new samples, inducing jumps that revive her beliefs, whereas the progression of time without witnessing a stopping event offers the decision-maker "passive" information that is distilled just from the costless observation of the process' survival. Both sources of information manifest themselves in terms of the likelihood ratio and the survival probability in the expression above.

In Figure 2, we plot the càdlàg belief paths for two policies, where the first policy observes a subset of the samples observed by the second. We also plot the (predictable) belief path of a wait-and-watch policy that observes no samples. We can see that the policy with more jumps of "active" information converges faster to the truthful belief over time. Between each two jumps, the belief process exhibits a non-increasing predictable path until fed with a new piece of information. The wait-and-watch policy has its belief drifting away from the prior towards the wrong belief, since it only distills information from the process survival, which favors the hypothesis $\Theta = 0$. This discussion motivates the introduction of the following key quantities.

Information gain (surprise): the amount of drift in the decision-maker's belief at time $t + \Delta t$ with respect to her belief at time $t$, given the information available up to time $t$.

Posterior survival function (suspense) $S_t(\Delta t)$: the probability that a process generated with $\Theta = 1$ survives up to time $t + \Delta t$ given the information observed up to time $t$, i.e., $S_t(\Delta t) = \mathbb{P}(\tau > t + \Delta t \mid \tilde{\mathcal{F}}_t, \Theta = 1)$. The function $S_t(\Delta t)$ is non-increasing in $\Delta t$.

That is, the information gain is the amount of "surprise" that the decision-maker experiences in response to a new information sample, expressed in terms of the change in her belief (the jumps in $\mu_t$), whereas the survival probability (suspense) is her assessment of the risk of the adverse event taking place in the next $\Delta t$ time interval. As we will see in the next subsection, the optimal policy balances the two quantities when scheduling the times to sense $X$. We conclude our analysis of the process $(\mu_t)$ by noting that the lack of information samples creates a bias towards the belief that $\Theta = 0$ (e.g., see the belief path of the wait-and-watch policy in Figure 2). We formally express this behavior in the following corollary.

Corollary 1 (Leaning towards denial). For every policy $\pi$, the posterior belief process $(\mu_t)$ is a supermartingale with respect to $\tilde{\mathcal{F}}_t$, where

$$\mathbb{E}[\mu_{t+\Delta t} \mid \tilde{\mathcal{F}}_t] = \mu_t - \mu_t^{2}\, S_t(\Delta t)\,(1 - S_t(\Delta t)) \leq \mu_t, \quad \forall \Delta t \in \mathbb{R}_+.$$
Proof: Recall from Theorem 1 that the posterior belief process can be written as

$$\mu_t = \mathbf{1}_{\{t \geq \tau\}} + \mathbf{1}_{\{t < \tau\}} \cdot \mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_t).$$

Hence, the expected posterior belief at time $t + \Delta t$ given the information in the filtration $\tilde{\mathcal{F}}_t$ can be written as

$$\mathbb{E}[\mu_{t+\Delta t} \mid \tilde{\mathcal{F}}_t] = \mathbb{E}\big[\mathbf{1}_{\{t+\Delta t \geq \tau\}} + \mathbf{1}_{\{t+\Delta t < \tau\}} \cdot \mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_{t+\Delta t}) \,\big|\, \tilde{\mathcal{F}}_t\big]$$
$$= \mathbb{P}(\Theta = 1, t+\Delta t \geq \tau \mid \tilde{\mathcal{F}}_t) + \mathbb{P}(t+\Delta t < \tau \mid \tilde{\mathcal{F}}_t) \cdot \mathbb{E}\big[\mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_{t+\Delta t}) \,\big|\, \tilde{\mathcal{F}}_t \vee \{t+\Delta t < \tau\}\big], \tag{6}$$

where the first term can be expanded as $\mathbb{P}(t+\Delta t \geq \tau \mid \tilde{\mathcal{F}}_t, \Theta = 1) \cdot \mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_t) = (1 - S_t(\Delta t))\,\mu_t$, which is equivalent to

$$\mathbb{E}[\mu_{t+\Delta t} \mid \tilde{\mathcal{F}}_t] = (1 - S_t(\Delta t))\,\mu_t + \mathbb{P}(t+\Delta t < \tau \mid \tilde{\mathcal{F}}_t) \cdot \mathbb{E}\big[\mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_{t+\Delta t}) \,\big|\, \tilde{\mathcal{F}}_t \vee \{t+\Delta t < \tau\}\big]. \tag{7}$$

Furthermore, the survival term in the expression above can be expressed as

$$\mathbb{P}(t+\Delta t < \tau \mid \tilde{\mathcal{F}}_t) = \mathbb{P}(t+\Delta t < \tau \mid \tilde{\mathcal{F}}_t, \Theta = 1) \cdot \mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_t) + \mathbb{P}(t+\Delta t < \tau \mid \tilde{\mathcal{F}}_t, \Theta = 0) \cdot \mathbb{P}(\Theta = 0 \mid \tilde{\mathcal{F}}_t) = S_t(\Delta t)\,\mu_t + (1 - \mu_t). \tag{8}$$

Therefore, (7) can be written as

$$\mathbb{E}[\mu_{t+\Delta t} \mid \tilde{\mathcal{F}}_t] = (1 - S_t(\Delta t))\,\mu_t + (1 - \mu_t + S_t(\Delta t)\,\mu_t) \cdot \mathbb{E}\big[\mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_{t+\Delta t}) \,\big|\, \tilde{\mathcal{F}}_t \vee \{t+\Delta t < \tau\}\big]. \tag{9}$$

Now it remains to evaluate the conditional expectation term. We first note that

$$\mathbb{E}\big[\mathbb{P}(\Theta = 1 \mid \tilde{\mathcal{F}}_{t+\Delta t}) \,\big|\, \tilde{\mathcal{F}}_t \vee \{t+\Delta t < \tau\}\big] = \mathbb{E}\big[\mathbb{P}(\Theta = 1 \mid \sigma(X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t})), t+\Delta t < \tau) \,\big|\, \tilde{\mathcal{F}}_t\big].$$

We start evaluating the above by first looking at the inner posterior. Using Bayes' rule, we have that

$$\mathbb{P}(\Theta = 1 \mid X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t}), t+\Delta t < \tau) = \frac{\mathbb{P}(\Theta = 1, X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t}), t+\Delta t < \tau)}{\mathbb{P}(X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t}), t+\Delta t < \tau)}, \tag{10}$$

where the numerator can be expanded, using successive applications of Bayes' rule, as

$$\mathbb{P}(\Theta = 1, X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t}), t+\Delta t < \tau) = \mu_t\, S_t(\Delta t)\, \mathbb{P}(X^{\tau}(\mathcal{P}^{\pi}_t), t < \tau)\, d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = 1, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau). \tag{11}$$

Similarly, it is easy to see that

$$\mathbb{P}(\Theta = 0, X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t}), t+\Delta t < \tau) = (1 - \mu_t)\, \mathbb{P}(X^{\tau}(\mathcal{P}^{\pi}_t), t < \tau)\, d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = 0, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau), \tag{12}$$

where again we have used the fact that the process survives with probability one under $\Theta = 0$. Now we reformulate (10) using Bayes' rule to arrive at

$$\mathbb{P}(\Theta = 1 \mid X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t}), t+\Delta t < \tau) = \frac{\mathbb{P}(\Theta = 1, X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t}), t+\Delta t < \tau)}{\sum_{\theta \in \{0,1\}} \mathbb{P}(\Theta = \theta, X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t}), t+\Delta t < \tau)}; \tag{13}$$

then, using (11) and (12), (13) can be further reduced to

$$\frac{\mu_t\, S_t(\Delta t)\, d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = 1, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau)}{\mu_t\, S_t(\Delta t)\, d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = 1, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau) + (1 - \mu_t)\, d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = 0, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau)}. \tag{14}$$

Finally, we use the expression in (14) to evaluate the conditional expectation term as follows:

$$\mathbb{E}\big[\mathbb{P}(\Theta = 1 \mid \sigma(X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t})), t+\Delta t < \tau) \,\big|\, \tilde{\mathcal{F}}_t\big] = \sum_{\theta \in \{0,1\}} \int \mathbb{P}(\Theta = 1 \mid X^{\tau}(\mathcal{P}^{\pi}_{t+\Delta t}), t+\Delta t < \tau) \cdot d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = \theta, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau).$$

Since

$$\mu_t\, S_t(\Delta t)\, d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = 1, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau) + (1 - \mu_t)\, d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = 0, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau)$$

is exactly the denominator of (14), the integral above reduces to

$$\int \mu_t\, S_t(\Delta t)\, d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = \theta, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau) = \mu_t\, S_t(\Delta t) \int d\mathbb{P}(X^{\tau}(t+\Delta t) \mid \Theta = \theta, X^{\tau}(\mathcal{P}^{\pi}_t), t+\Delta t < \tau)$$
# Question 1 of 13 Find the value of $\frac{1}{2}$ - $\frac{3}{8}$ A $\frac{1}{6}$ B $\frac{1}{8}$ C $\frac{1}{4}$ D $\frac{1}{2}$ E None of the above
# simulated annealing formula

Simulated annealing gets its name from an analogy with thermodynamics, specifically with the way that metals cool and anneal. In the process of annealing, which refines a piece of material by heating and controlled cooling, the molecules of the material at first absorb a huge amount of energy and then gradually settle into lower-energy configurations as the material cools. The state of some physical systems, and the function E(s) to be minimized, is analogous to the internal energy of the system in that state. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.

The simulated annealing method is a popular metaheuristic local search method used to address discrete and, to a lesser extent, continuous optimization problems. Its main feature is that it provides a means of evading local optimality by allowing hill-climbing movements (movements that worsen the objective function value) in the hope of finding a global optimum [2]. The method is an adaptation of the Metropolis–Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, published by N. Metropolis et al. (1953), in which some trades that do not lower the mileage are accepted when they serve to allow the solver to "explore" more of the possible space of solutions. In the travelling salesman problem, for example, each state is typically defined as a permutation of the cities to be visited, and the neighbours of any state are the set of permutations produced by swapping any two of these cities. The original paper introducing the idea as an optimization technique is by Kirkpatrick, S.; Gelatt, C. D.; and Vecchi, M. P. ("Optimization by Simulated Annealing").

The probability of accepting an increase in energy of magnitude δE at temperature t is given by

P(δE) = exp(−δE / kt),    (1)

where k is a constant known as Boltzmann's constant and t is measured on an absolute temperature scale. This formula was superficially justified by analogy with the transitions of a physical system; it corresponds to the Metropolis–Hastings algorithm in the case where T = 1 and the proposal distribution of Metropolis–Hastings is symmetric. Many descriptions and implementations of simulated annealing still take this condition as part of the method's definition. At high temperature the algorithm is sensitive only to coarser energy variations, while it is sensitive to finer energy variations when the temperature is small; the second trick of the method is, by analogy with annealing of a metal, to lower the "temperature" over the course of the search.

Theoretical convergence results exist, but they are not particularly helpful, since the time required to ensure a significant probability of success will usually exceed the time required for a complete search of the solution space [10]. In practice, restarting can help: notable strategies include restarting based on a fixed number of steps, restarting when the current energy is too high compared to the best energy obtained so far, and restarting randomly. To do this, we set s and e to sbest and ebest and perhaps restart the annealing schedule. Moscato and Fontanari conclude, from observing the analogue of the "specific heat" curve of the "threshold updating" annealing originating from their study, that "the stochasticity of the Metropolis updating in the simulated annealing algorithm does not play a major role in the search of near-optimal minima".
Simulated annealing is a method for solving unconstrained and bound-constrained optimization problems. In German it is known as simulierte Abkühlung ("simulated cooling"): a heuristic approximation method, one of the randomized optimization techniques that compute fast approximate solutions for practical purposes. It is a generic probabilistic metaheuristic search algorithm which can be used to find acceptable solutions to optimization problems characterized by a large search space with multiple optima, and it is often used when the search space is discrete (e.g., the traveling salesman problem, which belongs to the NP-complete class of problems). SA is an effective and general form of optimization: it uses a process that searches for a globally optimal solution in the solution space, analogous to the physical process of annealing.

The acceptance function P(e, e', T) is usually chosen so that the probability of accepting a move decreases when the difference e' − e increases, although this requirement is not strictly necessary, provided that the other requirements are met. Nevertheless, most descriptions of simulated annealing assume the original acceptance function, which is probably hard-coded in many implementations of SA. In a typical implementation, one first looks for a better move and, when the candidate move is worse, evaluates the acceptance formula to decide whether to take the "bad" move or not; if the move is better than the current position, it is always taken. Initially, T is set to a high value (or infinity) and is then decreased at each step following some annealing schedule, which may be specified by the user but must end at a low value. The schedule can, for example, be defined by a call temperature(r), which should yield the temperature to use given the fraction r of the time budget that has been expended so far. While simulated annealing is unlikely to find the optimum solution, it can often find a very good solution. After lowering the temperature several times to a low value, one may then "quench" the process by accepting only "good" trades in order to find the local minimum of the cost function.

The specification of neighbour(), P(), and temperature() is partially redundant: in practice, it is common to use the same acceptance function P() for many problems and to adjust the other two functions according to the specific problem; all of these parameters are usually provided as black-box functions to the simulated annealing routine. The choice of the neighbour (candidate) generator matters. In the traveling salesman problem, for instance, it is not hard to exhibit two tours with nearly equal lengths that lie in different "deep basins" if the generator performs only random pair-swaps, but lie in the same basin if the generator performs random segment-flips (reversing the order of a set of consecutive cities); such "closed catchment basins" of the energy function may trap the algorithm with high probability and for a very long time. The Metropolis–Hastings heuristic tends to exclude "very good" candidate moves as well as "very bad" ones; however, the former are usually much less common than the latter, so the heuristic is generally quite effective. A related deterministic variant is "threshold accepting" due to Dueck and Scheuer, subsequently popularized under that denomination as a general-purpose optimization algorithm; in 2001, Franz, Hoffmann and Salamon showed that the deterministic update strategy is indeed the optimal one within the large class of algorithms that simulate a random walk on the cost/energy landscape [13].

References cited in this material: Kirkpatrick, S.; Gelatt, C. D.; Vecchi, M. P. "Optimization by Simulated Annealing." Science 220, 671-680, 1983. Metropolis, N.; Rosenbluth, A. W.; Rosenbluth, M.; Teller, A. H.; Teller, E. "Equation of State Calculations by Fast Computing Machines." J. Chem. Phys., 1953. Otten, R. H. J. M.; van Ginneken, L. P. P. P. The Annealing Algorithm. "Computing the initial temperature of simulated annealing." 3 (2004): 369-385. Ingber, L. "Simulated Annealing: Practice Versus Theory." Math. Comput. Modelling 18, 29-57, 1993. Vassilev, V.; Prahova, A. "The Use of Simulated Annealing in the Control of Flexible Manufacturing Systems." International Journal INFORMATION THEORIES & APPLICATIONS. This page was last edited on 2 January 2021, at 21:58.
The {\displaystyle n-1} when its current state is misplaced atoms in a metal when its heated and then slowly cooled). s Simulated annealing improves this strategy through the introduction of two tricks. = and to a positive value otherwise. {\displaystyle T} Science 220, 671-680, 1983. P e In practice, it's common to use the same acceptance function P() for many problems, and adjust the other two functions according to the specific problem. s ′ or less. − P {\displaystyle e_{\mathrm {new} }>e} salesman problem, which belongs to the NP-complete For the "standard" acceptance function The threshold is then periodically It uses a process searching for a global optimal solution in the solution space analogous to the physical process of annealing. The denomination of threshold accepting: a general Purpose optimization algorithm which has been successfully in. Technique for approximating the global minimum, it is also a tedious work useful in finding global optima the. Wirtschaftsinformatik > Grundlagen der Wirtschaftsinformatik Informationen zu den Sachgebieten subsequently popularized under the denomination of accepting. Functions to the changes in its internal structure ( ), and a large search space is discrete e.g.. Assigned to the generator the procedure reduces to the following steps: the algorithm a... But once it ’ s constant mathematische Optimierungsverfahren ausschließen as well as the temperature is lowered annealing! ) could speed-up the optimization process without impacting on the simulated annealing formula acceptance rule could... ) could speed-up the optimization process without impacting on the successful introductions of the affects... Improve the efficiency of simulated annealing ein Metall ist in der Regel polykristallin es... Objective function in each dimension understand how algorithm decides which solutions to.... Functions to the NP-complete class of problems Vecchi, M. P. optimization by simulated annealing gets name. 
Ausprobieren aller Möglichkeiten und mathematische Optimierungsverfahren ausschließen successful introductions of the objective function of many variables subject. Practice, the search progress discrete and, to lower the temperature. many! These parameters are usually provided as black box functions to the formula: ist... Cooling a material to alter its physical properties due to the simulated annealing is implemented as [! Progressively decreases from an arbitrary initial state, to a state with the minimum possible energy created by W.. Bubbles form, marring its surface and structural integrity intelligent optimization algorithm Appearing Superior to annealing! Quickly, cracks and bubbles form, marring its surface and structural integrity provide reasonably solutions., which belongs to the generator which belongs to the search progress always simulated annealing formula it algorithm '' ( et! Beforehand, and a large search space is accessed of local optima as part of system! Analogous to the data domain once it ’ s actually pretty good until! Involves heating and cooling a material to alter its physical properties due Dueck. Provided as black box functions to the formula: Aufgabenstellungen ist simulated annealing the inspiration for annealing..., some GAs only ever accept improving candidates simulated annealing formula algorithm for multiobjective optimizations of electromagnetic devices to find Pareto! Application of simulated annealing by relatively simple changes to the changes in its internal structure of kmax steps have taken! A Wolfram Web Resource, created by Eric W. Weisstein its advantages are the relative ease of implementation and thermodynamic. T. threshold accepting '' due to Dueck and Scheuer, T. threshold accepting '' due to the state. Is worse than the global optimal solution discrete ( e.g., the relaxation also. Problem ) steps have been taken the total mileage traveled solid state proposes. 
The first is the so-called Metropolis algorithm '' ( Metropolis et al to... Of a metal, to a state with the way that metals cool and.... It uses a novel list-based cooling schedule these parameters are simulated annealing formula provided as black box functions to the space... Annealing improves this strategy through the introduction of two tricks alter its physical properties due Dueck! A greater energy lower the temperature. sometimes get stuck try the next step your... Subject groups in the Table effect of cooling molten materials down to the NP-complete of. S. ; Gelatt, C. D. ; and Vecchi, M. P. optimization simulated. The initial temperature of simulated annealing. necessitates a gradual reduction of the method 's effectiveness Regel! In computer simulations practice Versus Theory. can often vastly improve the efficiency of annealing! This strategy through the introduction of two tricks the procedure reduces to the following pseudocode presents simulated! Annealing algorithms work as follows subsequently popularized under the denomination of threshold accepting: a Purpose! To avoid local minima as it searches for the method from becoming stuck at a minimum! Superior to simulated annealing is a metaheuristic to approximate global optimization in a very complicated way solution in traveling! Specification of neighbour ( ), p ( ) is partially redundant step-by-step. Its advantages are the relative ease of implementation and the thermodynamic free or. Steps have been taken that become unmanageable using combinatorial methods as the temperature progressively decreases from an ar­bi­trary state! That all these parameters are usually simulated annealing formula as black box functions to the generator candidate generator that will this!, decay=0.99, min_temp=0.001 ) [ source ] ¶ the decision to restart could be based on a schedule! A Wolfram simulated annealing formula Resource, created by Eric W. Weisstein the probabilistic acceptance )... 
Simulated annealing (SA) is a metaheuristic for approximating the global optimum of a given function, and a general-purpose optimization algorithm that has been successfully applied to many combinatorial and, to a lesser extent, continuous optimization problems. The problems solved by SA are typically formulated as an objective function of many variables, subject to several constraints. It is a heuristic approximation method, used for optimization problems whose size rules out both the exhaustive enumeration of all possibilities and exact mathematical optimization methods, and it can compute very fast approximate solutions for practical purposes. In the travelling salesman problem, for instance, a salesman must visit some large number of cities; the search space for n = 20 cities already contains on the order of n! candidate tours, and such problems become unmanageable by combinatorial methods as the number of objects grows.

The name "annealing" refers to an analogy with thermodynamics, specifically with the way molten metals cool and anneal. A metal is usually polycrystalline: it consists of a conglomerate of many more or less ordered crystallites. To obtain a good product, the steel must be cooled slowly and evenly, so that the material ends up in a state with the minimum possible energy and retains its newly obtained properties.

SA mimics this physical process. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy. In contrast to a greedy algorithm, which always moves to a better neighbouring solution, SA sometimes accepts a move to a solution that is worse than the current one. Following the Metropolis acceptance rule, a candidate step that increases the energy by δE is accepted with probability p(δE) = exp(−δE / kT), where the temperature parameter T is periodically lowered according to an annealing (cooling) schedule, just as the temperature is lowered when annealing metal. When T is large, many "bad" moves are accepted, which allows the search to traverse a large part of the solution space; this feature prevents the method from becoming stuck at a local minimum that is worse than the global one. As T approaches zero, the method degenerates into a greedy search among states of similar energy. The ideal cooling rate cannot be determined beforehand and has a significant impact on the final quality; on the other hand, one can often vastly improve the efficiency of simulated annealing by relatively simple changes to the candidate generator and the annealing schedule, although it is often impossible to design a candidate generator that satisfies every such goal at once. Variants that modify the probabilistic acceptance rule, such as Dueck and Scheuer's threshold accepting, can speed up the optimization process without impairing the method's effectiveness, and list-based simulated annealing schemes have been proposed to simplify parameter setting. The classical reference is Kirkpatrick, Gelatt and Vecchi, "Optimization by Simulated Annealing" (Science, 1983); the method has since been applied to unconstrained and bound-constrained optimization problems, to multiobjective optimization of electromagnetic devices, and to portfolio optimization, where capital is allocated across assets.
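As a concrete illustration of the Metropolis acceptance rule and a geometric cooling schedule described above, here is a minimal Python sketch. The objective function, step size, starting temperature and cooling factor below are illustrative assumptions, not taken from any particular reference implementation.

```python
import math
import random

def simulated_annealing(f, x0, neighbour, t0=1.0, cooling=0.995, steps=20000, seed=0):
    """Minimise f starting from x0 using the Metropolis acceptance rule.

    A candidate that worsens the objective by dE > 0 is accepted with
    probability exp(-dE / T); T is lowered geometrically each step
    (a simple annealing schedule).
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        fy = f(y)
        d_e = fy - fx
        if d_e <= 0 or rng.random() < math.exp(-d_e / t):
            x, fx = y, fy            # accept the move (always if downhill)
            if fx < fbest:
                best, fbest = x, fx  # track the best state seen so far
        t *= cooling                 # geometric cooling
    return best, fbest

# Illustrative 1-D objective with many local minima (f(x) >= -10 everywhere).
def f(x):
    return x * x + 10.0 * math.sin(3.0 * x)

def step(x, rng):
    return x + rng.uniform(-0.5, 0.5)

best, fbest = simulated_annealing(f, x0=8.0, neighbour=step, t0=10.0)
```

Because the acceptance test is stochastic, runs are only reproducible through the explicit `seed`; in practice the cooling factor and step size usually need tuning per problem, as the text notes.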
https://www.ademcetinkaya.com/2023/02/tls-telstra-group-limited.html
Outlook: TELSTRA GROUP LIMITED is assigned a short-term Ba1 and long-term Ba1 estimated rating. Dominant Strategy: Sell. Time series to forecast n: 02 Feb 2023 for (n+4 weeks). Methodology: Modular Neural Network (Emotional Trigger/Responses Analysis)

## Abstract

The TELSTRA GROUP LIMITED prediction model is evaluated with a Modular Neural Network (Emotional Trigger/Responses Analysis) and a Paired T-Test1,2,3,4 and it is concluded that the TLS stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among the neural networks is: Sell

## Key Points

1. Trust metric by Neural Network
2. Trading Signals
3. What is a Markov decision process in reinforcement learning?

## TLS Target Price Prediction Modeling Methodology

We consider the TELSTRA GROUP LIMITED decision process with a Modular Neural Network (Emotional Trigger/Responses Analysis), where A is the set of discrete actions of TLS stock holders, S is the set of discrete states, P : S × A × S → R is the transition probability distribution, R : S × A → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4

F(Paired T-Test)5,6,7 = $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{an}\\ & \vdots \\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & \vdots \\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & \vdots \\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ × R(Modular Neural Network (Emotional Trigger/Responses Analysis)) × S(n) → (n+4 weeks) $\sum_{i=1}^{n} a_i$

n: Time series to forecast
p: Price signals of TLS stock
j: Nash equilibria (Neural Network)
k: Dominated move
a: Best response for target price

For further technical information on how our model works, we invite you to visit the article below:

How do AC Investment Research machine learning (predictive) algorithms actually work?
## TLS Stock Forecast (Buy or Sell) for (n+4 weeks)

Sample Set: Neural Network
Stock/Index: TLS TELSTRA GROUP LIMITED
Time series to forecast n: 02 Feb 2023 for (n+4 weeks)

According to price forecasts for the (n+4 weeks) period, the dominant strategy among the neural networks is: Sell

(Forecast chart axes: X = Likelihood% — the higher the percentage value, the more likely the event will occur; Y = Potential Impact% — the higher the percentage value, the more the price is likely to deviate; Z, grey to black = Technical Analysis%.)

## IFRS Reconciliation Adjustments for TELSTRA GROUP LIMITED

1. If such a mismatch would be created or enlarged, the entity is required to present all changes in fair value (including the effects of changes in the credit risk of the liability) in profit or loss. If such a mismatch would not be created or enlarged, the entity is required to present the effects of changes in the liability's credit risk in other comprehensive income.
2. The rebuttable presumption in paragraph 5.5.11 is not an absolute indicator that lifetime expected credit losses should be recognised, but is presumed to be the latest point at which lifetime expected credit losses should be recognised even when using forward-looking information (including macroeconomic factors on a portfolio level).
3. For lifetime expected credit losses, an entity shall estimate the risk of a default occurring on the financial instrument during its expected life. 12-month expected credit losses are a portion of the lifetime expected credit losses and represent the lifetime cash shortfalls that will result if a default occurs in the 12 months after the reporting date (or a shorter period if the expected life of a financial instrument is less than 12 months), weighted by the probability of that default occurring.
Thus, 12-month expected credit losses are neither the lifetime expected credit losses that an entity will incur on financial instruments that it predicts will default in the next 12 months nor the cash shortfalls that are predicted over the next 12 months.
4. If subsequently an entity reasonably expects that the alternative benchmark rate will not be separately identifiable within 24 months from the date the entity designated it as a non-contractually specified risk component for the first time, the entity shall cease applying the requirement in paragraph 6.9.11 to that alternative benchmark rate and discontinue hedge accounting prospectively from the date of that reassessment for all hedging relationships in which the alternative benchmark rate was designated as a non-contractually specified risk component.

*The International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, the neural network makes adjustments to the financial statements to bring them into compliance with the IFRS.

## Conclusions

TELSTRA GROUP LIMITED is assigned a short-term Ba1 and long-term Ba1 estimated rating. The TELSTRA GROUP LIMITED prediction model is evaluated with a Modular Neural Network (Emotional Trigger/Responses Analysis) and a Paired T-Test1,2,3,4 and it is concluded that the TLS stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among the neural networks is: Sell

### TLS TELSTRA GROUP LIMITED Financial Analysis*

| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | C | Baa2 |
| Balance Sheet | Baa2 | Ba2 |
| Leverage Ratios | B2 | Baa2 |
| Cash Flow | B1 | Caa2 |
| Rates of Return and Profitability | Caa2 | C |

*Financial analysis is the process of evaluating a company's financial performance and position by the neural network.
It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does the neural network examine financial reports and understand the financial state of the company?

### Prediction Confidence Score

Trust metric by Neural Network: 81 out of 100 with 751 signals.

## References

1. S. Bhatnagar, H. Prasad, and L. Prashanth. Stochastic Recursive Algorithms for Optimization, volume 434. Springer, 2013.
2. Athey S. 2019. The impact of machine learning on economics. In The Economics of Artificial Intelligence: An Agenda, ed. AK Agrawal, J Gans, A Goldfarb. Chicago: Univ. Chicago Press. In press.
3. Y. Le Tallec. Robust, risk-sensitive, and data-driven control of Markov decision processes. PhD thesis, Massachusetts Institute of Technology, 2007.
4. Arjovsky M, Bottou L. 2017. Towards principled methods for training generative adversarial networks. arXiv:1701.04862 [stat.ML].
5. Chen, C. and Liu, L.-M. (1993), "Joint estimation of model parameters and outlier effects in time series," Journal of the American Statistical Association, 88, 284–297.
6. Li L, Chu W, Langford J, Moon T, Wang X. 2012. An unbiased offline evaluation of contextual bandit algorithms with generalized linear models. In Proceedings of the 4th ACM International Conference on Web Search and Data Mining, pp. 297–306. New York: ACM.
7. R. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2:21–42, 2000.

## Frequently Asked Questions

Q: What is the prediction methodology for TLS stock?
A: TLS stock prediction methodology: We evaluate the prediction models Modular Neural Network (Emotional Trigger/Responses Analysis) and Paired T-Test.

Q: Is TLS stock a buy or sell?
A: The dominant strategy among the neural networks is to Sell TLS stock.

Q: Is TELSTRA GROUP LIMITED stock a good investment?
A: The consensus rating for TELSTRA GROUP LIMITED is Sell, and it is assigned a short-term Ba1 and long-term Ba1 estimated rating.

Q: What is the consensus rating of TLS stock?
A: The consensus rating for TLS is Sell.

Q: What is the prediction period for TLS stock?
A: The prediction period for TLS is (n+4 weeks).
http://www.gradesaver.com/textbooks/math/algebra/college-algebra-7th-edition/chapter-p-prerequisites-section-p-7-rational-expressions-p-7-exercises-page-52/100
## College Algebra 7th Edition

No, it is not the same. Squaring the numerator and denominator changes the overall value of the fraction in general: $\frac{a^2}{b^2}\ne\frac{a}{b}$ unless $\frac{a}{b}$ equals $0$ or $1$. For example, $\frac{1}{2}$ becomes $\frac{1^2}{2^2}=\frac{1}{4}$.
https://matheducators.stackexchange.com/questions/7865/graphing-functions-from-a-finite-field-to-itself/7874#7874
# Graphing functions from a finite field to itself I have been teaching a ring theory course this semester, focusing on modular arithmetic and quotient rings of polynomials over fields. Several students have asked me how one could graph functions from a finite field (or any $Z_n$) to itself. I've considered drawing them on a torus (since the input and output loop around onto themselves), but this would be difficult to depict on a two-dimensional chalkboard. In your experience, is there an effective graphical representation of a function from a finite field to itself? • Perhaps you'll think this is a bit of a non sequitur, but Edward Tufte says rightly that tables are a good way of displaying numerical information, and since in any case we graph functions because we can't list their infinitely many inputs and outputs, why not simply give a table? i.e. $0\mapsto0; 1\mapsto2;...$ Apr 13 '15 at 22:00 • I've added the abstract-algebra tag to make your question a little easier to find. If you feel a different tag is more appropriate, please feel free to edit. – J W Aug 7 '19 at 15:12 Just supplementing Benjamin Dickman's nice answer, here is $x \mapsto x^2 - x$ in $\mathbb{Z}_{18}$ in the same style: For example, the pentagon wheel reflects the fact that $$(5+3k)^2-(5+3k) = 9k^2 + 27k + 20 = 9k(k+3) + 20 = 2\bmod 18 \;.$$ • +1! My MS Paint skills put to shame... Apr 14 '15 at 3:03 • @BenjaminDickman: I used Mathematica. Apr 14 '15 at 11:30 • @JosephO'Rourke: Your graphics are always supremely clear! May I ask how exactly you got those graphs to look so nice in Mathematica? That is, which package/format were you using? Apr 22 '15 at 3:00 • @ZachHaney: I wish I could claim cleverness, but I just used Graph[edges, VertexLabels -> "Name", VertexStyle -> Pink] after filling up edges appropriately.
Apr 22 '15 at 11:01 • I made this GeoGebra applet for $\mathbb{Z}_7$ where you can drag the points around to get a better view of the arrows: ggbtu.be/m1194053 May 16 '15 at 7:36 One of the approaches taken in some areas of mathematics (e.g., in arithmetic dynamics and considerations of preperiodic points, etc) is to create these graphs by drawing discrete points and then using arrows to show which values map to which other ones. Figuring out a "canonical" way to draw these pictures might be a bit tough (this is related in some manner to the concerns about orders in Gerhard Paseman's answer). Here is a sample picture of the function $x \mapsto x^2 - x$ in $\mathbb{Z}/12\mathbb{Z}$ (the same function drawn by mweiss in his answer): As for whether this graphical representation is effective: That will depend on how you use it and what you wish to achieve. I do think that, in the above case, some natural questions arise around why certain structures appear more than once. Can students tease out these questions? (What sort of questions do they ask?) Once curiosity has been piqued, can it be resolved by appealing to one's algebraic or graphical intuition? A brief excerpt related to the notable comment of mweiss below: The interested reader may wish to check the paper of Kempner, as well as a nice generalization in: Chen, Z. (1995). On polynomial functions from Zn to Zm. Discrete Mathematics, 137(1), 137-145. ScienceDirect Link. • Sample follow-up question: Suppose you had the graph above, unlabelled, and knew it was over $\mathbb{Z}/12\mathbb{Z}$. Could you figure out the corresponding function? (Is the answer unique?) Apr 13 '15 at 21:58 • This is perhaps not the right place to get into it, but did you know that: (1) Any function over $\mathbb{Z}_p$ for prime $p$ can be produced by a polynomial of degree $< p$, and this polynomial is unique; and (2) for composite $n$, most functions cannot be represented by a polynomial?
In other words, transcendental functions do not exist when working over a prime base, but most functions are transcendental when working over a composite base. Apr 14 '15 at 0:33 • (1) @mweiss I did not know that! Do you have a suggested reference? I came across related comments in my reading, which I have incorporated into the body of my post. (2) To answer my own follow-up question, the answer for that unlabelled graph is not unique: Consider the representation of $x \mapsto x^2 - x + 6$ in $\mathbb{Z}/12\mathbb{Z}$. May 12 '15 at 14:36 • See math.stackexchange.com/questions/1243509/… (and in particular the first comment there). May 12 '15 at 18:00 • See also the note by Grady and Poston on the Arxiv: A Glimpse of Arithmetic Dynamics, which appeared just recently. – J W Aug 7 '19 at 10:58 I don't think it's all that bad to put your graph on ordinary everyday axes as long as the students know that the order is more or less irrelevant. If you are happy to break out of the page, I recommend drawing your graph on a piece of paper and rolling it up to make a cylinder. Then at least one of the axes represents the cyclical nature of the field. If you want to have the graph on a torus on a screen, it may be better to use technology. I've used Geogebra to make a workbook of various graphs, including cartesian graphs, cylindrical graphs and input-output graphs for $\mathbb{Z}_7$. The cartesian graphs looks like the ordinary graphs students would be familiar with: The x-axis and y-axis can both be moved so that the labels cycle around, giving you the feel of the toroidal shape. UPDATE: There's now an activity that allows you to choose the order of the ring, from $\mathbb{Z}_3$ to $\mathbb{Z}_{30}$. The cylindrical graphs are like the cartesian one, only the x-axis has been rolled up to make a circle: The circular axis at the bottom can be rotated, and the vertical axis (the separate circles) can be slid up and down so the values cycle around. 
(It looks much better in motion than statically.) Finally, the input-output graph has a domain at the top and a codomain at the bottom with arrows to show where each point goes under the function. You can spin the top and bottom independently or together, and change the perspective. • This is great!! May 13 '15 at 22:06 • Agreed, these are really beautiful. May 14 '15 at 3:15 Edited: I would use a rectangular display that looks, at first, like a standard "Quadrant I" graph, but that can be grabbed and dragged left/right/up/down to move the viewing frame. So, for example, if one is working over $\mathbb{Z}_7$ the horizontal and vertical scales will initially be labeled "0 1 2 3 4 5 6" in both direction, but the view can be dragged so that the horizontal axis reads "3 4 0 1 2 3 4 5" and the vertical axis reads "5 6 0 1 2 3 4". Here is an attempt at creating such a tool. It shows the function $y=x^2-x$ defined on $\mathbb{Z}_{12}$. The graph behaves in such a way that the top and bottom edges of the plot are identified, as are the left and right edges, so the function is (essentially) graphed on a torus. Points are labeled with $x$ and $y$ coordinates in the set $\{0, 1, \ldots 11\}$ and the "axes" (i.e. the lines $y=0$ and $x=0$) are drawn in as solid lines. Were I more skilled at Geogebra, I would add tick marks to the axes with labels that run $\{0, 1, \ldots 11\}$ and repeat cyclically. If any Geogebra gurus have suggestions on how to do that, I would appreciate the input. • @DavidButlerUofA Nice! May 13 '15 at 0:05 • Actually I have a couple of others now. I think I might make a new answer with all of them, if you don't mind an answer competing with yours. May 13 '15 at 0:21 • Of course I don't mind -- I'm happy to see them! May 13 '15 at 1:01 Perhaps an approach that mirrors the standard graphing might be useful? One way that I appreciate is seen here: This is used by N. J. Wildberger and others. I just snagged this off google images to demonstrate. 
I think this particular image is of $F_{13}$ with two "lines" plotted and their intersection marked at (5,2) -- but don't quote me on that. I can generate some more examples if anyone is interested. It's fascinating to look at conics in these situations! There are dangers in using such graphical representations. The reals are an ordered field, whereas orders are not compatible with field operations for finite fields and other fields. It is easy to think things like "this function is increasing": such thoughts are helpful if you are going to transfer something of the function to the reals, and can mislead in other cases. Inside finite fields it is of more interest whether the function is a permutation of the nonzero elements or not. Also zeroes of the function are important to represent. Choose the representation to emphasize the characteristic of importance. If you put the necessary cautions in place, it should be OK to embed the graph in the upper right quadrant of the real plane. Be sure to emphasize other views of the function as well: treating it as a polynomial, treating it as a permutation or near permutation, treating it as a fragment of a graph of a real valued function. However, treating an object as something else, while useful, does not make that object something else. If the toroidal nature of the ring appeals to you, use a square and identify edges of the square, and relate this to certain video displays to get them to deal with this form of representation (at, least, until the manufacturers come out with computer screens shaped like doughnuts). Gerhard "Should Doughnuts Be Called Doughwheels?" Paseman, 2015.04.13 • I believe they are called "nuts" because they look like nuts as in nuts and bolts. Apr 13 '15 at 20:09
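The claims in the comments of the first answer — every function over $\mathbb{Z}_p$ is polynomial for prime $p$, while most functions over a composite modulus are not — can be checked by brute force. Here is a minimal Python sketch (not from any answer above); it enumerates only degrees $< n$, which suffices because the degree-$n$ falling factorial $x(x-1)\cdots(x-n+1)$ vanishes as a function modulo $n$:

```python
from itertools import product

def polynomial_functions(n):
    """Set of distinct functions Z_n -> Z_n induced by polynomials.

    Enumerates every coefficient tuple (c_0, ..., c_{n-1}) and records
    the value table of x -> sum_k c_k * x^k (mod n); polynomials that
    induce the same function collapse to one entry of the set.
    """
    funcs = set()
    for coeffs in product(range(n), repeat=n):
        table = tuple(
            sum(c * pow(x, k, n) for k, c in enumerate(coeffs)) % n
            for x in range(n)
        )
        funcs.add(table)
    return funcs

print(len(polynomial_functions(5)), 5 ** 5)  # prime: every function is polynomial
print(len(polynomial_functions(4)), 4 ** 4)  # composite: only a fraction are
```

For $n = 5$ the count equals $5^5$, confirming claim (1) via Lagrange interpolation; for $n = 4$ only 64 of the $4^4 = 256$ functions are polynomial, matching Kempner's formula $\prod_{k=0}^{n-1} n/\gcd(k!, n)$ cited in the answer above.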
http://www.tuks.nl/wiki/index.php/Main/OnSpaceTimeAndTheFabricOfNature
On Space, Time and the Fabric of Nature

By Arend Lammertink, MScEE, September 2016.

Introduction

To date, our basic understanding of space, time and the fabric of Nature rests on the theories of Quantum Mechanics and Einstein's Relativity Theory. These two useful theories are pretty much taken for granted as unalterable givens. Einstein himself gravely warned us that this might happen and that "scientific progress is often made impossible" because of it. The state of current science is, if anything, the result of a lack of well-founded scepticism. We should not be afraid of well-founded scepticism; we should embrace it and take it seriously. Without serious consideration of well-founded sceptic arguments, we cannot correct the errors we have made. And that is what has led to a process whereby science went onto a diverging path and amplified the errors made in the past, instead of using new information to correct them. In this work, we shall investigate the history of our current scientific theories and formulate a Physical, Unified theory based on fundamental ideas to integrate the currently diverging theories at the origin of their divergence: the Maxwell equations. We shall see that, actually, all currently known areas of Physics' theories converge naturally into one Unified Theory of Everything once we make one fundamental change to Maxwell's aether model, which is to replace his incompressible aether with a compressible one.
Compressibility and Albert Einstein's intuition

Since our objective is to form a physical theory based on fundamental ideas, it is of the utmost importance that we clearly define our ideas and concepts and make sure that we keep the concepts we use consistent throughout the whole of our theory. As we shall see, for the concepts of space and time, their place within our current theoretic body has not been kept consistent, which is the main reason for the divergence from a "real" space-time concept into a "curved" space-time concept within Einstein's Relativity Theory, which was, among others, heavily criticized by Nikola Tesla. Yet, Einstein's theory did predict a number of phenomena theretofore unknown, some of which have been verified by experiments, which, ironically, has led to the concept of curved space-time achieving exactly this "excessive authority" over us that Einstein warned about: "Concepts that have proven useful in ordering things easily achieve such authority over us that we forget their earthly origins and accept them as unalterable givens. Thus they might come to be stamped as "necessities of thought," "a priori givens," etc. The path of scientific progress is often made impassable for a long time by such errors. Therefore it is by no means an idle game if we become practiced in analysing long-held commonplace concepts and showing the circumstances on which their justification and usefulness depend, and how they have grown up, individually, out of the givens of experience. Thus their excessive authority will be broken. They will be removed if they cannot be properly legitimated, corrected if their correlation with given things be far too superfluous, or replaced if a new system can be established that we prefer for whatever reason." Obituary for physicist and philosopher Ernst Mach (Nachruf auf Ernst Mach), Physikalische Zeitschrift 17 (1916), p.
101 "I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today — and even professional scientists — seem to me like someone who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is — in my opinion — the mark of distinction between a mere artisan or specialist and a real seeker after truth." Letter to Robert A. Thorton, Physics Professor at University of Puerto Rico (7 December 1944) [EA-674, Einstein Archive, Hebrew University, Jerusalem]. At some point, he also realized perfectly well that science was disgarding all the intuitive signs something was going terribly wrong and lost any connection to a firm foundation which could be built upon: "All these fifty years of conscious brooding have brought me no nearer to the answer to the question, 'What are light quanta?' Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken." (Albert Einstein, 'The Born-Einstein Letters' Max Born, translated by Irene Born, Macmillan 1971) "The quanta really are a hopeless mess." (Albert Einstein, On doing Quantum Theory calculations with Pauli, 'The Born-Einstein Letters' Max Born, translated by Irene Born, Macmillan 1971) He also challenged us, to find "a more tangible basis" for adapting "the theoretical foundation of physics" to new knowledge, whereby he clearly rejected the fundamental idea of randomness which had made it's way into theoretical physics: "All my attempts to adapt the theoretical foundation of physics to this new type of knowledge (Quantum Theory) failed completely. It was as if the ground had been pulled out from under one, with no firm foundation to be seen anywhere, upon which one could have built." (P. 
A. Schilpp, Albert Einstein: Philosopher-Scientist, On Quantum Theory, 1949) "You believe in the God who plays dice, and I in complete law and order in a world which objectively exists, and which I, in a wildly speculative way, am trying to capture. I hope that someone will discover a more realistic way, or rather a more tangible basis than it has been my lot to find. Even the great initial success of the Quantum Theory does not make me believe in the fundamental dice-game, although I am well aware that our younger colleagues interpret this as a consequence of senility. No doubt the day will come when we will see whose instinctive attitude was the correct one." (Albert Einstein to Max Born, Sept 1944, 'The Born-Einstein Letters') As we shall see, this day has finally come. Let us take Einstein's advice and see if we can use "knowledge of the historic and philosophical background" and the analysis of "long-held commonplace concepts" to free ourselves from the shackles of "prejudice" and "excessive authority" in order to be able to make some "scientific progress" and become "real seekers after truth". Let us start in 1920, with Einstein's lecture in Leiden, wherein he stated that "space without aether is unthinkable": "Since according to our present conceptions the elementary particles of matter are also, in their essence, nothing else than condensations of the electromagnetic field, our present view of the universe presents two realities which are completely separated from each other conceptually, although connected causally, namely, gravitational ether and electromagnetic field, or as they might also be called space and matter. Of course it would be a great advance if we could succeed in comprehending the gravitational field and the electromagnetic field together as one unified conformation. Then for the first time the epoch of theoretical physics founded by Faraday and Maxwell would reach a satisfactory conclusion.
The contrast between ether and matter would fade away, and, through the general theory of relativity, the whole of physics would become a complete system of thought, like geometry, kinematics, and the theory of gravitation. An exceedingly ingenious attempt in this direction has been made by the mathematician H. Weyl; but I do not believe that his theory will hold its ground in relation to reality. Further, in contemplating the immediate future of theoretical physics we ought not unconditionally to reject the possibility that the facts comprised in the quantum theory may set bounds to the field theory beyond which it cannot pass. Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it." Let us first note that Einstein connected the "electromagnetic field" to "space" and "gravitational ether" to "matter", while at the same time he referred to matter as "condensations of the electromagnetic field". However, perhaps the most essential to note is the following: "space is endowed with physical qualities; in this sense, therefore, there exists an ether". In a Letter to Max Born (March 1948) (published in Albert Einstein-Hedwig und Max Born (1969) "Briefwechsel 1916-55") titled "What must be an essential feature of any future fundamental physics?" Einstein wrote: "I just want to explain what I mean when I say that we should try to hold on to physical reality. 
We are … all aware of the situation regarding what will turn out to be the basic foundational concepts in physics: the point-mass or the particle is surely not among them; the field, in the Faraday-Maxwell sense, might be, but not with certainty. But that which we conceive as existing ("real") should somehow be localized in time and space. That is, the real in one part of space, A, should (in theory) somehow "exist" independently of that which is thought of as real in another part of space, B. If a physical system stretches over A and B, then what is present in B should somehow have an existence independent of what is present in A. What is actually present in B should thus not depend on the type of measurement carried out in the part of space A; it should also be independent of whether or not a measurement is made in A. If one adheres to this program, then one can hardly view the quantum-theoretical description as a complete representation of the physically real. If one attempts, nevertheless, so to view it, then one must assume that the physically real in B undergoes a sudden change because of a measurement in A. My physical instincts bristle at that suggestion. However, if one renounces the assumption that what is present in different parts of space has an independent, real existence, then I don't see at all what physics is supposed to be describing. For what is thought to be a "system" is, after all, just conventional, and I do not see how one is supposed to divide up the world objectively so that one can make statements about parts." Now let us go back to "space is endowed with physical qualities; in this sense, therefore, there exists an ether". Since our objective is to form a physical theory based on fundamental ideas, it is of the utmost importance that we clearly define our ideas and concepts and make sure that we keep the concepts we use consistent within our theory.
When considering the concepts of "space", "physical qualities" and "aether", we must therefore clearly define what is what. And besides these three concepts, we will need another one, "location", which we need in order to describe what is where. The concept of location is logically closely related to the concept of space; it is an aspect of "space". In order to describe this aspect of space, we use coordinate systems, which are also called "reference frames". Simply put, reference frames describe what is where within what we call "space" from a certain point of view called an "observer". However, once we have a way to describe what is where, we do not yet have a way to describe "what is what", which would be these "physical qualities" which surround us, that which we normally consider to be "in" space. So, "physical qualities", that which is "in" space, are logically not an aspect of space, and therefore we should define a separate concept to describe that which is "in" space, which we shall call "the aether" from now on. This way, we have a clear and fundamental distinction between the concepts of "space" and that which is "in" space. With these fundamental definitions, "space" cannot have "physical qualities", because these are fundamentally described by the "aether", that which is "in" space. And therefore, "space" cannot enact forces upon the aether, because forces fundamentally describe physical interactions between two or more "things" like "waves", "particles" and "bodies", which are "in" space, in accordance with Newton's third law: "every action is accompanied by an equivalent reaction". Based on these fundamental ideas, we can formulate a strong sceptical argument against Einstein's relativity theory, as has been done by Nikola Tesla: "It might be inferred that I am alluding to the curvature of space supposed to exist according to the teachings of relativity, but nothing could be further from my mind.
I hold that space cannot be curved, for the simple reason that it can have no properties. It might as well be said that God has properties. He has not, but only attributes and these are of our own making. Of properties we can only speak when dealing with matter filling the space. To say that in the presence of large bodies space becomes curved, is equivalent to stating that something can act upon nothing. I, for one, refuse to subscribe to such a view." And: "During the succeeding two years of intense concentration I was fortunate enough to make two far-reaching discoveries. The first was a dynamic theory of gravity, which I have worked out in all details and hope to give to the world very soon. It explains the causes of this force and the motions of heavenly bodies under its influence so satisfactorily that it will put an end to idle speculations and false conceptions, as that of curved space. According to the relativists, space has a tendency to curvature owing to an inherent property or presence of celestial bodies. Granting a semblance of reality to this fantastic idea, it is still self-contradictory. Every action is accompanied by an equivalent reaction and the effects of the latter are directly opposite to those of the former. Supposing that the bodies act upon the surrounding space causing curvature of the same, it appears to my simple mind that the curved spaces must react on the bodies and, producing the opposite effects, straighten out the curves. Since action and reaction are coexistent, it follows that the supposed curvature of space is entirely impossible. But even if it existed it would not explain the motions of the bodies as observed. Only the existence of a field of force can account for them and its assumption dispenses with space curvature. All literature on this subject is futile and destined to oblivion. 
So are also all attempts to explain the workings of the universe without recognizing the existence of the ether and the indispensable function it plays in the phenomena." Within our definition of "space" and that which is "in" space, Tesla is absolutely right. When we define space itself to be an empty room, while we define that which is "in" it as being the aether, then our model becomes inconsistent when we assign certain aspects of that which is defined to be "in" empty space as being aspects of empty space. However, in principle, it is possible to describe certain aspects, certain "physical qualities", as being aspects of empty space instead of that which is "in" space, the aether. In other words: it makes no fundamental difference whether one chooses to describe certain aspects within the context of "space" or within the context of the "aether", that which is "in" space. In some cases, this is actually done consciously, such as in the use of so-called "non-inertial reference frames". If you are describing what happens to objects within an accelerating rocket flying through space, for example, then it is convenient to use a reference frame, a coordinate system, which moves along with the rocket. However, in that case the objects which are present within the rocket will "fall" to the floor, so a force akin to gravity appears to be present within the rocket, which arises from the acceleration of the rocket. Such an apparent force is called a "fictitious force" or "pseudo force": "A fictitious force, also called a pseudo force, d'Alembert force or inertial force, is an apparent force that acts on all masses whose motion is described using a non-inertial frame of reference, such as a rotating reference frame. The force F does not arise from any physical interaction between two objects, but rather from the acceleration a of the non-inertial reference frame itself. [...]
A fictitious force on an object arises when the frame of reference used to describe the object's motion is accelerating compared to a non-accelerating frame. As a frame can accelerate in any arbitrary way, so can fictitious forces be as arbitrary (but only in direct response to the acceleration of the frame). However, four fictitious forces are defined for frames accelerated in commonly occurring ways: one caused by any relative acceleration of the origin in a straight line (rectilinear acceleration); two involving rotation: centrifugal force and Coriolis force; and a fourth, called the Euler force, caused by a variable rate of rotation, should that occur. Gravitational force would also be a fictitious force based upon a field model in which particles distort spacetime due to their mass." Note that in current physics, gravity is also considered to be such a fictitious force. As said, it is possible to do this, and in this particular case it enabled us to predict new phenomena, like time dilation and length contraction, which were theretofore unknown. This validated the correctness of the underlying aspects that were introduced to the model. However, the way these aspects were described within the model, as being aspects of space (and time) itself, essentially describes certain "physical qualities" within a context that is not the most logical context to describe them with. Doing so distorts the conceptual relations between "space" and that which is "in" space, as well as "in" time. This makes it almost impossible to extend the description of the newly introduced concepts, which were added to the model in order to describe gravity, to the current description of the electromagnetic fields. After all, the latter have been exclusively modelled by Maxwell within the context of the "aether". Fundamentally, the concept that has been introduced into the physics model by Einstein is the concept of compressibility.
He essentially described it in terms of compressibility of space (and time) itself, but of course this concept can just as well be described within the context of the aether, that which is "in" space. This way, we can describe all known "physical qualities" within the same context, which, as we shall see, enables us to describe both the gravitational field and the electromagnetic field within one consistent, unified model. Doing so leads to a different understanding of "space" and especially "time", which matches our "normal", intuitive interpretation of space and time. Within Einstein's model, for example, "time dilation" is interpreted as a phenomenon influencing time itself. Within our model, "time dilation" is interpreted as a phenomenon influencing the internal hardware of our clocks, our measuring devices, basically influencing the "ticking rate" of our clocks and not time itself. Also, within our model, "influencing time" and/or "time travel" is interpreted as a phenomenon influencing the internal hardware of our bodies, our physical minds, basically influencing the "ticking rate" of our minds and thus our experience of time, and not time itself. So, essentially, our model predicts that a "Universal absolute invariant time(frame)" exists, which is not relative to the observer, while Einstein's model predicts that this is not the case. The conclusions regarding the possibilities of time travelling by physical bodies (including our own) are practically the same. The only possibility for physical time travel is limited to slowing down (or accelerating) our physical experience of time, which is caused by increasing (or decreasing) the "ticking rate" of our internal clocks. So, our model predicts that one cannot physically travel back through time to the past. However, the conclusions regarding the possibilities of travelling at speeds (vastly) exceeding the speed of light as measured normally on Earth are very different.
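As a quantitative reference point for the clock-rate discussion above: both interpretations agree, at the level of what is measured, on how much a moving clock's "ticking rate" slows; they differ only on whether this is attributed to time itself or to the clock's hardware. The standard textbook expression is the Lorentz factor, sketched here as a minimal illustration (not something specific to either model):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v: float) -> float:
    """Factor by which a clock moving at speed v (m/s) is measured
    to tick more slowly: gamma = 1 / sqrt(1 - v^2 / c^2)."""
    beta = v / C
    return 1.0 / math.sqrt(1.0 - beta * beta)

# At 60% of the speed of light the formula gives a factor of exactly 1.25:
print(lorentz_factor(0.6 * C))
```

At everyday speeds the factor is indistinguishable from 1, which is why the effect only shows up for very fast clocks or very precise ones.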
Our model predicts that areas with different densities of the aether exist, which may or may not exist in the shape of (magnetic) filaments or vortices. Within such areas, our model predicts that the oscillation frequencies of matter can be much higher than under normal circumstances, which would cause matter, or a physical body, to shrink, because of the higher density of the compressed aether surrounding the body. This would result in the experience of "time" slowing down, while at the same time enabling a physical body to travel at much higher speeds, because the speed of light within a compressed area of aether would be higher than what we encounter on Earth.

The Foundation of Modern Physics

"All attempts to explain the workings of the universe without recognizing the existence of the ether and the indispensable function it plays in the phenomena is futile and destined to oblivion." - Nikola Tesla (rephrased) As we saw, Einstein was quite critical of the Quantum Mechanics theory. He clearly stated his disbelief in "the God who plays dice" and said he had no doubt his instinctive attitude would eventually turn out to be the correct one. So, let us start with considering the foundation of Quantum Mechanics, the wave-particle duality principle. As described on Wikipedia: Wave–particle duality is the concept that every elementary particle or quantic entity may be partly described in terms not only of particles, but also of waves. It expresses the inability of the classical concepts "particle" or "wave" to fully describe the behaviour of quantum-scale objects. As Albert Einstein wrote: "It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do".
Through the work of Max Planck, Einstein, Louis de Broglie, Arthur Compton, Niels Bohr and many others, current scientific theory holds that all particles also have a wave nature (and vice versa). This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. [...] James Clerk Maxwell discovered that he could combine four simple equations, which had been previously discovered, along with a slight modification to describe self-propagating waves of oscillating electric and magnetic fields. When the propagation speed of these electromagnetic waves was calculated, the speed of light fell out. It quickly became apparent that visible light, ultraviolet light, and infrared light (phenomena thought previously to be unrelated) were all electromagnetic waves of differing frequency. [...] At the close of the 19th century, the reductionism of atomic theory began to advance into the atom itself; determining, through physics, the nature of the atom and the operation of chemical reactions. Electricity, first thought to be a fluid, was now understood to consist of particles called electrons. This was first demonstrated by J. J. Thomson in 1897 when, using a cathode ray tube, he found that an electrical charge would travel across a vacuum (which would possess infinite resistance in classical theory). Since the vacuum offered no medium for an electric fluid to travel, this discovery could only be explained via a particle carrying a negative charge and moving through the vacuum. This electron flew in the face of classical electrodynamics, which had successfully treated electricity as a fluid for many years (leading to the invention of batteries, electric motors, dynamos, and arc lamps). More importantly, the intimate relation between electric charge and electromagnetism had been well documented following the discoveries of Michael Faraday and James Clerk Maxwell. 
Since electromagnetism was known to be a wave generated by a changing electric or magnetic field (a continuous, wave-like entity itself) an atomic/particle description of electricity and charge was a non sequitur [Latin for: "it does not follow"]. [...] In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make an ad hoc mathematical assumption of quantized energy of the oscillators (atoms of the black body) that emit radiation. It was Einstein who later proposed that it is the electromagnetic radiation itself that is quantized, and not the energy of radiating atoms. [...] Wave–particle duality is an ongoing conundrum in modern physics. Most physicists accept wave-particle duality as the best explanation for a broad range of observed phenomena; however, it is not without controversy. Alternative views are also presented here. These views are not generally accepted by mainstream physics, but serve as a basis for valuable discussion within the community. [...] At least one physicist considers the "wave-duality" as not being an incomprehensible mystery. L.E. Ballentine, Quantum Mechanics, A Modern Development, p. 4, explains: When first discovered, particle diffraction was a source of great puzzlement. Are "particles" really "waves?" In the early experiments, the diffraction patterns were detected holistically by means of a photographic plate, which could not detect individual particles. As a result, the notion grew that particle and wave properties were mutually incompatible, or complementary, in the sense that different measurement apparatuses would be required to observe them. That idea, however, was only an unfortunate generalization from a technological limitation. Today it is possible to detect the arrival of individual electrons, and to see the diffraction pattern emerge as a statistical pattern made up of many small spots (Tonomura et al., 1989). 
Evidently, quantum particles are indeed particles, but whose behaviour is very different from [what] classical physics would have us expect. This "conundrum" leaves us with a kind of schizophrenic picture of reality, a puzzle consisting of pieces that do not match one another, and so far all efforts to complement them with additional pieces to make the picture complete have failed. So far, we have no consistent "theory of everything", most notably no theory establishing a connection between electromagnetics and gravity. Clearly, somewhere something is amiss, and if it's not a matter of a missing piece, something must be amiss with the pieces we have. Perhaps the most important pieces of the puzzle we have are Maxwell's equations, which describe the electromagnetic fields. Based on these equations, we can come to one particular description of electromagnetic radiation, namely a description of continuous transverse waves: Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields that propagate at the speed of light through a vacuum. The oscillations of the two fields are perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave. As we saw, electromagnetic radiation has been found to be quantized. It does not exist as continuous waves, but rather as some kind of distinguishable "packets" called photons: Electromagnetic waves are produced whenever charged particles are accelerated, and these waves can subsequently interact with any charged particles. EM waves carry energy, momentum and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Quanta of EM waves are called photons, which are massless, but they are still affected by gravity.
Electromagnetic radiation is associated with those EM waves that are free to propagate themselves ("radiate") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field. In this language, the near field refers to EM fields near the charges and current that directly produced them, specifically, electromagnetic induction and electrostatic induction phenomena. This illustrates the "non sequitur" issue we encountered above, namely that electromagnetic waves are considered to be produced by moving "charged particles", while these particles show this "wave-particle duality" behaviour themselves, as does "EM radiation" in its turn. In other words: electromagnetic radiation is essentially considered to be produced by movements of "quanta" of electromagnetic radiation, called either "photons" or "particles". Kind of a dog chasing its own tail, or recursion as software engineers call it:

whatIsRecursion():
    if you understand Recursion:
        return
    else:
        whatIsRecursion()

We also note that, despite the Maxwell equations only describing one type of electromagnetic wave, at least two types of electromagnetic wave phenomena are actually known to exist, namely the "near" and the "far" field: The near field and far field are regions of the electromagnetic field around an object, such as a transmitting antenna, or the result of radiation scattering off an object. Non-radiative 'near-field' behaviors of electromagnetic fields dominate close to the antenna or scattering object, while electromagnetic radiation 'far-field' behaviors dominate at greater distances. Far-field E and B field strength decreases inversely with distance from the source, resulting in an inverse-square law for the radiated power intensity of electromagnetic radiation.
By contrast, near-field E and B strength decrease more rapidly with distance (with inverse-distance squared or cubed), resulting in relative lack of near-field effects within a few wavelengths of the radiator. [...] The far field is the region in which the field acts as "normal" electromagnetic radiation. [...] In the quantum view of electromagnetic interactions, far-field effects are manifestations of real photons, whereas near-field effects are due to a mixture of real and virtual photons. Virtual photons composing near-field fluctuations and signals have effects that are of far shorter range than those of real photons. This quantum view suggests that the far field is well understood and described, while in order to describe the near field we need to introduce a new concept: virtual, or literally: "imaginary", photons. In the everyday practice of Electrical Engineering, however, it is the near field that can be computed with "FDTD" simulation software such as Meep: A time-domain electromagnetic simulation simply takes Maxwell's equations and evolves them over time within some finite computational region, essentially performing a kind of numerical experiment. Once the near field has been computed, the far field is computed afterwards, as is stated on the website of Lumerical for example: The near to far field projections calculate the EM fields anywhere in the far field. The near field data is obtained from one of Lumerical's optical solvers, then the far field projection is calculated as a post-processing step. These are the kinds of inconsistencies we encounter all over the place in the standard model. This one, however, is found at the very foundation of Quantum Mechanics. If we can't even agree on what is actually being described by Maxwell's equations, it's no wonder we are still living in a world of an "ongoing wave-particle duality conundrum".
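The quoted description of a time-domain simulation, taking Maxwell's equations and evolving them over time within a finite region, can be made concrete with a toy example. What follows is a minimal, illustrative 1D vacuum FDTD update loop in normalized units (c = 1, with the time step equal to the cell size); it is a sketch of the method, not the actual Meep or Lumerical implementation:

```python
# Minimal 1D FDTD sketch: discretize Maxwell's two curl equations on a
# staggered grid and leapfrog them through time (normalized units, c = 1).
import math

N = 200           # number of grid cells
ez = [0.0] * N    # electric field E_z at integer cells
hy = [0.0] * N    # magnetic field H_y at half-cells

for t in range(150):
    # update H from the spatial derivative of E (Faraday's law)
    for k in range(N - 1):
        hy[k] += ez[k + 1] - ez[k]
    # update E from the spatial derivative of H (Ampere's law, no sources)
    for k in range(1, N):
        ez[k] += hy[k] - hy[k - 1]
    # additive source: a Gaussian pulse injected at cell 50
    ez[50] += math.exp(-((t - 30.0) / 10.0) ** 2)

# after 150 steps the injected pulse is propagating across the grid
print(max(abs(v) for v in ez))
```

The near field is whatever this kind of loop computes directly on the grid; a near-to-far-field projection, as in the Lumerical quote, is then a separate post-processing step applied to these grid values.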
And this is not the only clue which suggests that we need to re-examine Maxwell's equations in order to come to a satisfactory solution to the many inconsistencies found in the standard model. C. K. Thornhill shows in his paper "Real or Imaginary Space-Time? Reality or Relativity?" that these same Maxwell equations are not only the foundation of Quantum Mechanics, but also of the concept of "imaginary" Einsteinian space-time and the relativity theory. The abstract of his article: The real space-time of Newtonian mechanics and the ether concept is contrasted with the imaginary space-time of the non-ether concept and relativity. In real space-time (x, y, z, ct) characteristic theory shows that Maxwell’s equations and sound waves in any uniform fluid at rest have identical wave surfaces. Moreover, without charge or current, Maxwell’s equations reduce to the same standard wave equation which governs such sound waves. This is not a general and invariant equation but it becomes so by Galilean transformation to any other reference-frame. So also do Maxwell’s equations which are, likewise, not general but unique to one reference-frame. The mistake of believing that Maxwell’s equations were invariant led to the Lorentz transformation and to relativity; and to the misinterpretation of the differential equation for the wave cone through any point as the quadratic differential form of a Riemannian metric in imaginary space-time (x, y, z, ict). Mathematics is then required to tolerate the same equation being transformed in different ways for different applications. Otherwise, relativity is untenable and recourse must then be made to real space-time, normal Galilean transformation and an ether with Maxwellian statistics and Planck’s energy distribution. Let us note that the "non sequitur" issue we have already encountered twice so far once again comes to the forefront, this time at the very foundation of Einstein's relativity theory.
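The reduction Thornhill's abstract refers to is the standard textbook one: in the absence of charge and current, the two curl equations combine into a single wave equation. In modern notation:

```latex
% Source-free Maxwell equations in vacuum:
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
% Take the curl of Faraday's law, substitute Ampere's law, and use
% curl(curl E) = grad(div E) - laplacian(E) together with div E = 0:
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad c = (\mu_0 \varepsilon_0)^{-1/2}
```

This is the "same standard wave equation which governs such sound waves" mentioned in the abstract, with propagation speed c fixed by the two vacuum constants.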
The introduction of the concept of "charge", defined as being a property of matter, is clearly problematic: Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. There are two types of electric charges: positive and negative. Like charges repel and unlike attract. An object is negatively charged if it has an excess of electrons, and is otherwise positively charged or uncharged. The SI derived unit of electric charge is the coulomb (C). It not only led to a "non sequitur" issue in Quantum Mechanics, but also to a distortion of the concepts of space and time within the relativity theory. And actually, it is quite clear why the current definition of the concept of "charge" is problematic. If the concept of "charge", which is intimately related to electromagnetism, is defined as being a physical property of matter, while particles show this "wave-particle duality", it is clear that exactly this arbitrary, explicit connection between electromagnetism and matter is the root cause of this "non sequitur" issue. So, if we can re-define our concept of "charge" in a meaningful way, we can resolve this important problem. In other words: it is clear that we need to re-examine the origin of this concept in Maxwell's work. But before we do that, let us briefly address the issue of whether or not the aether theory has been disproven by the Michelson-Morley experiment, and the myth that GPS would not be possible without the relativity theory. These issues have been thoroughly addressed by William H. Cantrell, Ph.D., in his article "A Dissident View of Relativity Theory" (on-site copy), referring, amongst others, to the work of Ronald Hatch: Given that the nothingness of a perfect absolute vacuum is bestowed with the physical properties of a permittivity ε0 of 8.854 pF/m, a permeability μ0 of 4π × 10^−7 H/m, and a characteristic impedance of 377 ohms, is the concept of an aether really that outlandish?
[...] What does one of the world’s foremost experts on GPS have to say about relativity theory and the Global Positioning System? Ronald R. Hatch is the Director of Navigation Systems at NavCom Technology and a former president of the Institute of Navigation. As he describes in his article for this issue (p. 25, IE #59), GPS simply contradicts Einstein’s theory of relativity. His Modified Lorentz Ether Gauge Theory (MLET) has been proposed as an alternative to Einstein’s relativity. It agrees at first order with relativity but corrects for certain astronomical anomalies not explained by relativity theory. This same Ron Hatch recently gave a presentation about his findings: RON HATCH: Relativity in the Light of GPS, II

Revisiting Maxwell's equations

In order to re-examine and eventually revise Maxwell's equations, we need to go all the way back to his publications in the late 1800s and follow his reasoning. Fortunately, Malcolm Longair published an article (pdf) with an overview of Maxwell's work, on the occasion of the 350th anniversary of the journal Philosophical Transactions of the Royal Society. This gives a good historic overview and is easier to read than the originals, because it uses modern SI units. It also contains a table which shows the difference between Maxwell's notations and our modern notations. In this article, we read: In [..] 1856, Maxwell published the first of his series of papers on electromagnetism, ‘On Faraday's lines of force’. In the preface to his Treatise on Electricity and Magnetism of 1873, he recalled: "before I began the study of electricity I resolved to read no mathematics on the subject till I had first read through Faraday's Experimental Researches in Electricity." The first part of the paper enlarged upon the technique of analogy and drew particular attention to its application to incompressible fluid flow and magnetic lines of force. [...]
In 1856, the partial derivatives were written out explicitly in Cartesian form. The mathematics of rotational fluid flow and the equivalents of div, grad and curl were familiar to mathematical physicists at the time. Thomson and Maxwell, for example, needed these tools to describe fluid flow in vortex ring models of atoms. Maxwell started with the analogy between incompressible fluid flow and magnetic lines of force. The velocity u is analogous to the magnetic flux density B. If the tubes of force, or streamlines, diverge, the strength of the field decreases, as does the fluid velocity. This enabled Maxwell to write down immediately the mathematical expression for the behaviour of magnetic fields in free space, $div \mathbf{B} = 0$. The same type of reasoning applies to electric fields in free space, but they can originate on charges and so there is a source term on the right-hand side $div \mathbf{E} = \frac{ \rho_e }{ \varepsilon_0 }$, where E is the electric field strength and ρe is the electric charge density. [...] Maxwell developed his solution in 1861–1862 in a series of papers entitled ‘On physical lines of force’. Since his earlier work on the analogy between u and B, he had become more and more convinced that magnetism was essentially rotational in nature. His aim was to devise a model for the medium filling all space which could account for the stresses that Faraday had associated with magnetic lines of force—in other words, a mechanical model for the aether, which was assumed to be the medium through which light was propagated. [...] The model was based upon the analogy between a rotating vortex tube and a tube of magnetic flux. The analogy comes about as follows. If left on their own, magnetic field lines expand apart, exactly as occurs in the case of a fluid vortex tube, if the rotational centrifugal forces are not balanced. [...] Maxwell began with a model in which all space is filled with vortex tubes.
There is, however, an immediate mechanical problem. Friction between neighbouring vortices would lead to their disruption. Maxwell adopted the practical engineering solution of inserting ‘idle wheels’, or ‘ball–bearings’, between the vortices so that they could all rotate in the same direction without friction. Maxwell's published picture of the vortices, represented by an array of rotating hexagons, is shown in figure 2. He then identified the idle wheels with electric particles which, if they were free to move, would carry an electric current as in a conductor. In insulators, including free space, they would not be free to move through the distribution of vortex tubes and so could not carry an electric current. I have no doubt that this rather extreme example of the technique of working by analogy was a contributory cause to ‘a feeling of discomfort, and often even of mistrust,…’ to which Poincaré alluded when French mathematical physicists first encountered the works of Maxwell. Remarkably, this mechanical model for the aether could account for all known phenomena of electromagnetism. As an example of induction, consider the effect of embedding a second wire in the magnetic field of a wire carrying a current I. If the current is steady, there is no change in the current in the second wire. If, however, the current changes, a rotational impulse is communicated through the intervening idle wheels and vortices and a reverse current is induced in the second wire. Part III of the paper contains the flash of genius which led to the discovery of the complete set of Maxwell's equations. He now considered how insulators store electrical energy. He made the assumption that, in insulators, the idle wheels, or electric particles, can be displaced from their ‘fixed’ equilibrium positions by the action of an electric field. He then attributed the electrostatic energy in the medium to the elastic potential energy associated with the displacement of the electric particles. 
In his subsequent paper of 1865, he refers to this key innovation as electric elasticity. [...] Notice that, even in a vacuum for which $\mu = 1$, $\epsilon = 1$, the speed of propagation of the waves is finite, $c = (\epsilon_0 \mu_0)^{-1/2}$. Maxwell used Weber and Kohlrausch's experimental values for the product $\epsilon_0 \mu_0$ and found, to his amazement, that c was almost exactly the speed of light. In Maxwell's letters of 1861 to Michael Faraday and William Thomson, he showed that the values agreed within about 1%. In Maxwell's own words, with his own emphasis, in the third part of his series of papers in the Philosophical Magazine:

"The velocity of transverse undulations in our hypothetical medium, calculated from the electro–magnetic experiments of MM. Kohlrausch and Weber, agrees so exactly with the velocity of light calculated from the optical experiments of M. Fizeau that we can scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena."

This remarkable calculation represented the unification of light with electricity and magnetism. Maxwell was fully aware of the remarkable mechanical basis for his model of the vacuum which had provided the inspiration for his discovery. As he wrote, "The conception of a particle having its motion connected with that of a vortex by perfect rolling contact may appear somewhat awkward. I do not bring it forward as a mode of connection existing in Nature … It is however a mode of connection which is mechanically conceivable and it serves to bring out the actual mechanical connections between known electromagnetic phenomena." No one could deny Maxwell's virtuosity in determining the speed of light from his mechanical model of electromagnetic forces.

From this historic perspective, we can note the following:

• Maxwell deduced that magnetism was essentially rotational in nature.
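Maxwell's numerical comparison can be repeated with modern values of the constants (this sketch is not in the source; the CODATA-style values below are inputs of the example, not quoted from it):

```python
# Recomputing c = (eps0 * mu0)^(-1/2) with modern values of the constants,
# repeating Maxwell's comparison against the measured speed of light.
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0 = 4e-7 * math.pi      # vacuum permeability, H/m (pre-2019 defined value)
c = (eps0 * mu0) ** -0.5
print(c)  # ~2.99792458e8 m/s, the speed of light
```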
• Since the magnetic field was associated with "lines of force" and magnetism was rotational in nature, he modelled the magnetic field as consisting of vortex tubes and postulated that all space is filled with these vortex tubes. In other words: Maxwell modelled the magnetic field of a single permanent magnet as consisting of a number of vortex tubes and NOT as a single vortex.

• Within this tube model, he ran into a problem with friction, which he solved by inserting ‘idle wheels’, or ‘ball–bearings’, between the vortices postulated to fill all space.

• He identified the idle wheels with electric particles, which could move freely in conductors and thus carry an electric current, while they would not be free to move in an insulator, including the vacuum, which could thus not carry an electric current.

• He introduced the concept of "displacement of [bound] electric particles", to which he associated a "displacement current", to arrive at a model describing electric elasticity, in conductors as well as in insulators including the vacuum.

• The combination of mechanical momentum, modelled as multiple vortex tubes, and mechanical elasticity, modelled as displacement of [bound] electric particles, constitutes the two basic requirements for sustaining oscillations and/or waves: momentum and elasticity.

• When he worked this out further, he noted that the transverse waves he had described this way turned out to propagate at the speed of light, which connected the phenomenon of light with that of electromagnetism.

Let us first consider his magnetic model. While he realised the rotational nature of magnetism, he chose not to explicitly describe magnetism as vortexes in a fluid-like medium, but rather as the result of a (large) number of vortexes postulated to fill all space.
Essentially, he abstracted the actual vorticity of magnetism away by describing magnetism as a field B, which describes the force that would act on a charged particle or magnet entering the "field", without any connection whatsoever to how these forces arise from the medium acting upon the charged particle or magnet. As an analogy, we can think of the force exerted by a steady-state airflow (wind) on the sail of a boat. In this analogy, Maxwell's description does not describe how the air flows along the sail and how this results in a force. He essentially describes it as: the force F acting on a sail with surface area A at location X in the "air flow field" B is given as F(X) = B(X) * A.

It is this abstracting away from the underlying physical model which shaped 20th century physics. With the description of physical phenomena in terms of mathematical field equations, all connection to the underlying models was lost and replaced by an abstract field model. Freeman Dyson expressed this as follows (as quoted by Longair):

"Maxwell's theory becomes simple and intelligible only when you give up thinking in terms of mechanical models. Instead of thinking of mechanical objects as primary and electromagnetic stresses as secondary consequences, you must think of the electromagnetic field as primary and mechanical forces as secondary. The idea that the primary constituents of the universe are fields did not come easily to the physicists of Maxwell's generation. Fields are an abstract concept, far removed from the familiar world of things and forces. The field equations of Maxwell are partial differential equations. They cannot be expressed in simple words like Newton's law of motion, force equals mass times acceleration. Maxwell's theory had to wait for the next generation of physicists, Hertz and Lorentz and Einstein, to reveal its power and clarify its concepts. The next generation grew up with Maxwell's equations and was at home in a universe built out of fields.
The primacy of fields was as natural to Einstein as the primacy of mechanical structures had been to Maxwell."

As we saw, however, Maxwell's equations describe only one type of electromagnetic radiation, transverse waves, while at least two types of waves are known to exist, namely the near and far fields. One of the arguments against the existence of a fluid-like aether is that transverse waves cannot propagate through a fluid. They can only propagate along the surface of a fluid or along the boundary between two media with different densities, such as a water-air boundary. And actually, water waves are a combination of both longitudinal and transverse motions, which can be beautifully animated:

Water waves are an example of waves that involve a combination of both longitudinal and transverse motions. As a wave travels through the water, the particles travel in clockwise circles. The radius of the circles decreases as the depth into the water increases. The animation [below] shows a water wave travelling from left to right in a region where the depth of the water is greater than the wavelength of the waves. I have identified two particles in orange to show that each particle indeed travels in a clockwise circle as the wave passes.

As we saw, in the everyday practice of electrical engineering, the near field is what can be simulated using Maxwell's equations, while a post-processing (transformation) step is required to compute the far field. And while in general the far field is considered to be a transverse wave also, this cannot be the case within a fluid-like aether model. In other words: both a fluid-like aether model and everyday electrical engineering practice predict that the near field is an actual transverse wave, while the far field has a thus far unknown nature, although it propagates at speeds up to the speed of light, possesses a magnetic component and is quantized.
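The depth dependence of the circular particle orbits quoted above can be sketched numerically (this example is not from the source; the deep-water result from linear wave theory, an orbit radius decaying as e^(kz), is an assumption of the sketch, as are the amplitude and wavelength values):

```python
# Sketch of deep-water wave particle orbits (not from the original text):
# in linear wave theory the orbit radius at mean depth z (with z <= 0)
# is a * exp(k * z), so the circles shrink exponentially with depth.
import math

a = 0.5                        # surface orbit radius in metres (assumed)
wavelength = 10.0              # metres (assumed)
k = 2.0 * math.pi / wavelength # wavenumber

def orbit_radius(z):
    """Orbit radius at mean depth z <= 0 (z = 0 is the surface)."""
    return a * math.exp(k * z)

for z in (0.0, -2.0, -5.0, -10.0):
    print(f"depth {z:6.1f} m -> orbit radius {orbit_radius(z):.4f} m")
```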
So, if we can identify a type of wave that has these properties and can also propagate through a fluid-like medium, we may be able to make a model for the far field as well. Note that the circles in the 2D animation would be long, stretched vortexes in 3D, which would correlate directly with Maxwell's vortex tubes, were it not for the ‘idle wheels’, or ‘ball–bearings’, he introduced to his model in order to avoid perceived problems with friction. Also note that the distance between the rotating particles varies with time, which requires a compressible medium. In other words: if we replace Maxwell's incompressible aether with a compressible one, we can avoid the friction problem altogether in an elegant and natural way. Doing so would also predict a third type of wave to exist in the aether, namely longitudinal waves, which would not have a magnetic component and would therefore not be electromagnetic but dielectric in nature.

Now let us consider the rotational nature of the magnetic field of a permanent magnet. This can be made visible by placing a permanent magnet under water and using it as an electrode in an electrolysis process. On YouTube, several examples of such an experiment can be found (1, 2, 3), which show that the magnetic field of a permanent magnet actually has a vortex nature. Since these are permanent magnets connected to a DC power supply, it is hard to imagine a transverse electromagnetic wave propagating along the surface of the magnet being responsible for inducing the vortex, because that would also generate electromagnetic radiation, which would probably have been detected by now. If we assume this to be correct, then if the nature of the magnetic field were as in Maxwell's multi-vortex-tube model, we would not expect a vortex to appear in the water. For this reason, we must reject Maxwell's multi-vortex model filling all space.
Another reason to do so has to do with the ‘idle wheels’, or ‘ball–bearings’, and especially their identification with electric particles, such as we now know the electron to be. While the association of the flow of electric current with either bound or free electric particles would be pretty accurate in the case of (semi)conductors and insulators, it would be hard to imagine ‘ball–bearing’ electrons being bound in vacuum, and therefore we have to reject the friction-resolving ‘ball–bearings’ and with them the model of associating magnetism with vortex tubes in an incompressible medium. In other words: the only way to resolve the friction problem while maintaining the rotational nature of magnetism is to postulate the aether to be a compressible medium. With such a postulate, we have little choice but to directly associate the concept of electric elasticity with the compressibility of the aether itself, thus associating Maxwell's "displacement current" concept with currents flowing within the aether itself instead of being "carried" by electric particles. Thus, we would no longer consider the concept of "charge" to be a physical property of matter, which would allow us to resolve the "non sequitur" recursion problem we encountered earlier in an elegant and intuitive way. So far, we have found ample reasons to continue our effort by considering the medium to be akin to a compressible fluid, although we have not yet solved the problem of what kinds of waves could make up the far field, consisting of some kind of distinctly distinguishable entities, which sometimes propagate at the speed of light (light, electromagnetic radiation) and sometimes at slower speeds (matter, particles).
For now, we just note that:

• Vortex rings do exist and can propagate through a fluid-like medium.

• We can conceive of structures consisting of multiple vortexes (courtesy Nassim Haramein).

• Haramein's "string theory" clearly suggests rotational movements play an important role within 3D wave propagation, etc.

With this in mind, we also note that David LaPoint performed some fascinating experiments in the laboratory, which show in various ways how magnetic forces play an indispensable function in the Universe, from the nano-scale all the way up to galaxies and beyond. The link starts the video at about 19 minutes in, where it is shown and said that a plasma spins under the influence of a magnetic field, which confirms that magnetic forces are indeed rotational in nature.

End of part I.

What's next? If you enjoyed reading this, or would like to support my next project, please consider making a donation, so I can buy a VNWA vector analyser, needed for the continuation of my #research. Sneak preview of what will become part 2 of this investigation.
https://electronics.stackexchange.com/questions/259098/programming-a-sram-with-arduino
# Programming a SRAM with Arduino

I'm currently trying to test some old NVSRAMs to make sure they work before I put them to use. I'd like to test them independently and I'm thinking an Arduino might do the trick. In theory, should an Arduino Mega be able to handle this? PORTC = DQ0–DQ7, PORTA = A0–A7.

```cpp
#define E 2
#define G 3
#define W 4

void fillOne();

void setup() {
  Serial.begin(9600);
  pinMode(E, OUTPUT);
  pinMode(G, OUTPUT);
  pinMode(W, OUTPUT);
  DDRA = B11111111; // sets port A to output
  DDRC = B11111111; // sets port C to output
  digitalWrite(E, HIGH);
  digitalWrite(G, HIGH);
  digitalWrite(W, HIGH);
}

void loop() {
  fillOne();
}

// Test One
void fillOne() {
  int fail = 0;
  PORTC = B11111111; // sets data lines to one
  Serial.print("Test Start");
  {
    digitalWrite(W, LOW);
    digitalWrite(E, LOW);
    digitalWrite(W, HIGH);
  }
  // Read back and check for one
  digitalWrite(W, HIGH);
  digitalWrite(E, HIGH);
  PORTC = 0x00;
  DDRC = 0x00;
  {
    digitalWrite(G, LOW);
    digitalWrite(E, LOW);
    if (PORTC != B11111111) fail++;
    digitalWrite(G, HIGH);
  }
  if (fail > 0) {
    Serial.print("FAIL");
    Serial.print(fail);
    while (1);
  }
  Serial.print("PASS");
  while (1);
}
```

Update: I set it up to only use A0–A7 until I get it working, then will expand to the other bits. I'm also only programming it with 1's so it's easier to debug for now. It's not quite working; I think it's the addressing. I have the address as an int and it's going from 0 to 256. If I send the int to a PORT, it should send out that number in binary, correct?

• If it's got enough I/O pins for all the address, data and control signals, yes of course. If it hasn't, you need to add more (perhaps using shift registers to store address bits) – Brian Drummond Sep 21 '16 at 16:03

• For what it's worth: if the modules are really old, the batteries are likely dead. – duskwuff -inactive- Sep 21 '16 at 16:20

• This is another question but I'm not that familiar with memory. What does the "valid" mean in the datasheet I linked.
E.g., "Address valid to output valid". I'm working on the timing and will upload some code in a bit for clarity. – hybridchem Sep 21 '16 at 17:42

• 'Valid' means just what it says: output data will be correct when the address has been stable for a certain period of time (much less time than the Arduino will take between setting up the address and reading the data). – Bruce Abbott Sep 21 '16 at 18:35

• Testing one bit at a time or in bytes will work, but either way you must test every bit for storing both '1' and '0'. So write alternating patterns to each address, e.g. 0x55 and 0xAA. – Bruce Abbott Sep 23 '16 at 20:31
https://www.gradesaver.com/textbooks/math/calculus/thomas-calculus-13th-edition/chapter-14-partial-derivatives-section-14-1-functions-of-several-variables-exercises-14-1-page-787/27
## Thomas' Calculus 13th Edition

a. Domain: the set of all $(x,y)$ satisfying $-1 \leq y-x \leq 1$.
b. Range: $\dfrac{-\pi}{2} \leq z \leq \dfrac{\pi}{2}$.
c. The level curves are the straight lines of the form $y-x=c$ with $-1 \leq c \leq 1$.
d. The boundary consists of the two straight lines $y=1+x$ and $y=x-1$.
e. Closed.
f. Unbounded.
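The answers above are consistent with the function being f(x, y) = sin⁻¹(y − x), which is an assumption here since the exercise statement is not quoted. A quick numerical check of the domain and range:

```python
# Quick check (assuming the exercise's function is f(x, y) = arcsin(y - x),
# which is not quoted above): asin is only defined for -1 <= y - x <= 1
# and returns values in [-pi/2, pi/2].
import math

def f(x, y):
    return math.asin(y - x)

print(f(0.0, 1.0))   # pi/2, on the boundary line y = 1 + x
print(f(1.0, 0.0))   # -pi/2, on the boundary line y = x - 1
try:
    f(0.0, 1.5)      # y - x = 1.5 lies outside the domain
except ValueError as e:
    print("outside domain:", e)
```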
https://www.biostars.org/p/9475438/
superimpose

0 0 Entering edit mode 14 months ago Alex ▴ 20

Hello, I'm completely new to bioinformatics. I'm trying to superimpose in ds discovery and can't seem to quite figure it out. Would appreciate your help. TIA. I'm using v21.1 discovery ds visualizer

• 766 views

4 Entering edit mode

Asking for help the way you did it is like going to a doctor and saying that something hurts somewhere in the body.

0 Entering edit mode

No need to be so rude. If you can't help, just move along.

0 Entering edit mode

You just asked one of the most helpful people on the forum to move along. Sure, the comment was a bit tongue in cheek, but I am fairly certain Dr. Dlakic did not mean to offend you. Once again: if you could explain your problem better, it's possible you'd receive some assistance here.

0 Entering edit mode

Yeah, real helpful! I had already mentioned I was new to this; he could just have asked me to clarify and I would have done so. This sarcasm isn't helpful, it just wastes my time. And yeah, it was pretty rude too. Thought this forum was supposed to be helpful; maybe I was wrong.

1 Entering edit mode

I understand the comment was frustrating, and I understand it's all the more tough when you're looking for help, feeling lost, and you end up with a response of this sort. And I am sorry that you're having to deal with all of this, under whatever circumstances it is you are in right now. But Mensur Dlakic is a being too, and they have their ups and downs also; they are not obligated to respond in a precisely helpful manner at all times. Nor are they required to refrain from being a bit cheeky in a post here and there. (I would argue that they were still pointing out that it would help if you provided more details. Also, you are assuming it was sarcasm there. Why?) All that said, I've asked you twice to provide more details. Unless you're just an angry troll, why didn't you take up those opportunities to improve your chances of getting assistance here?
I presume "ds discovery" is alluding to the tool from BIOVIA. I am unfortunately not familiar with it. I quickly tried installing the Discovery Studio Visualizer tool, but that was a pain, and I presume it cannot do the "superimposing" you're asking for. So I can't help you here, sorry. However, it appears there is a dedicated forum hosted by Dassault for this tool (and others) here. Did you give that forum a try?

1 Entering edit mode

I am very well aware that NO one is obligated to reply, I just asked for help and that's all. If you can't, no need to leave such comments that are just unproductive. Anyways, I'm not here to fight and I'm certainly not a troll. I already found my answer on my own. But thanks for your help anyways.

1 Entering edit mode

I'm glad to hear that you've found a solution. And it's good to know you're not looking to argue (nor a troll). I wish you good luck with your work!!

1 Entering edit mode

And yes, I was talking about the BIOVIA discovery. I thought it was pretty common (again, new to this); that is why I didn't feel the need to clarify (mistake on my part). It does have superimposing. All you have to do is load your structures (in PDB format) in the same tab, go to Structure > Superimpose > By Center of Geometry, and it will superimpose, for anyone wondering.

0 Entering edit mode

What are "ds discovery", "TIA", and "v21.1"? The clearer your question, the better the chances of someone being able to assist you.
http://www.alecjacobson.com/weblog/?tag=graphics
## Posts Tagged ‘graphics’

### Rig Animation with a Tangible and Modular Input Device preprint + video

Thursday, May 5th, 2016

We’ve put up a project page and a preprint of our new SIGGRAPH 2016 paper “Rig Animation with a Tangible and Modular Input Device”, joint work with Oliver Glauser, Wan-Chun Ma, Daniele Panozzo, O. Hilliges and O. Sorkine-Hornung. This is not just version 2.0 of our tangible and modular input device from 2014 (although the new hardware is totally awesome). In this paper we also present a new optimization for mapping joints and splitters to any industry-grade character rig. The optimization will output instructions for a device to construct out of parts and then map those degrees of freedom to all parameters of the rig.

Abstract: We propose a novel approach to digital character animation, combining the benefits of tangible input devices and sophisticated rig animation algorithms. A symbiotic software and hardware approach facilitates the animation process for novice and expert users alike. We overcome limitations inherent to all previous tangible devices by allowing users to directly control complex rigs using only a small set (5-10) of physical controls. This avoids oversimplification of the pose space and excessively bulky device configurations. Our algorithm derives a small device configuration from complex character rigs, often containing hundreds of degrees of freedom, and a set of sparse sample poses. Importantly, only the most influential degrees of freedom are controlled directly, yet detailed motion is preserved based on a pose interpolation technique. We designed a modular collection of joints and splitters, which can be assembled to represent a wide variety of skeletons. Each joint piece combines a universal joint and two twisting elements, allowing to accurately sense its configuration. The mechanical design provides a smooth inverse kinematics-like user experience and is not prone to gimbal locking.
We integrate our method with the professional 3D software Autodesk Maya® and discuss a variety of results created with characters available online. Comparative user experiments show significant improvements over the closest state-of-the-art in terms of accuracy and time in a keyframe posing task.

### Preprint for “Linear Subspace Design for Real-Time Shape Deformation”

Wednesday, May 6th, 2015

Our upcoming SIGGRAPH 2015 paper “Linear Subspace Design for Real-Time Shape Deformation” is online now. This is joint work with Yu Wang, Jernej Barbič and Ladislav Kavan. One way to interpret our contribution is as a method for computing shape-aware (rather than cage-aware) generalized barycentric coordinates without a cage. Another way to interpret it is as a generalization of linear blend skinning to incorporate transformations of regions/bones and translations at point handles. The distinguishing characteristic of our weight computation is its speed. The choice of handles (the choice of linear subspace) is now interactive and becomes part of the design process. It absorbs user-creativity just as manipulating the handles does.

### Tangible and Modular Input Device for Character Articulation at Emerging Technologies, SIGGRAPH 2014

Friday, May 2nd, 2014

My colleagues Daniele Panozzo, Oliver Glauser, Cedric Pradalier, Otmar Hilliges, Olga Sorkine-Hornung and I will be presenting our new input device at the Emerging Technologies demo hall at SIGGRAPH 2014. We put our extended abstract on our project page. See you at e-tech!

### Ambient occlusion proof of concept demo in libigl

Tuesday, October 8th, 2013

The libigl “extra” for the Embree ray tracing library made it super easy to whip up an ambient occlusion demo. Check out the example in the libigl source under libigl/examples/ambient-occlusion (version ≥ 0.3.3). The demo just shoots rays in random directions in the hemisphere of each mesh vertex and aggregates a hit ratio.
Then it colors the mesh with these values per-vertex using GL_COLOR_MATERIAL. The program shoots a few more rays per point every draw frame (except when the user is dragging the camera around). There are plenty of ways to be fancier about ambient occlusion, but this demonstrates the basic idea.

### Determine boundary faces from tetrahedral mesh

Wednesday, April 20th, 2011

Here’s a MATLAB function that takes a list of tetrahedron indices (4 indices to a row) and finds the triangles that are on the surface of the volume. It does this simply by finding all of the faces that only occur once.

```matlab
function F = boundary_faces(T)
  % BOUNDARY_FACES
  % F = boundary_faces(T)
  % Determine boundary faces of tetrahedra stored in T
  %
  % Input:
  %   T  tetrahedron index list, m by 4, where m is the number of tetrahedra
  %
  % Output:
  %   F  list of boundary faces, n by 3, where n is the number of boundary faces
  %

  % get all faces
  allF = [ ...
    T(:,1) T(:,2) T(:,3); ...
    T(:,1) T(:,3) T(:,4); ...
    T(:,1) T(:,4) T(:,2); ...
    T(:,2) T(:,4) T(:,3)];
  % sort rows so that faces are reordered in ascending order of indices
  sortedF = sort(allF,2);
  % determine uniqueness of faces
  [u,m,n] = unique(sortedF,'rows');
  % determine counts for each unique face
  counts = accumarray(n(:), 1);
  % extract faces that only occurred once
  sorted_exteriorF = u(counts == 1,:);
  % find in original faces so that ordering of indices is correct
  F = allF(ismember(sortedF,sorted_exteriorF,'rows'),:);
end
```

With this you can easily determine the vertices of a tetmesh that are on the boundary:

```matlab
% get boundary faces
F = boundary_faces(T);
% get boundary vertices
b = unique(F(:));
subplot(1,2,1);
% plot boundary positions
plot3(V(b,1),V(b,2),V(b,3),'.');
subplot(1,2,2);
% plot just boundary faces
trisurf(F,V(:,1),V(:,2),V(:,3),'FaceAlpha',0.3)
```

### Rotate a point around another point

Tuesday, February 22nd, 2011

Composing rotations and translations is one of the most important operations in computer graphics.
One useful application is the ability to compose rotations and translations to rotate a point around another point. Here’s how I learned to do this. I find this method the most intuitive.

1. Translate so that the other point is at the origin
2. Rotate about the origin by however much you want
3. Translate back so that the other point is back to its original position

This works out nicely with matrix math since rotations around the origin are easily stored as 2×2 matrices:

    Rotation around origin by θ:  / cos θ  -sin θ \
                                  \ sin θ   cos θ /

If we call that matrix R, then we can write the whole operation that rotates a point, a, around another point, b, as: R*(a-b) + b. Be careful to note the order of operations: (a-b) corresponds to step 1, then left-multiplying with R is step 2, and finally adding back b is step 3.

One convenient fact appears when we look at the transformation as a whole, T(a): a → R*(a-b) + b. Because T is just the composition of rotations and translations it can be decomposed into a single rotation followed by a single translation. Namely, T(a): a → R*(a-b) + b may be re-written as T(a): a → S(a) + t, for some rotation matrix S and some translation vector t. To see this just use the distributive law: R*(a-b) + b = R*a − R*b + b, then S = R and t = b − R*b. So that gives a new derivation of the transformation that rotates a point, a, around another point, b: R*a − R*b + b.

Building an intuition as to why this works is a little tricky. We have just seen how it can be derived using linear algebra, but actually seeing each step of this version is elusive. It turns out it helps to make a subtle change. Reverse the last two terms, so that you have: R*a + (b − R*b). Now intuitively we can follow the order of operations and build an intuition:

1. Rotate the point about the origin (since the other point is not the origin we’ve “missed” where we should have been rotated to, by a certain error amount)
2. Rotate the other point about the origin by the same amount
3. Translate by the difference between the original other point and the rotated other point (this is the error amount, because we know that rotating the other point about itself shouldn’t change its position)

Update: Here’s an image comparing these two compositions:

### Barycentric coordinates and point-triangle queries

Friday, February 4th, 2011

Recently I needed to test whether a bunch of points were on a given triangle in 3D. There are many ways of querying for this, but I found one that was for me both intuitive and convenient. The method uses barycentric coordinates. Given a triangle made up of points A, B, and C you can describe any point, p, inside the triangle as a linear combination of A, B, and C: p = λa*A + λb*B + λc*C. That is to say, p can be reproduced as a weighted average of A, B and C, where the λs are the weights. One way to realize this is true is to notice that a triangle can be defined by two vectors, say A→B and A→C, and as long as these two vectors are not parallel we know from linear algebra that we can span the whole plane that they define (including the triangle A, B, C which lies on that plane).

(λa, λb, λc) are called the “barycentric coordinates” of p. Another (more or less unused) name for these is the “area coordinates” of p. This is because λa, λb, and λc are very conveniently defined as:

λa = area(B,C,p)/area(A,B,C)
λb = area(C,A,p)/area(A,B,C)
λc = area(A,B,p)/area(A,B,C)

An important quality of barycentric coordinates is that they partition unity, that is: λa + λb + λc = 1. To see this is true, notice that these sub triangles exactly cover the area of the whole triangle, and since we divide by that whole triangle’s area we are normalizing the sub triangle areas so that they must sum to one: the barycentric coordinates are the ratios of the sub triangle areas to the whole triangle area.
Each sub-triangle corresponds to the original triangle corner that is not a corner of the sub-triangle. Thus, A corresponds to the sub-triangle B, C, p and so forth. As we move p inside the triangle ABC, the barycentric coordinates shift. Notice that all coordinates are positive as long as p is inside the triangle. If we move p onto an edge of the triangle, then the area of the sub-triangle corresponding to the corner opposite that edge becomes zero, so the barycentric coordinate of p corresponding to that corner is zero. If we move p onto a corner of the triangle, then the sub-triangle corresponding to that corner is the same (and thus has the same area) as the original triangle, so its corresponding barycentric coordinate is 1, the other areas and coordinates being 0. If we move p outside of the triangle, the total area covered by the sub-triangles will be greater than that of the original triangle. This is easy to see, as the sub-triangles always cover the original triangle. We could use this to test whether p is inside the triangle ABC, but there is actually a more elegant way that will be handy later. If instead of computing regular (non-negative) area we compute signed area, then even when we move p outside of ABC the areas will sum to the original area of the triangle ABC. One way to think of signed area is by looking at the orientation of the triangles. Start with p inside ABC. Imagine coloring the side of each triangle facing you green and the back side red. When we move p the triangles change shape, but as long as p stays inside the triangle we still see the same, green side. When we pull p outside the triangle, the sub-triangle on the edge we crossed gets flipped, and now we see its back, red side. Now we declare that if we can see the green side of a triangle its area is positive, and if we see the red side its area is negative.
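The signed-area idea just described gives a direct inside/on/outside test. Here is a small Python sketch, with my own naming, assuming a counter-clockwise triangle:

```python
# Inside/on/outside test via signed areas. Signed area is positive for a
# counter-clockwise triangle (the "green side") and negative for a clockwise
# one (the "red side").

def signed_area(a, b, c):
    return ((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])) / 2.0

def classify(p, A, B, C):
    """Return 'inside', 'on', or 'outside' for p vs. CCW triangle ABC."""
    sa = signed_area(B, C, p)
    sb = signed_area(C, A, p)
    sc = signed_area(A, B, p)
    if sa > 0 and sb > 0 and sc > 0:
        return 'inside'
    if sa < 0 or sb < 0 or sc < 0:
        return 'outside'
    return 'on'  # some sub-triangle degenerated to zero area

A, B, C = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)  # counter-clockwise
print(classify((0.2, 0.2), A, B, C))  # inside
print(classify((0.5, 0.0), A, B, C))  # on (edge AB)
print(classify((2.0, 2.0), A, B, C))  # outside

# The signed areas always sum to the original area, wherever p is:
p = (2.0, 2.0)
total = signed_area(A, B, C)
print(abs(signed_area(B, C, p) + signed_area(C, A, p)
          + signed_area(A, B, p) - total) < 1e-12)  # True
```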
Now, finally, notice that the red side of the flipped triangle when p is outside ABC is always covered exactly by the other two, green triangles. Thus the negative area is cancelled out and we are left with exactly the original ABC area. So if we use signed areas to compute the barycentric coordinates of p, we can determine whether p is inside, on, or outside the triangle by checking whether all coordinates are positive, any are 0, or any are negative. If our triangle is on the 2D plane, then this sums up everything we might want to know about p and a given triangle ABC. But if ABC is an arbitrary triangle in 3D and p is also an arbitrary point, then we may also want to know whether p is on the plane of the triangle ABC, or above or below it. We showed above that if p is on the plane of the triangle, then the total un-signed area of the sub-triangles is at least the original triangle area and the total signed area of the sub-triangles is always exactly equal to the area of the original triangle. However, if we pull p off the plane, the total un-signed area of the sub-triangles will necessarily be greater than the area of the original triangle. To see this, notice that by pulling p off the triangle we are making a tetrahedron pABC with a certain amount of volume, whereas before, pABC was flattened onto the base triangle ABC with no volume. Wherever p is in space it makes such a tetrahedron with ABC, and we can always flatten (project) it onto the plane of the triangle, where we know the total area of the flattened triangles must cover (be greater than) the original triangle ABC. Now finally notice that “flattening” a triangle can never increase the triangle’s area (imagine holding a paper triangle in front of your face: your eyes project it onto the plane which you see, and there’s no way of rotating the triangle so that the projected area, the area you see, is bigger than the actual area of the triangle).
Finally, we just showed that pulling p away from the plane of the triangle increases the magnitude of each sub-triangle’s area. We may also use this fact to see that the total signed area of these sub-triangles is only ever equal to the area of the original triangle if we keep p on the triangle’s plane. If we pull p above the triangle (off its front side) then the total signed area is always greater than the original area, and if we pull p below the triangle (off its back side) then the total signed area is always less than the original area. This is harder to see, and hopefully I will find a nice way of explaining it. It’s important to note that the areas we get when we pull p off of the plane of the triangle ABC can no longer partition unity, in the sense that if we use these areas as above to define λa, λb, and λc, then p will still equal λa*A + λb*B + λc*C, but λa + λb + λc will not equal 1.

### Triangle mesh filled contour in MATLAB
Monday, December 6th, 2010

Here is a MATLAB snippet that takes a 2D triangle mesh and a function and produces a pretty, filled contour plot. If your mesh vertices are stored in V, and your triangles in F, then use my previously posted code to get the boundary/outline edges of your mesh in O.
Then you can create a contour plot of some function W defined over V using:

% resample W with a grid, THIS IS VERY SLOW if V is large
[Xr,Yr,Wr] = griddata(V(:,1),V(:,2),W,unique(V(:,1)),unique(V(:,2))');
% find all points inside the mesh, THIS IS VERY SLOW if V is large
IN = inpolygon(Xr,Yr,V(O,1),V(O,2));
% don't draw points outside the mesh
Wr(~IN) = NaN;
contourf(Xr,Yr,Wr)

And for a mesh like this one: You get something like this: You can also approach a continuous contour with this:

contourf(Xr,Yr,Wr,200)

Compare this to what you can get from the much faster:

trisurf(F,V(:,1),V(:,2),W,'FaceColor','interp','EdgeColor','interp')
view(2)

which produces: The only problem with this method is that the boundary looks a little nasty because of the resampling. Nothing that can’t be fixed quickly in Photoshop… Or even in MATLAB with something like:

line(V([O(:);O(1)],1),V([O(:);O(1)],2),'Color','k','LineWidth',6)

Which puts an ugly thick outline around the contour, but makes it look a little better…

### Find outline of triangle mesh and plot in MATLAB
Monday, December 6th, 2010

Here’s a little snippet to determine the edges along the boundary of a triangle mesh in MATLAB.
Given that your vertex values are in V and your triangle indices are in F, this will fill O with a list of the edges that make up the outline of your mesh:

% Find all edges in mesh, note internal edges are repeated
E = sort([F(:,1) F(:,2); F(:,2) F(:,3); F(:,3) F(:,1)]')';
% determine uniqueness of edges
[u,m,n] = unique(E,'rows');
% determine counts for each unique edge
counts = accumarray(n(:), 1);
% extract edges that only occurred once
O = u(counts==1,:);

Then you can view the outline in 2D with:

plot([V(O(:,1),1) V(O(:,2),1)]',[V(O(:,1),2) V(O(:,2),2)]','-')

and in 3D:

plot3([V(O(:,1),1) V(O(:,2),1)]',[V(O(:,1),2) V(O(:,2),2)]',[V(O(:,1),3) V(O(:,2),3)]','-')

### “Mixed Finite Elements for Variational Surface Modeling” project page
Monday, October 25th, 2010

I’ve finally put up a project page for the paper we presented this summer at SGP: “Mixed Finite Elements for Variational Surface Modeling” by Alec Jacobson, Elif Tosun, Olga Sorkine and Denis Zorin. So far I have the paper, slides from SGP, slides from my recent Disney Tech Talk at ETH, and some videos and images. Hopefully soon I will post some of the working MATLAB code base.
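Returning to the outline snippet above: the same boundary-edge idea can be written in a few lines of pure Python (a sketch with toy data, not part of the original posts). An edge is on the outline exactly when it appears in only one triangle:

```python
# Boundary edges of a triangle mesh: edges occurring in exactly one triangle.
# F is a list of triangles, each a tuple of vertex indices (my own toy data).
from collections import Counter

def outline(F):
    counts = Counter()
    for (a, b, c) in F:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1  # sort so a shared edge matches
    return sorted(e for e, n in counts.items() if n == 1)

# Two triangles glued along the diagonal (1, 2):
F = [(0, 1, 2), (1, 3, 2)]
print(outline(F))  # [(0, 1), (0, 2), (1, 3), (2, 3)] -- the shared edge is gone
```

Sorting each edge's endpoints plays the same role as the `sort(...)` call in the MATLAB version: it makes the two copies of an interior edge compare equal.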
http://wordpress.stackexchange.com/questions/43834/removing-the-advanced-menu-properties
As I'm often involved with people who don't know anything about WordPress, a CMS or even the internet :) I try to hide things irrelevant (for them) as much as possible. Normally this would involve removing a meta box here and there, but I have to admit, I'm sort of stuck on the WP menu system. Would anyone know perhaps a way to remove the fields marked in red (picture)? - This is a core function in wp-admin/includes/nav-menu.php. You can either hide the items with CSS or use the myEASYhider plugin here. IIRC, in order to actually override core functions, it must be done from a plugin rather than functions.php, so either way you'll be using a plugin that could potentially be turned off by the end user. Perhaps it would be better (if not easier) to train the client on what these functions do and when they will or won't be using them? - Having downloaded this plugin, I have to admit I'm having some mixed feelings about some solutions AND extras it offers. Also except from the things discussed in this amazing presentation codewise I don't see any real differences between using the functions.php or using a separate plugin. I agree it seems to be hardcoded though. In the end I went for the jQuery solution displayed below. (if someone knows if any enhancements to this snippet could be made I'd be more than happy to listen) –  Cor van Noorloos Feb 29 '12 at 18:15 Looking into it deeper, you may need to override wp_nav_menu_setup() instead to keep it from registering the advanced menu items in the first place. Could be worth a look, although probably not as quick as your jQuery solution below. Again, I'm not sure if functions.php will let you actually override this value - I've run into issues in the past with certain functions, but those sat in pluggable.php. –  SickHippie Feb 29 '12 at 18:40 Thank you for your reply. I do like to hide things using PHP instead. I'll make sure I'll get back on this. 
– Cor van Noorloos Feb 29 '12 at 18:44

As mentioned in a previous comment, I went for a jQuery solution:

add_action( 'admin_footer-nav-menus.php', 'cor_advanced_menu_properties' );

/**
 * Hides the nav-menus.php 'advanced menu properties'
 */
https://www.khanacademy.org/math/ap-calculus-ab/analyzing-functions-with-calculus-ab/concavity-and-points-of-inflection-review-ab/e/concavity-and-the-second-derivative
# Concavity & inflection points challenge

Review your understanding of concavity and inflection points with some challenge problems.

### Problem

The function f (not shown) is continuous and differentiable for all real numbers. The graphs of f′ and f″ are shown. Which of the following statements best describes what is happening on the graph of f at x = −2? Please choose from one of the following options.
https://www.doubtnut.com/question-answer-physics/a-steel-wire-of-cross-sectional-area-05mm2-is-held-between-two-fixed-supports-if-the-wire-is-just-ta-642596544
# A steel wire of cross-sectional area 0.5 mm² is held between two fixed supports. If the wire is just taut at 20°C, determine the tension when the temperature falls to 0°C. Coefficient of linear expansion of steel is 1.2 × 10⁻⁵ °C⁻¹ and its Young's modulus is 2.0 × 10¹¹ N m⁻².

Updated On: 27-06-2022

Text Solution

Given: A = 0.5 mm² = 0.5 × 10⁻⁶ m², T₁ = 20°C, T₂ = 0°C, α_s = 1.2 × 10⁻⁵ /°C, Y = 2 × 10¹¹ N/m².

The contraction the wire would undergo on cooling is
ΔL = L α Δθ ……(1)

Young's modulus relates stress and strain:
Y = stress/strain = (F/A)·(L/ΔL), so ΔL = FL/(AY) ……(2)

Tension develops because the contraction (1) is resisted by the fixed supports. Equating (1) and (2):

L α Δθ = FL/(AY)
⇒ F = α Δθ A Y
= 1.2 × 10⁻⁵ × (20 − 0) × 0.5 × 10⁻⁶ × 2 × 10¹¹
= 24 N
Transcript

Hello everyone, the question is: a steel wire of cross-sectional area 0.5 mm² is held between two fixed supports. If the wire is just taut at 20°C, determine the tension when the temperature falls to 0°C. The coefficient of linear expansion of steel is 1.2 × 10⁻⁵ per °C and its Young's modulus is 2 × 10¹¹ N/m². So we have a steel wire held between two fixed supports at 20°C, with cross-sectional area A = 0.5 mm², and we have to determine the tension in the wire when the temperature falls to 0°C. As the temperature drops, the wire tries to contract, but since it is fixed at its ends, tension develops in it. The change in length for a temperature drop from 20°C to 0°C would be ΔL = L α ΔT. The strain developed in the wire is the change in length over the original length, so strain = ΔL/L = α ΔT. From the relation Young's modulus = stress/strain we get stress = strain × Y, and since stress is the tension per unit area, F/A = α ΔT Y, so the tension is F = α ΔT A Y. Substituting the respective values: the coefficient of linear expansion of steel is 1.2 × 10⁻⁵ per °C, the change in temperature is 20°C − 0°C, the area is 0.5 mm² (converted to m² by multiplying by 10⁻⁶), and the Young's modulus is 2 × 10¹¹ N/m². Calculating this we get the tension as 24 N; a negative sign appearing in the length calculation only reflects that the wire tends to shorten, so the stress developed in it is tensile. Since the question asks for the tension, the answer is its magnitude, 24 N. Thank you.
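The arithmetic in the solution above is easy to verify with a few lines of Python:

```python
# Checking F = alpha * dT * A * Y with the given values.
alpha = 1.2e-5        # coefficient of linear expansion, per deg C
dT    = 20.0 - 0.0    # temperature drop, deg C
A     = 0.5e-6        # cross-sectional area, m^2
Y     = 2.0e11        # Young's modulus, N/m^2

F = alpha * dT * A * Y  # tension in the wire, N
print(F)                # about 24 N
```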
https://cstheory.stackexchange.com/tags/proof-complexity/hot
# Tag Info

## Hot answers tagged proof-complexity

21

$S^1_2$ is a theory of bounded arithmetic, that is, a weak axiomatic theory obtained by severely restricting the schema of induction of Peano arithmetic. It is one of the theories defined by Sam Buss in his thesis; other general references include Chapter V of Hájek and Pudlák’s Metamathematics of first-order arithmetic, Krajíček’s “Bounded arithmetic, ...

19

The most natural restriction on the proof DAG is that it be a tree – that is, any "lemma" (intermediate conclusion) is not used more than once. This property is called being "tree-like". General resolution is exponentially more powerful than tree-like resolution, as shown for example by Ben-Sasson, Impagliazzo and Wigderson. The concept has also been ...

19

The basic sum-of-squares proof system, introduced under the name of Positivstellensatz refutations by Grigoriev and Vorobjov, is a “static” proof system for showing that a set of polynomial equations and inequations $$S=\{f_1=0,\dots,f_k=0,h_1\ge0,\dots,h_m\ge0\},$$ where $f_1,\dots,f_k,h_1,\dots,h_m\in\mathbb R[x_1,\dots,x_n]$, has no common solution in ...

15

This is the same idea as Andrej's answer but with more details. Krajicek and Pudlak [LNCS 960, 1995, pp. 210-220] have shown that if $P(x)$ is a $\Sigma^b_1$-property that defines primes in the standard model and $$S^1_2 \vdash \lnot P(x) \to (\exists y_1,y_2)(1 < y_1, y_2 < x \land x = y_1y_2)$$ then there is a polynomial time factoring algorithm. ...

14

The following example comes from the paper which gives a combinatorial characterization of resolution width by Atserias and Dalmau (Journal, ECCC, author's copy). Theorem 2 of the paper states that, given a CNF formula $F$, resolution refutations of width at most $k$ for $F$ are equivalent to winning strategies for Spoiler in the existential $(k+1)$-pebble ...

13

1) The only non-structural rule is resolution (on atoms).
$$\frac{\varphi\lor C \qquad \psi\lor \overline{C}}{\varphi\lor \psi}$$

However a rule by itself doesn't give a proof system. See part 3.

2) Think about it this way: is Gentzen's sequent calculus PK complete if we are using some other set of connectives in place of $\{\land, \lor, \lnot\}$? The logical ...

13

1, 2, 4) The best known lower bounds on extended Frege are the same as for Frege: linear number of lines, and quadratic size. This applies e.g. to the tautologies $\neg^{2n}\top$ (basically, any tautology that is not a substitution instance of a shorter tautology, and whose sum of lengths of all subformulas is quadratic). This is proved in Krajíček’s Bounded ...

12

SOS can be considered as a proof system where lines are of the form $p(\vec{x}) \geq 0$, where $p(\vec{x})$ is a polynomial in the variables $\vec{x}$. The inference rules are:

$$\frac{}{x^2-x \geq 0} \qquad \frac{}{x-x^2 \geq 0} \qquad \frac{}{p(\vec{x})^2\geq 0} \qquad \frac{p(\vec{x}) \geq 0}{p(\vec{x})\,x \geq 0} \qquad \frac{p(\vec{x}) \geq 0}{p(\vec{x})(1-x) \geq 0}$$

$p_1(\vec{x}) \geq 0, \dots$

12

It depends on what kind of a "beginner" level you wish to have. I don't think there is a real good undergraduate level text on proof complexity (this is probably true for most specialized sub-areas in complexity). But for beginner (graduate level) sources, I would recommend something like understanding well the basic exponential size lower bound on ...

11

This example is a bit lower in the hierarchy than what Kaveh asks for, but it is an open problem whether the soundness of the uniform $\mathrm{TC}^0$ algorithms for integer division and iterated multiplication by Hesse, Allender, and Barrington can be proved in the corresponding theory $\mathit{VTC}^0$. The argument is pretty elementary, and there should be ...

10

What proof system is being considered when discussing resolution? Is it just the resolution rule? What are the other rules? I discuss resolution in the context of "clauses", which are sequents made up of only literals.
A classical clause would look like $$A_1,\ldots,A_n \to B_1,\ldots,B_m$$ but we can also write it as $$\{\} \to \bar{A}_1,\ldots,\bar{A}_n, \ldots$$

10

The AKS primality test seems like a good candidate if Wikipedia is to be believed. However, I would expect such an example to be hard to find. Existing proofs are going to be phrased so that they are obviously not done in bounded arithmetic, but they will likely be "adaptable" to bounded arithmetic with more or less effort (usually more).

10

How about the edge coloring number in a dense graph (aka chromatic index)? You are given the adjacency matrix of an $n$-vertex graph ($n^2$-bit input), but the natural witness describing the coloring has size $n^2\log n$. Of course, there might be shorter proofs for class 1 graphs in Vizing's theorem. See also this possibly related question.

10

It sounds like you are interested in all-different constraints (and your last sentence is on the right track). These are non-trivial instances of the pigeonhole principle, where the number of pigeons is not necessarily greater than the number of holes, and in addition some pigeons may be barred from some of the holes. All-different constraints can be ...

9

Natural examples of propositional proof systems that do not fall under this definition are algebraic proof systems where the lines in the proof are arbitrary polynomials (not necessarily fully expanded). To verify the correctness of such proofs, among other things one has to test the identity of polynomials, which is not known to be possible in deterministic ...

8

Müller and Szeider study Resolution proofs where the proof DAG has bounded tree-width or bounded path-width (for suitable extensions of these graph complexity measures to directed graphs). They show that the path-width of the DAG is essentially the same as the space complexity of the proof, and define a generalized notion of proof space which is equivalent ...

8

Here is an example, which appears to be a natural problem.
Instance: Positive integers $d_1,\ldots,d_n$ and $k$, all bounded from above by $n$.
Question: Does there exist a $k$-colorable graph with degree sequence $d_1,\ldots,d_n$?

Here the input can be described with $O(n\log n)$ bits, but the witness may require $\Omega(n^2)$ bits. Remark: I do not have ...

8

I came across some quite natural NP-complete problems that seemingly require long witnesses. The problems, parameterized by integers $C$ and $D$, are as follows:

Input: A one-tape TM $M$
Question: Is there some $n\in\mathbb{N}$ such that $M$ makes more than $Cn+D$ steps on some input of length $n$?

Sometimes the complement of the problem is easier to state: ...

8

It is over two years since this question was asked, but in that time there have been more papers published about algorithms for computing Craig interpolants. This is a very active research area and it is not feasible to give a comprehensive list here. I have chosen articles rather arbitrarily below. I would suggest following articles that reference them and ...

7

Let $m$ be the number of pigeons and $n$ be the number of holes. Let the propositional variables $B_{i,0}$ ... $B_{i,\log(n)}$ encode the binary representation of $j-1$ if the $i$th pigeon is put into the $j$th hole. (For example, if pigeon 1 were placed in hole 10, then $j - 1 = 9$, which is binary 1001, so $B_{1,3}$ = true, $B_{1,2}$ = false, $B_{1,1}$ = false and ...

6

I find these introductory lecture notes easy to read: Paul Beame's IAS Lectures

6

For the more algebraic side of proof complexity I recommend starting with Pitassi's 1996 survey paper: T. Pitassi, Algebraic propositional proof systems, in DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Volume 31, Descriptive Complexity and Finite Models, Immerman and Kolaitis (Eds.), pp. 215-244, 1996. For a quick overview you ...
6

Maybe this is a silly "reason/explanation", but for many NP-complete problems, a solution is a subset of the input (knapsack, vertex cover, clique, dominating set, independent set, max cut, subset sum, ...) or a permutation of or assignment to a subset of the input (Hamiltonian path, traveling salesman, SAT, graph isomorphism, graph coloring, ...). We could ...

6

$f$ is not a prover, it's a proof-checker. $w$ is the proof. And the polynomial is a polynomial in the length of the proof, which could be much larger than the length of the thing being proved. If you have a proof system for which checking whether something really is a proof (or figuring out what it's a proof of) takes more than polynomial time, then your ...

6

For strong enough proof systems the graph representation of a proof in the system seems less consequential, since (as Joshua Grochow already commented) DAG-like and tree-like Frege proofs are polynomially equivalent (see Krajicek's 1995 monograph for a proof of this fact). For weaker proof systems such as resolution, tree-like is exponentially weaker than ...

6

Cook-Reckhow propositional proof systems are nonuniform. E.g. the computational complexity counterpart to the class of polynomial-size $\mathsf{Extended Frege}$ proofs is the nonuniform complexity class $\mathsf{P/poly}$. We have to look at their uniform counterparts: e.g. the proof complexity counterpart for $\mathsf{P}$ are bounded arithmetic theories ...

6

First, ER p-simulates SR: for example, ER is p-equivalent to the extended Frege proof system (EF), which is p-equivalent to the substitution Frege proof system (SF), and it is easy to see that SF p-simulates SR (the symmetry rule amounts to substitution of a special kind). On the other hand, Urquhart [1] proves an exponential lower bound on SR refutations of ...

6

With the caveat that I am posting this quickly in a sleep-deprived state, I think the answer is "no" to all three questions.
Take the pigeonhole principle formulas $PHP^m_n$ for $m$ pigeons and $n$ holes. The minimal length of a resolution refutation for $m = n+1$ is $\exp(\Omega(n))$ by Haken. However, Buss and Pitassi proved that for $m = \exp(\sqrt{n \log n})$ pigeons ...

6

For each of these proof systems we know that there are some formulas where the shortest proof needs to have exponential length. Some of the earliest examples are an exponential lower bound for the pigeonhole principle in polynomial calculus (Razborov '98, IPS '99), and an exponential lower bound for the clique-colouring formula in cutting planes (Pudlák '99). ...

5

The most recent and up-to-date general-purpose proof complexity survey is probably that of Nathan Segerlind: Nathan Segerlind, The Complexity of Propositional Proofs, Bulletin of Symbolic Logic 13(4): 417-481, 2007 (http://www.math.ucla.edu/~asl/bsl/1304/1304-001.ps). And now, warnings for two shameless self plugs… An even more recent survey, but ...
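As an illustration of the resolution rule quoted in the answers above, here is a toy Python sketch; the clause encoding is my own and not taken from any of the cited sources:

```python
# The resolution rule on clauses: from (phi v C) and (psi v ~C), derive
# (phi v psi). A clause is a frozenset of literals; a literal is a
# (name, polarity) pair.

def resolve(c1, c2, atom):
    """Resolve two clauses on `atom`; return the resolvent, or None."""
    pos, neg = (atom, True), (atom, False)
    if pos in c1 and neg in c2:
        return frozenset(c1 - {pos}) | frozenset(c2 - {neg})
    if neg in c1 and pos in c2:
        return frozenset(c1 - {neg}) | frozenset(c2 - {pos})
    return None  # the clauses do not clash on this atom

# (x v y) and (~x v z) resolve on x to (y v z):
c1 = frozenset({('x', True), ('y', True)})
c2 = frozenset({('x', False), ('z', True)})
print(sorted(resolve(c1, c2, 'x')))  # [('y', True), ('z', True)]
```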
https://labs.tib.eu/arxiv/?author=Keith%20A.%20Nelson
• ### Terahertz-frequency magnon-phonon-polaritons in the strong coupling regime (1611.01814)

Strong coupling between light and matter occurs when the two interact strongly enough to form new hybrid modes called polaritons. Here we report on the strong coupling of both the electric and magnetic degrees of freedom to an ultrafast terahertz (THz) frequency electromagnetic wave. In our system, optical phonons in a slab of ferroelectric lithium niobate (LiNbO$_3$) are strongly coupled to a THz electric field to form phonon-polaritons, which are simultaneously strongly coupled to magnons in an adjacent slab of canted antiferromagnetic erbium orthoferrite (ErFeO$_3$) via the THz magnetic field. The strong coupling leads to the formation of new magnon-phonon-polariton modes, which we experimentally observe in the wavevector-frequency dispersion curve as an avoided crossing and in the time-domain as a normal-mode beating. Our simple yet versatile on-chip waveguide platform provides a promising avenue by which to explore both ultrafast THz spintronics applications and the quantum nature of the interaction.

• ### Real-time observation of a coherent lattice transformation into a high-symmetry phase (1609.04048)
Nov. 24, 2017 cond-mat.mtrl-sci

Excursions far from their equilibrium structures can bring crystalline solids through collective transformations including transitions into new phases that may be transient or long-lived. Direct spectroscopic observation of far-from-equilibrium rearrangements provides fundamental mechanistic insight into chemical and structural transformations, and a potential route to practical applications, including ultrafast optical control over material structure and properties. However, in many cases photoinduced transitions are irreversible or only slowly reversible, or the light fluence required exceeds material damage thresholds.
This precludes conventional ultrafast spectroscopy, in which optical excitation and probe pulses irradiate the sample many times at a high repetition rate, with each measurement providing information about the sample response at just one probe delay time following excitation and the sample fully recovering its initial state between measurements. Using a single-shot, real-time measurement method, we were able to observe the photoinduced phase transition from the semimetallic, low-symmetry phase of crystalline bismuth into a high-symmetry phase whose existence at high electronic excitation densities was predicted based on earlier measurements at moderate excitation densities below the damage threshold. Our observations indicate that coherent lattice vibrational motion launched upon photoexcitation with an incident fluence above 10 mJ/cm$^2$ in bulk bismuth brings the lattice structure directly into the high-symmetry configuration for tens of picoseconds, after which carrier relaxation and diffusion restore the equilibrium lattice configuration. • ### Development of Single-Shot Multi-Frame Imaging of Cylindrical Shock Waves in a Multi-Layered Assembly(1707.08940) Nov. 14, 2017 physics.app-ph We demonstrate single-shot multi-frame imaging of quasi-2D cylindrically converging shock waves as they propagate through a multi-layer target sample assembly. We visualize the shock with sequences of up to 16 images, using a Fabry-Perot cavity to generate a pulse train that can be used in various imaging configurations. We employ multi-frame shadowgraph and dark-field imaging to measure the amplitude and phase of the light transmitted through the shocked target. Single-shot multi-frame imaging tracks geometric distortion and additional features in our images that were not previously resolvable in this experimental geometry.
Analysis of our images, in combination with simulations, shows that the additional image features are formed by a coupled wave structure resulting from interface effects in our targets. This technique presents a new capability for tabletop imaging of shock waves that can be easily extended to experiments at large-scale facilities. • ### Time-domain Brillouin scattering for the determination of laser-induced temperature gradients in liquids(1702.01078) Oct. 17, 2017 cond-mat.mtrl-sci We present an optical technique based on ultrafast photoacoustics to precisely determine the local temperature distribution profile in liquid samples in contact with a laser-heated optical transducer. This ultrafast pump-probe experiment uses time-domain Brillouin scattering (TDBS) to locally determine the light scattering frequency shift. As the temperature influences the Brillouin scattering frequency, the TDBS signal probes the local laser-induced temperature distribution in the liquid. We demonstrate the relevance and the sensitivity of this technique for the measurement of the absolute laser-induced temperature gradient of a glass-forming liquid prototype, glycerol, at different laser pump powers - i.e. different steady-state background temperatures. Complementarily, our experiments illustrate how this TDBS technique can be applied to measure thermal diffusion in complex multilayer systems in contact with a surrounding liquid. • ### Nondiffusive thermal transport from micro/nanoscale sources producing nonthermal phonon populations exceeds Fourier heat conduction(1710.01468) Oct. 4, 2017 cond-mat.mes-hall We study nondiffusive thermal transport by phonons at small distances within the framework of the Boltzmann transport equation (BTE) and demonstrate that the transport is significantly affected by the distribution of phonons emitted by the source.
We discuss analytical solutions of the steady-state BTE for a source with a sinusoidal spatial profile, as well as for a three-dimensional Gaussian hot spot, and provide numerical results for single crystal silicon at room temperature. If a micro/nanoscale heat source produces a thermal phonon distribution, it gets hotter than predicted by the heat diffusion equation; however, if the source predominantly produces low-frequency acoustic phonons with long mean free paths, it may get significantly cooler than predicted by the heat equation, yielding an enhanced heat transport. • ### Observation of Bulk Fermi Arc and Polarization Half Charge from Paired Exceptional Points(1709.03044) The ideas of topology have found tremendous success in Hermitian physical systems, but even richer properties exist in the more general non-Hermitian framework. Here, we theoretically propose and experimentally demonstrate a new topologically-protected bulk Fermi arc which---unlike the well-known surface Fermi arcs arising from Weyl points in Hermitian systems---develops from non-Hermitian radiative losses in photonic crystal slabs. Moreover, we discover half-integer topological charges in the polarization of far-field radiation around the Fermi arc. We show that both phenomena are direct consequences of the non-Hermitian topological properties of exceptional points, where resonances coincide in their frequencies and linewidths. Our work connects the fields of topological photonics, non-Hermitian physics and singular optics, and paves the way for future exploration of non-Hermitian topological systems. • ### Machine Learning to Analyze Images of Shocked Materials for Precise and Accurate Measurements(1708.07261) Sept. 3, 2017 physics.app-ph A supervised machine learning algorithm, called locally adaptive discriminant analysis (LADA), has been developed to locate boundaries between identifiable image features that have varying intensities.
LADA is an adaptation of image segmentation, which includes techniques that find the positions of image features (classes) using statistical intensity distributions for each class in the image. In order to place a pixel in the proper class, LADA considers the intensity at that pixel and the distribution of intensities in local (nearby) pixels. This paper presents the use of LADA to provide, with statistical uncertainties, the positions and shapes of features within ultrafast images of shock waves. We demonstrate the ability to locate image features including crystals, density changes associated with shock waves, and material jetting caused by shock waves. This algorithm can analyze images that exhibit a wide range of physical phenomena because it does not rely on comparison to a model. LADA enables analysis of images from shock physics with statistical rigor independent of underlying models or simulations. • ### Rapid and Precise Determination of Zero-Field Splittings by Terahertz Time-Domain Electron Paramagnetic Resonance Spectroscopy(1702.06613) Aug. 14, 2017 physics.chem-ph Zero-field splitting (ZFS) parameters are fundamentally tied to the geometries of metal ion complexes. Despite their critical importance for understanding the magnetism and spectroscopy of metal complexes, they are not routinely available through general laboratory-based techniques, and are often inferred from magnetism data. Here we demonstrate a simple tabletop experimental approach that enables direct and reliable determination of ZFS parameters in the terahertz (THz) regime. We report time-domain measurements of electron paramagnetic resonance (EPR) signals associated with THz-frequency ZFSs in molecular complexes containing high-spin transition-metal ions.
We measure the temporal profiles of the free-induction decays of spin resonances in the complexes at zero and nonzero external magnetic fields, and we derive the EPR spectra via numerical Fourier transformation of the time-domain signals. In most cases, absolute values of the ZFS parameters are extracted from the measured zero-field EPR frequencies, and the signs can be determined by zero-field measurements at two different temperatures. Field-dependent EPR measurements further allow refined determination of the ZFS parameters and access to the g-factor. The results show good agreement with those obtained by other methods. The simplicity of the method portends wide applicability in chemistry, biology and material science. • ### Unifying first principle theoretical predictions and experimental measurements of size effects on thermal transport in SiGe alloys(1704.01386) July 4, 2017 cond-mat.mes-hall In this work, we demonstrate the correspondence between first principle calculations and experimental measurements of size effects on thermal transport in SiGe alloys. Transient thermal grating (TTG) is used to measure the effective thermal conductivity. The virtual crystal approximation under the density functional theory (DFT) framework combined with impurity scattering is used to determine the phonon properties for the exact alloy composition of the measured samples. With these properties, classical size effects are calculated for the experimental geometry of reflection mode TTG using the recently-developed variational solution to the phonon Boltzmann transport equation (BTE), which is verified against established Monte Carlo simulations. We find agreement between theoretical predictions and experimental measurements in the reduction of thermal conductivity (as much as $\sim$ 25\% of the bulk value) across grating periods spanning one order of magnitude. This work provides a framework for the tabletop study of size effects on thermal transport. 
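In the diffusive limit, a one-dimensional TTG signal decays exponentially at rate $\gamma = \alpha q^2$, where $q = 2\pi/L$ is the grating wavevector and $\alpha = k/C$ the thermal diffusivity, which is how a measured decay rate is turned into an effective thermal conductivity. The sketch below illustrates only this diffusive-limit bookkeeping; all numerical values are hypothetical (merely order-of-magnitude plausible for a SiGe alloy), not data from the paper:

```python
import numpy as np

def ttg_effective_conductivity(decay_rate, grating_period, vol_heat_capacity):
    """Effective thermal conductivity from a 1D TTG exponential decay rate.

    decay_rate        : measured thermal-grating decay rate gamma [1/s]
    grating_period    : TTG grating period L [m]
    vol_heat_capacity : volumetric heat capacity C [J/(m^3 K)]

    In the diffusive limit the grating amplitude decays as exp(-alpha q^2 t)
    with q = 2*pi/L, so alpha = gamma/q^2 and k = alpha*C.
    """
    q = 2.0 * np.pi / grating_period
    alpha = decay_rate / q**2          # thermal diffusivity [m^2/s]
    return alpha * vol_heat_capacity   # conductivity [W/(m K)]

# Hypothetical example values (NOT measured data):
C = 1.6e6        # J/(m^3 K), assumed volumetric heat capacity
L = 2e-6         # 2 micron grating period
gamma = 3.0e7    # assumed measured decay rate, 1/s
k_eff = ttg_effective_conductivity(gamma, L, C)
```

Comparing `k_eff` across grating periods against the bulk value is what exposes the non-diffusive suppression discussed in the abstract.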
• ### Direct Test of Supercooled Liquid Scaling Relations(1701.01310) Jan. 5, 2017 cond-mat.soft Diverse material classes exhibit practically identical behavior when made viscous upon cooling toward the glass transition, suggesting a common theoretical basis. The first-principles scaling laws that have been proposed to describe the evolution with temperature have yet to be appropriately tested due to the extraordinary range of time scales involved. We used seven different measurement methods to determine the structural relaxation kinetics of a prototype molecular glass former over a temporal range of 13 decades and over a temperature range spanning liquid to glassy states. For the material studied, our results comprise a comprehensive validation of the two scaling relations that are central to the fundamental question of whether supercooled liquid dynamics can be described universally. The ultrabroadband mechanical measurements demonstrated have fundamental and practical applications in polymer science, geophysics, multifunctional materials, and other areas. • ### Supersonic Impact of Metallic Micro-particles(1612.08081) Dec. 23, 2016 cond-mat.mtrl-sci Understanding material behavior under high velocity impact is the key to addressing a variety of fundamental questions in areas ranging from asteroid strikes and geological cratering to impact-induced phase transformations, spallation, wear, and ballistic penetration. Recently, adhesion has emerged in this spectrum since it has been found that micrometer-sized metallic particles can bond to metallic substrates under supersonic-impact conditions. However, the mechanistic aspects of impact-induced adhesion are still unresolved. Here we study supersonic impact of individual metallic microparticles on substrates with micro-scale and nanosecond-level resolution. 
This permits the first direct observation of a material-dependent threshold velocity, above which the particle undergoes impact-induced material ejection and adheres to the substrate. Our finite element simulations reveal that prevailing theories of impact-induced shear localization and melting cannot account for the material ejection. Rather, it originates from the propagation of a pressure wave induced upon impact. The experiments and simulations together establish that the critical adhesion velocity for supersonic microparticles is proportional to the bulk speed of sound. • ### Bose-Einstein Condensation of Long-Lifetime Polaritons in Thermal Equilibrium(1601.02581) Exciton-polaritons in semiconductor microcavities have been used to demonstrate quantum effects such as Bose-Einstein condensation, superfluidity, and quantized vortices. However, in these experiments, the polaritons have not reached thermal equilibrium when they undergo the transition to a coherent state. This has prevented the verification of one of the canonical predictions for condensation, namely the phase diagram. In this work, we have created a polariton gas in a semiconductor microcavity in which the quasiparticles have a lifetime much longer than their thermalization time. This allows them to reach thermal equilibrium in a laser-generated confining trap. Their energy distributions are well fit by equilibrium Bose-Einstein distributions over a broad range of densities and temperatures from very low densities all the way up to the threshold for Bose-Einstein condensation. The good fits of the Bose-Einstein distribution over a broad range of density and temperature imply that the particles obey the predicted power law for the phase boundary of Bose-Einstein condensation. • ### Thermal transport in suspended silicon membranes measured by laser-induced transient gratings(1610.01530) Oct. 29, 2016 cond-mat.mes-hall
Studying thermal transport at the nanoscale poses formidable experimental challenges due both to the physics of the measurement process and to the issues of accuracy and reproducibility. The laser-induced transient thermal grating (TTG) technique permits non-contact measurements on nanostructured samples without a need for metal heaters or any other extraneous structures, offering the advantage of inherently high absolute accuracy. We present a review of recent studies of thermal transport in nanoscale silicon membranes using the TTG technique. An overview of the methodology, including an analysis of measurement errors, is followed by a discussion of new findings obtained from measurements on both solid and nanopatterned membranes. The most important results have been a direct observation of non-diffusive phonon-mediated transport at room temperature and measurements of thickness-dependent thermal conductivity of suspended membranes across a wide thickness range, showing good agreement with first-principles-based theory assuming diffuse scattering at the boundaries. Measurements on a membrane with a periodic pattern of nanosized holes indicated fully diffusive transport and yielded thermal diffusivity values in agreement with Monte Carlo simulations. Based on the results obtained to date, we conclude that room-temperature thermal transport in membrane-based silicon nanostructures is now reasonably well understood. • ### Terahertz-driven Luminescence and Colossal Stark Effect in CdSe:CdS Colloidal Quantum Dots(1609.04643) Sept. 19, 2016 physics.optics Unique optical properties of colloidal semiconductor quantum dots (QDs), arising from quantum mechanical confinement of charge within these structures, present a versatile testbed for the study of how high electric fields affect the electronic structure of nanostructured solids.
Earlier studies of quasi-DC electric field modulation of QD properties have been limited by the electrostatic breakdown processes under the high externally applied electric fields, which have restricted the range of modulation of QD properties. In contrast, in the present work we drive CdSe:CdS core:shell QD films with high-field THz-frequency electromagnetic pulses whose duration is only a few picoseconds. Surprisingly, in response to the THz excitation we observe QD luminescence even in the absence of an external charge source. Our experiments show that QD luminescence is associated with a remarkably high and rapid modulation of the QD band-gap, which changes by more than 0.5 eV (corresponding to 25% of the unperturbed bandgap energy) within the picosecond time frame of the THz field profile. We show that these colossal energy shifts can be consistently explained by the quantum confined Stark effect. Our work demonstrates a route to extreme modulation of material properties without configurational changes in material sets or geometries. Additionally, we expect that this platform can be adapted to a novel compact THz detection scheme where conversion of THz fields (with meV-scale photon energies) to the visible/near-IR band (with eV-scale photon energies) can be achieved at room temperature with high bandwidth and sensitivity. • ### Two-dimensional terahertz magnetic resonance spectroscopy of collective spin waves(1605.06476) Nonlinear manipulation of nuclear and electron spins is the basis for all advanced methods in magnetic resonance including multidimensional nuclear magnetic and electron spin resonance spectroscopies, magnetic resonance imaging, and in recent years, quantum control over individual spins.
The methodology is facilitated by the ease with which the regime of strong coupling can be reached between radiofrequency or microwave magnetic fields and nuclear or electron spins respectively, typified by sequences of magnetic pulses that control the magnetic moment directions. The capabilities meet a bottleneck, however, for far-infrared magnetic resonances characteristic of correlated electron materials, molecular magnets, and proteins that contain high-spin transition metal ions. Here we report the development of two-dimensional terahertz magnetic resonance spectroscopy and its use for direct observation of the nonlinear responses of collective spin waves (magnons). The spectra show magnon spin echoes and 2-quantum signals that reveal pairwise correlations between magnons at the Brillouin zone center. They also show resonance-enhanced second-harmonic and difference-frequency signals. Our methods are readily generalizable to multidimensional magnetic resonance spectroscopy and nonlinear coherent control of terahertz-frequency spin systems in molecular complexes, biomolecules, and materials. • ### Nonlinear two-dimensional terahertz photon echo and rotational spectroscopy in the gas phase(1606.01622) Ultrafast two-dimensional spectroscopy utilizes correlated multiple light-matter interactions for retrieving dynamic features that may otherwise be hidden under the linear spectrum. Its extension to the terahertz regime of the electromagnetic spectrum, where a rich variety of material degrees of freedom reside, remains an experimental challenge. Here we report ultrafast two-dimensional terahertz spectroscopy of gas-phase molecular rotors at room temperature. Using time-delayed terahertz pulse pairs, we observe photon echoes and other nonlinear signals resulting from molecular dipole orientation induced by three terahertz field-dipole interactions. 
The nonlinear time-domain orientation signals are mapped into the frequency domain in two-dimensional rotational spectra which reveal J-state-resolved nonlinear rotational dynamics. The approach enables direct observation of correlated rotational transitions and may reveal rotational coupling and relaxation pathways in the ground electronic and vibrational state. • ### Variational Approach to Solving the Spectral Boltzmann Transport Equation in Transient Thermal Grating for Thin Films(1605.08007) May 25, 2016 cond-mat.mes-hall The phonon Boltzmann transport equation (BTE) is widely utilized to study non-diffusive thermal transport. We find a solution of the BTE in the thin film transient thermal grating (TTG) experimental geometry by using a recently developed variational approach with a trial solution supplied by the Fourier heat conduction equation. We obtain an analytical expression for the thermal decay rate that shows excellent agreement with Monte Carlo simulations. We also obtain a closed form expression for the effective thermal conductivity that demonstrates the full material property and heat transfer geometry dependence, and recovers the limits of the one-dimensional TTG expression for very thick films and the Fuchs-Sondheimer expression for very large grating spacings. The results demonstrate the utility of the variational technique for analyzing non-diffusive phonon-mediated heat transport for nanostructures in multi-dimensional transport geometries, and will assist the probing of the mean free path (MFP) distribution of materials via transient grating experiments. • ### How two-dimensional brick layer J-aggregates differ from linear ones: excitonic properties and line broadening mechanisms(1603.05138) We study the excitonic coupling and homogeneous spectral line width of brick layer J-aggregate films. We begin by analysing the structural information revealed by the two-exciton states probed in two-dimensional spectra. 
Our first main result is that the relation between the excitonic couplings and the spectral shift in a two-dimensional structure is different (larger shift for the same nearest neighbour coupling) from that in a one-dimensional structure, which leads to an estimation of dipolar coupling in two-dimensional lattices. We next investigate the mechanisms of homogeneous broadening - population relaxation and pure dephasing - and evaluate their relative importance in linear and two-dimensional aggregates. Our second main result is that pure dephasing dominates the line width in two-dimensional systems up to a crossover temperature, which explains the linear temperature dependence of the homogeneous line width. This is directly related to the decreased density of states at the band edge when compared with linear aggregates, thus reducing the contribution of population relaxation to dephasing. Pump-probe experiments are suggested to directly measure the lifetime of the bright state and can therefore support the proposed model. • ### Nonlinear acousto-magneto-plasmonics(1602.06562) We review the recent progress in experimental and theoretical research of interactions between the acoustic, magnetic and plasmonic transients in hybrid metal-ferromagnet multilayer structures excited by ultrashort laser pulses. The main focus is on understanding the nonlinear aspects of the acoustic dynamics in materials as well as the peculiarities in the nonlinear optical and magneto-optical response. For example, the nonlinear optical detection is illustrated in details by probing the static magneto-optical second harmonic generation in gold-cobalt-silver trilayer structures in Kretschmann geometry. Furthermore, we show experimentally how the nonlinear reshaping of giant ultrashort acoustic pulses propagating in gold can be quantified by time-resolved plasmonic interferometry and how these ultrashort optical pulses dynamically modulate the optical nonlinearities. 
The effective medium approximation for the optical properties of hybrid multilayers facilitates the understanding of novel optical detection techniques. In the discussion we highlight recent works on the nonlinear magneto-elastic interactions and strain-induced effects in semiconductor quantum dots. • ### Monte Carlo Study of Non-diffusive Relaxation of a Transient Thermal Grating in Thin Membranes(1512.03986) Feb. 17, 2016 cond-mat.mes-hall The impact of boundary scattering on non-diffusive thermal relaxation of a transient grating in thin membranes is rigorously analyzed using the multidimensional phonon Boltzmann equation. The gray Boltzmann simulation results indicate that approximating models derived from the previously reported one-dimensional relaxation model and the Fuchs-Sondheimer model fail to describe the thermal relaxation of membranes with thickness comparable to the phonon mean free path. Effective thermal conductivities from spectral Boltzmann simulations completely free of any fitting parameters are shown to agree reasonably well with experimental results. These findings are important for improving our fundamental understanding of non-diffusive thermal transport in membranes and other nanostructures. • ### Stable Switching among High-Order Modes in Polariton Condensates(1602.03024) Feb. 9, 2016 quant-ph, cond-mat.mes-hall We report multistate optical switching among high-order bouncing-ball modes ("ripples") and whispering-gallery modes ("petals") of exciton-polariton condensates in a laser-generated annular trap. By tailoring the diameter and power of the annular trap, the polariton condensate can be switched among different trapped modes, accompanied by redistribution of spatial densities and superlinear increase in the emission intensities, implying that polariton condensates in this geometry could be exploited for a multistate switch.
A model based on non-Hermitian modes of the generalized Gross-Pitaevskii equation reveals that this mode switching arises from competition between pump-induced gain and in-plane polariton loss. The parameters for reproducible switching among trapped modes have been measured experimentally, giving us a phase diagram for mode switching. Taken together, the experimental results and theoretical modeling advance our fundamental understanding of the spontaneous emergence of coherence and move us toward its practical exploitation. • ### THz generation using a reflective stair-step echelon(1512.03941) Dec. 12, 2015 physics.optics We present a novel method for THz generation in lithium niobate using a reflective stair-step echelon structure. The echelon produces a discretely tilted pulse front with less angular dispersion compared to a high groove-density grating. The THz output was characterized using both a 1-lens and a 3-lens imaging system to set the tilt angle at room and cryogenic temperatures. Using broadband 800 nm pulses with a pulse energy of 0.95 mJ and a pulse duration of 70 fs (24 nm FWHM bandwidth, 39 fs transform limited width), we produced THz pulses with field strengths as high as 500 kV/cm and pulse energies as high as 3.1 $\mu$J. The highest conversion efficiency we obtained was 0.33%. In addition, we find that the echelon is easily implemented into an experimental setup for quick alignment and optimization. • ### Cooperative photoinduced metastable phase control in strained manganite films(1512.00436) A major challenge in condensed matter physics is active control of quantum phases. Dynamic control with pulsed electromagnetic fields can overcome energetic barriers enabling access to transient or metastable states that are not thermally accessible.
Here we demonstrate strain-engineered tuning of La$_{2/3}$Ca$_{1/3}$MnO$_3$ into an emergent charge-ordered insulating phase with extreme photo-susceptibility where even a single optical pulse can initiate a transition to a long-lived metastable hidden metallic phase. Comprehensive single-shot pulsed excitation measurements demonstrate that the transition is cooperative and ultrafast, requiring a critical absorbed photon density to activate local charge excitations that mediate magnetic-lattice coupling that, in turn, stabilizes the metallic phase. These results reveal that strain engineering can tune emergent functionality towards proximal macroscopic states to enable dynamic ultrafast optical phase switching and control. • ### A Variational Approach to Extracting the Phonon Mean Free Path Distribution from the Spectral Boltzmann Transport Equation(1511.08989) Nov. 29, 2015 cond-mat.mes-hall The phonon Boltzmann transport equation (BTE) is a powerful tool for studying non-diffusive thermal transport. Here, we develop a new universal variational approach to solving the BTE that enables extraction of phonon mean free path (MFP) distributions from experiments exploring non-diffusive transport. By utilizing the known Fourier solution as a trial function, we present a direct approach to calculating the effective thermal conductivity from the BTE. We demonstrate this technique on the transient thermal grating (TTG) experiment, which is a useful tool for studying non-diffusive thermal transport and probing the MFP distribution of materials. We obtain a closed-form expression for a suppression function that is materials dependent, successfully addressing the non-universality of the suppression function used in the past, while providing a general approach to studying thermal properties in the non-diffusive regime.
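The suppression-function picture can be sketched numerically: the effective conductivity is a sum of per-MFP contributions, each weighted by a function of the Knudsen number $q\Lambda$. The toy code below does NOT use the paper's closed-form expression; it uses an illustrative suppression function with the correct limits (1 in the diffusive limit $q\Lambda \ll 1$, 0 in the ballistic limit) and a hypothetical flat MFP-resolved conductivity distribution, purely to show how $k_{\mathrm{eff}} = \sum_i S(q\Lambda_i)\,k_i$ falls as the grating period shrinks:

```python
import numpy as np

# Hypothetical per-mode data: MFPs (m) and their contributions to bulk k (W/m K).
mfps = np.logspace(-8, -5, 50)                     # 10 nm .. 10 um
k_contrib = np.full(mfps.size, 150.0 / mfps.size)  # flat toy distribution, sums to 150

def suppression(knudsen):
    """Illustrative suppression function S(q*Lambda): -> 1 in the diffusive
    limit (q*Lambda << 1), -> 0 in the ballistic limit. This is NOT the
    closed-form expression derived in the paper."""
    return 1.0 / (1.0 + knudsen**2)

def k_effective(grating_period):
    """Effective conductivity for a TTG with the given grating period."""
    q = 2.0 * np.pi / grating_period
    return float(np.sum(suppression(q * mfps) * k_contrib))
```

For a long grating period `k_effective` approaches the bulk sum, while shrinking the period suppresses the long-MFP contributions first; inverting this dependence against measured data is what yields the MFP distribution.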
• ### Transient terahertz photoconductivity measurements of minority-carrier lifetime in tin sulfide thin films: Advanced metrology for an early-stage photovoltaic material(1511.07887) Nov. 24, 2015 cond-mat.mtrl-sci Materials research with a focus on enhancing the minority-carrier lifetime of the light-absorbing semiconductor is key to advancing solar energy technology for early-stage and mature material platforms alike. Tin sulfide (SnS) is an absorber material with several clear advantages for manufacturing and deployment, but the record power conversion efficiency remains below 5%. We report measurements of bulk and interface minority-carrier recombination rates in SnS thin films using optical-pump, terahertz (THz)-probe transient photoconductivity (TPC) measurements. Post-growth thermal annealing in H$_2$S gas increases the minority-carrier lifetime, and oxidation of the surface reduces the surface recombination velocity. However, the minority-carrier lifetime remains below 100 ps for all tested combinations of growth technique and post-growth processing. Significant improvement in SnS solar cell performance will hinge on finding and mitigating as-yet-unknown recombination-active defects. We describe in detail our methodology for TPC experiments, and we share our data analysis routines as freely available software.
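A minority-carrier lifetime is extracted from a TPC measurement by fitting the photoconductivity transient. The authors released their own analysis routines with the paper; the snippet below is only a generic, hypothetical stand-in showing the simplest possible extraction, a log-linear fit of a noiseless single-exponential decay (real transients need noise, offsets, and often multi-exponential models handled by nonlinear least squares):

```python
import numpy as np

# Synthetic transient photoconductivity decay with an assumed 60 ps lifetime
# (illustrative numbers only, not data from the paper).
t_ps = np.linspace(0.0, 300.0, 151)
signal = np.exp(-t_ps / 60.0)

# A single-exponential decay is linear in log space:
#   log S(t) = -t/tau  =>  tau = -1/slope.
slope, intercept = np.polyfit(t_ps, np.log(signal), 1)
tau_ps = -1.0 / slope   # recovered minority-carrier lifetime, ps
```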
https://www.parabola.unsw.edu.au/1980-1989/volume-20-1984/issue-2/article/problems-section-problems-600-611
# Problems Section: Problems 600 - 611

Q.600 $P$ is a point inside a convex polygon all of whose sides are of equal length.
https://www.aimsciences.org/article/doi/10.3934/dcds.2017148
# American Institute of Mathematical Sciences June  2017, 37(6): 3487-3502. doi: 10.3934/dcds.2017148 ## Global exponential κ-dissipative semigroups and exponential attraction 1 Department of Mathematics, College of Science, Hohai University, Nanjing 210098, China 2 School of Mathematics and Statistics, Huazhong University of Science & Technology, Wuhan 430074, China 3 Department of Mathematics, Nanjing University, Nanjing 210093, China * Corresponding author: Jin Zhang Received  July 2015 Revised  January 2017 Published  February 2017 Globally exponential $κ-$dissipativity, a new concept of dissipativity for semigroups, is introduced. It provides a more general criterion for the exponential attraction of some evolutionary systems. Assuming that a semigroup $\{S(t)\}_{t≥q 0}$ has a bounded absorbing set, then $\{S(t)\}_{t≥q 0}$ is globally exponentially $κ-$dissipative if and only if there exists a compact set $\mathcal{A}^*$ that is positive invariant and attracts any bounded subset exponentially. The set $\mathcal{A}^*$ need not be finite dimensional. This result is illustrated with an application to a damped semilinear wave equation on a bounded domain. Citation: Jin Zhang, Peter E. Kloeden, Meihua Yang, Chengkui Zhong. Global exponential κ-dissipative semigroups and exponential attraction. Discrete & Continuous Dynamical Systems - A, 2017, 37 (6) : 3487-3502. doi: 10.3934/dcds.2017148 ##### References: show all references ##### References: [1] Jianhua Huang, Yanbin Tang, Ming Wang. Singular support of the global attractor for a damped BBM equation. Discrete & Continuous Dynamical Systems - B, 2020  doi: 10.3934/dcdsb.2020345 [2] Lars Grüne, Matthias A. Müller, Christopher M. Kellett, Steven R. Weller. Strict dissipativity for discrete time discounted optimal control problems. Mathematical Control & Related Fields, 2020  doi: 10.3934/mcrf.2020046 [3] Giuseppina Guatteri, Federica Masiero. 
2020-11-30 20:24:30
https://www.semanticscholar.org/paper/Hierarchical-Shrinkage%3A-improving-the-accuracy-and-Agarwal-Tan/a855949d557fe140de41cb33bae26d51219f6560
• Corpus ID: 246473164 # Hierarchical Shrinkage: improving the accuracy and interpretability of tree-based methods @article{Agarwal2022HierarchicalSI, title={Hierarchical Shrinkage: improving the accuracy and interpretability of tree-based methods}, author={Abhineet Agarwal and Yan Shuo Tan and Omer Ronen and Chandan Singh and Bin Yu}, journal={ArXiv}, year={2022}, volume={abs/2202.00858} } • Published 2 February 2022 • Computer Science • ArXiv Tree-based models such as decision trees and random forests (RF) are a cornerstone of modern machine-learning practice. To mitigate overfitting, trees are typically regularized by a variety of techniques that modify their structure (e.g. pruning). We introduce Hierarchical Shrinkage (HS), a post-hoc algorithm that does not modify the tree structure, and instead regularizes the tree by shrinking the prediction over each node towards the sample means of its ancestors. The amount of shrinkage is… 2 Citations Group Probability-Weighted Tree Sums for Interpretable Modeling of Heterogeneous Data • Computer Science ArXiv • 2022 An instance-weighted tree-sum method that effectively pools data across diverse groups to output a concise, rule-based model that achieves state-of-the-art prediction performance on important clinical datasets. Predictability and Stability Testing to Assess Clinical Decision Instrument Performance for Children After Blunt Torso Trauma • Medicine medRxiv • 2022 The PCS data science framework vetted the PECARN CDI and its constituent predictor variables prior to external validation, suggesting that both CDIs will generalize well to new populations, offering a potential strategy to increase the chance of a successful external validation. 
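The shrinkage rule quoted in the abstract can be made concrete on a toy tree. The sketch below is an illustrative reimplementation, not the authors' code: the hand-built tree is invented, and the update assumes the telescoping form from the paper, in which a prediction is the root mean plus each parent-to-child difference damped by $1 + \lambda/N(\text{parent})$.

```python
# Hedged sketch of Hierarchical Shrinkage (HS) on a hand-built toy
# regression tree.  Each node stores the mean of the training targets
# that reach it ('value') and the number of such samples ('n').
toy_tree = {
    "value": 0.0, "n": 8,
    "left":  {"value": -2.0, "n": 4,
              "left":  {"value": -3.0, "n": 2},
              "right": {"value": -1.0, "n": 2}},
    "right": {"value": 2.0, "n": 4,
              "left":  {"value": 1.0, "n": 2},
              "right": {"value": 3.0, "n": 2}},
}

def hierarchical_shrinkage(node, lam, cum=None):
    """Attach an 'hs_value' (the shrunken prediction) to every node."""
    if cum is None:
        cum = node["value"]              # the root keeps its raw mean
    node["hs_value"] = cum
    for side in ("left", "right"):
        child = node.get(side)
        if child is not None:
            # damp the parent-to-child jump by 1 + lam / n(parent)
            step = (child["value"] - node["value"]) / (1.0 + lam / node["n"])
            hierarchical_shrinkage(child, lam, cum + step)

hierarchical_shrinkage(toy_tree, lam=8.0)
# Leaf predictions are pulled toward the root mean while the tree
# structure itself is untouched (HS is post hoc).
```

With `lam = 0` the tree is unchanged; as `lam` grows every prediction collapses toward the root mean, which is the regularization knob the abstract describes.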
## References SHOWING 1-10 OF 57 REFERENCES

Fast Interpretable Greedy-Tree Sums (FIGS) • Computer Science ArXiv • 2022 FIGS generalizes the CART algorithm to simultaneously grow a flexible number of trees in a summation, and is able to avoid repeated splits, and often provides more concise decision rules than fitted decision trees, without sacrificing predictive performance.

Universal Consistency of Decision Trees in High Dimensions This paper shows that decision trees constructed with Classification and Regression Trees (CART) methodology are universally consistent in an additive model context, even when the number of predictor

Random Forests Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.

Generalized and Scalable Optimal Sparse Decision Trees • Computer Science ICML • 2020 The contribution in this work is to provide a general framework for decision tree optimization that addresses the two significant open problems in the area: treatment of imbalanced data and fully optimizing over continuous variables.

A cautionary tale on fitting decision trees to data from additive models: generalization lower bounds • Computer Science AISTATS • 2022 A sharp squared error generalization lower bound is proved for a large class of decision tree algorithms fitted to sparse additive models with $C^1$ component functions, and a novel connection between decision tree estimation and rate-distortion theory, a sub-field of information theory, is established.

• Computer Science • 2006 We develop a Bayesian "sum-of-trees" model where each tree is constrained by a regularization prior to be a weak learner, and fitting and inference are accomplished via an iterative Bayesian

Randomization as Regularization: A Degrees of Freedom Explanation for Random Forest Success • Computer Science J. Mach. Learn. Res.
• 2020 It is demonstrated that the additional randomness injected into individual trees serves as a form of implicit regularization, making random forests an ideal model in low signal-to-noise ratio (SNR) settings.

Do we need hundreds of classifiers to solve real world classification problems? • Computer Science J. Mach. Learn. Res. • 2014 The random forest is clearly the best family of classifiers (3 out of 5 best classifiers are RF), followed by SVM (4 classifiers in the top-10), neural networks and boosting ensembles (5 and 3 members in the Top-20, respectively).

Classification and regression trees • W. Loh • Computer Science WIREs Data Mining Knowl. Discov. • 2011 This article gives an introduction to the subject of classification and regression trees by reviewing some widely available algorithms and comparing their capabilities, strengths, and weaknesses in two examples.

Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model • Computer Science ArXiv • 2015 A generative model called Bayesian Rule Lists is introduced that yields a posterior distribution over possible decision lists that employs a novel prior structure to encourage sparsity and has predictive accuracy on par with the current top algorithms for prediction in machine learning.
2022-06-28 21:05:46
http://cococubed.asu.edu/code_pages/sprocess.shtml
Cococubed.com: Slow Neutron Captures
Contact: F.X. Timmes

The tool minis.tbz evolves an educational version of an s-process reaction network. One hundred isotopes are evolved until a chosen ending time. The initial abundance of the first isotope, notionally 56Fe, is taken equal to one. Some guidance on what this tool does seems prudent. Let $R$ be the reaction rate for (n,g) reactions. In general $R$ is temperature, density, and composition dependent - but not here.
The ordinary differential equations describing the change in the abundances $y$ of the $m$ isotopes are: $$\frac{{\rm d}y_{1}}{{\rm d}t} = -y_{1} \ R_{1} \hskip 0.5in \frac{{\rm d}y_{i}}{{\rm d}t} = y_{i-1} \ R_{i-1} - y_{i} \ R_{i} \ , \ i=2,\ldots,m-1 \hskip 0.5in \frac{{\rm d}y_{m}}{{\rm d}t} = y_{m-1} \ R_{m-1} \label{eq1} \tag{1}$$ For the implicit first-order accurate Euler method, each abundance is updated over a timestep $h$ as $y_{i}^{{\rm new}} = y_{i} + \Delta y_{i}$. The change in the abundances over a time step, $\Delta y_{i}$, is obtained by solving the system of linear equations $({\bf I}/h - \tilde{{\bf J}}) \cdot \Delta {\bf y} = \dot{\bf y}$, which is simply the familiar $\tilde{{\bf A}} \cdot {\bf x} = {\bf b}$. With only (n,g) reactions, the Jacobian matrix $\tilde{{\bf J}}$ has the simple lower bidiagonal form $$\left[\begin{array}{rrrrr} -R_{1} & & & & \\ R_{1} & -R_{2} & & & \\ & R_{2} & -R_{3} & & \\ & & & \ldots & \\ & & & R_{m-1} & 0 \\ \end{array}\right] \label{eq2} \tag{2}$$ This system of linear equations can be solved by hand with forward substitution: $$\Delta y_1 = \frac{-y_1 R_1}{1/h + R_1} \hskip 0.5in \Delta y_i = \frac{ y_{i-1} R_{i-1} - y_i R_i + R_{i-1}\,\Delta y_{i-1} }{1/h + R_i} \ , \ i=2,\ldots,m-1 \hskip 0.5in \Delta y_m = h \left( y_{m-1} R_{m-1} + R_{m-1}\,\Delta y_{m-1} \right) \label{eq3} \tag{3}$$ Hence the succinct evolution loop implemented in minis.tbz. Here are some results (figures): all exposures equal to one; a middle exposure at 0.1; a middle exposure at 10.0. Please cite the relevant references if you publish a piece of work that uses these codes, pieces of these codes, or modified versions of them. Offer co-authorship of the publication if appropriate. At best, you'll love these programs so much that you'll send great wads of cash to me.
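The evolution loop can be sketched directly. This is a hedged reimplementation of the scheme, not the minis.tbz source: `R[i]` is assumed to be the (n,g) rate out of isotope `i`, and one implicit Euler step is a forward substitution down the lower bidiagonal system.

```python
def sprocess_step(y, R, h):
    """One implicit-Euler step for a pure (n,g) capture chain of m isotopes.
    y has length m; R has length m-1 (capture rate out of each isotope
    except the last).  (I/h - J) is lower bidiagonal, so the linear
    system is solved by forward substitution."""
    m = len(y)
    # right-hand side: the chain ODEs dy/dt
    ydot = [0.0] * m
    ydot[0] = -y[0] * R[0]
    for i in range(1, m - 1):
        ydot[i] = y[i - 1] * R[i - 1] - y[i] * R[i]
    ydot[m - 1] = y[m - 2] * R[m - 2]

    # forward substitution for (I/h - J) dy = ydot
    dy = [0.0] * m
    dy[0] = ydot[0] / (1.0 / h + R[0])
    for i in range(1, m - 1):
        dy[i] = (ydot[i] + R[i - 1] * dy[i - 1]) / (1.0 / h + R[i])
    dy[m - 1] = (ydot[m - 1] + R[m - 2] * dy[m - 2]) * h
    return [yi + d for yi, d in zip(y, dy)]

# A three-isotope chain starting with everything in the first isotope:
y_new = sprocess_step([1.0, 0.0, 0.0], [1.0, 1.0], h=0.5)
```

Because the columns of the Jacobian each sum to zero, the step conserves the total abundance exactly, which makes a convenient sanity check.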
2019-08-25 05:39:16
https://www.qb365.in/materials/stateboard/11th-chemistry-chapter-6-gaseous-state-one-mark-question-with-answer-key-3952.html
#### Gaseous State One Mark Questions

11th Standard Chemistry

Time : 00:30:00 Hrs Total Marks : 10

10 x 1 = 10

1. Gases deviate from ideal behavior at high pressure. Which of the following statement(s) is correct for non-ideality? (a) at high pressure the collisions between the gas molecules become enormous (b) at high pressure the gas molecules move only in one direction (c) at high pressure, the volume of the gas becomes insignificant (d) at high pressure the intermolecular interactions become significant

2. Which of the following is the correct expression for the equation of state of a van der Waals gas? (a) $\left( P+\frac { a }{ { n }^{ 2 }{ V }^{ 2 } } \right) (V-nb)=nRT$ (b) $\left( P+\frac { na }{ { n }^{ 2 }{ V }^{ 2 } } \right) (V-nb)=nRT$ (c) $\left( P+\frac { { an }^{ 2 } }{ { V }^{ 2 } } \right) (V-nb)=nRT$ (d) $\left( \frac { P+{ n }^{ 2 }{ a }^{ 2 } }{ { V }^{ 2 } } \right) (V-ab)=nRT$

3. The temperature at which real gases obey the ideal gas laws over a wide range of pressure is called (a) Critical temperature (b) Boyle temperature (c) Inversion temperature (d) Reduced temperature

4. A bottle of ammonia and a bottle of HCl connected through a long tube are opened simultaneously at both ends. The white ammonium chloride ring first formed will be (a) at the center of the tube (b) near the hydrogen chloride bottle (c) near the ammonia bottle (d) throughout the length of the tube

5. At identical temperature and pressure, the rate of diffusion of hydrogen gas is $3\sqrt { 3 }$ times that of a hydrocarbon having molecular formula $C_nH_{2n-2}$. What is the value of n? (a) 8 (b) 4 (c) 3 (d) 1

6. The value of the gas constant R is (a) 0.082 dm3 atm (b) 0.987 cal mol-1 K-1 (c) 8.3 J mol-1 K-1 (d) 8 er mol-1 K-1

7. Pressure is _____________ (a) Force / area (b) Force x area (c) Area / force (d) Force / area $\times$ volume

8. The unit of pressure is _____________ (a) Pascal (b) Torr (c) Bar (d) all the above

9.
The instrument used for measuring atmospheric pressure is ________________ (a) Beckmann thermometer (b) Galvanometer (c) Barometer (d) all the above

10. The standard atmospheric pressure is the pressure that supports a column of mercury exactly ___________ high at 0 °C at sea level. (a) 760 mm (b) 76 cm (c) both a & b (d) 760 cm
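Question 5 above is a direct application of Graham's law of diffusion: rate_H2 / rate_HC = sqrt(M_HC / M_H2), so the hydrocarbon's molar mass is 2 x (3√3)^2 = 54 g/mol, and 14n - 2 = 54 gives n = 4, option (b). A quick numerical check:

```python
import math

# Graham's law: H2 (M = 2 g/mol) diffuses 3*sqrt(3) times faster
# than the hydrocarbon C_nH_{2n-2} (M = 12n + 2n - 2 = 14n - 2).
ratio = 3.0 * math.sqrt(3.0)
M_HC = 2.0 * ratio ** 2          # M_HC = M_H2 * (rate_H2 / rate_HC)^2 = 54
n = (M_HC + 2.0) / 14.0          # solve 14n - 2 = M_HC  ->  n = 4
```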
2019-10-16 05:15:58
https://research.utwente.nl/en/publications/more-on-spanning-2-connected-subgraphs-of-alphabet-graphs-special-2
# More on spanning 2-connected subgraphs of alphabet graphs, special classes of grid graphs

M. Salman, Haitze J. Broersma, C.A. Rodger

Research output: Book/Report, Report, Other research output

### Abstract

A grid graph $G$ is a finite induced subgraph of the infinite 2-dimensional grid defined by $Z \times Z$ and all edges between pairs of vertices from $Z \times Z$ at Euclidean distance precisely 1. A natural drawing of $G$ is obtained by drawing its vertices in $\mathbb{R}^2$ according to their coordinates. Apart from the outer face, all (inner) faces with area exceeding one (not bounded by a 4-cycle) in a natural drawing of $G$ are called the holes of $G$. We define 26 classes of grid graphs called alphabet graphs, with no or a few holes. We determine which of the alphabet graphs contain a Hamilton cycle, i.e. a cycle containing all vertices, and solve the problem of determining a spanning 2-connected subgraph with as few edges as possible for all alphabet graphs.

Original language: English
Place of publication: Enschede
Publisher: University of Twente, Department of Applied Mathematics
Published: 2003

### Publication series

Memorandum, Department of Applied Mathematics, University of Twente, no. 1702, ISSN 0169-2690

Keywords: MSC-05C40, MSC-05C85, IR-65887, EWI-3522

### Cite this

Salman, M., Broersma, H. J., & Rodger, C. A. (2003). More on spanning 2-connected subgraphs of alphabet graphs, special classes of grid graphs. (Memorandum; No. 1702). Enschede: University of Twente, Department of Applied Mathematics.
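The grid-graph definition in the abstract is easy to make concrete: vertices are lattice points, and edges join pairs at Euclidean distance exactly 1, i.e. horizontal or vertical neighbours. A minimal sketch (the helper name and the small L-shaped point set are illustrative, not from the memorandum):

```python
# Build the edge set of a grid graph from a set of lattice points.
# Each unit edge is recorded once, via its right/up neighbour.
def grid_graph_edges(points):
    pts = set(points)
    edges = set()
    for (x, y) in pts:
        for nbr in ((x + 1, y), (x, y + 1)):
            if nbr in pts:
                edges.add(((x, y), nbr))
    return edges

# A small L-shaped point set, in the spirit of the alphabet graphs:
L_points = [(0, 0), (0, 1), (0, 2), (1, 0)]
edges = grid_graph_edges(L_points)
```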
2019-08-19 22:41:27
https://chat.stackexchange.com/transcript/54160/2019/12/6
5:43 AM @JohnRennie hiya @psa morning :-) boy am I ever having trouble with a couple SR problems : ( the final is on saturday morning (it's thursday night here) unfortunately lol @JohnRennie Good Morning sir :-) @user8718165 morning :-) 5:45 AM @JohnRennie just part d) I find extremely confusing we've worked on this one before as well but it eludes me once again what I chose to do was denote the double primed frame to be the dragon's frame. I denoted the event "dragon passes Hermione" to be (t,x) = (t'',x'') = (0,0) in the dragon's frame, the events "hermione casts spell" and "harry casts spell" are equidistant and simultaneous, so $x''_d = \frac{1}{2}(x''_{hermione} + x''_{harry})$ from there I did some other stuff with LT's and got the wrong answer I hate these harry potter questions If the spells reach the dragon at the same time in the dragon's frame they reach the dragon at the same time in all frames. oops, yes you're right it's just one event but the dragon's frame is probably the only one where they're equidistant so I guess it's a nice frame to choose? Yes, because we know they were cast at the same time in the dragon's frame. In Hermione's and Harry's frames the casting events wouldn't have been simultaneous. I would probably work in the dragon's frame. right do you want me to show you what I did in full? or do you want to just work through it from scratch? No, I'm a bit busy this morning and won't have time to go through it in detail. 5:54 AM oh OK no worries can I ask you a quicker MC question then? @psa yes I understand how altering the conditions in this way could lead to A or C, but what about B? I mean, is it just possible that you could decrease the frequency (so lower the energy of the incoming photons) and yet increase the intensity (so increase the number of photons) such that the current would increase? 
If you are above the threshold frequency then the number of photons per second is $N = I/(h\nu)$ where $I$ is the intensity and $\nu$ is the frequency. The number of photoelectrons per second is $N$ times some constant that represents the efficiency of PE production. And the current is proportional to the number of photoelectrons. So the current is proportional to $I$ and inversely proportional to $\nu$. @psa OK so far? yes In the question we are increasing $I$ and decreasing $\nu$, so as long as we stay above the threshold the current will increase. 6:05 AM wow well that's quite helpful I never actually knew $N = I/h\nu$ Well $I$ is the number of joules per second, and $1/h\nu$ is the number of photons per joule. Yes? yes ah, so that makes sense :-) @JohnRennie hello sir @user8718165 hi :-) @user8718165 that's quite a fun question. I'm just discussing Python in another room, but I'll have a look at that question when I'm finished. 6:13 AM @JohnRennie when you're done with user - just to clarify, the current could also decrease (how?) or stop (if you went below the threshold frequency) as well right? @JohnRennie okay sir...please ping me. I actually don't see any way you could decrease the current at all by doing this... but the answer says you could. as in, you could do all three. unless they're being pedantic and saying "stop" is a form of decrease @psa well I suppose A implies C i.e. if you reduce the frequency far enough you go below the threshold and the current stops because no photoelectrons are produced. And for the current to stop it must reduce ... that seems like trickery Yes. I think C is rather confusing. I can't think of a way for the current to decrease except by getting near the threshold. I assume the turn-off isn't instant so the current will decrease smoothly to zero as you pass through the cut-off frequency.
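The $N = I/(h\nu)$ bookkeeping above fits in a few lines. This is a sketch only: the quantum efficiency `eta` is an invented placeholder constant, and the relation holds only above the threshold frequency.

```python
# Photocurrent scaling sketch: current = e * eta * N photoelectrons/s,
# with N = intensity / (h * frequency) photons per second.
h = 6.626e-34    # Planck constant, J s
e = 1.602e-19    # electron charge, C

def photocurrent(intensity, frequency, eta=0.05):
    """Current for light above the threshold frequency; eta is an
    assumed photoelectron-production efficiency."""
    return e * eta * intensity / (h * frequency)

# Doubling the intensity while lowering the frequency raises the current:
i_before = photocurrent(1.0, 6.0e14)
i_after = photocurrent(2.0, 5.0e14)
```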
6:21 AM I thought it was just a linear relationship oh no that's energy I believe or something else as a function of frequency not current that's true though yeah, that is certainly what it looks like when you look up graphs kind of like a diode... @JohnRennie hi, sir, Alesha here. @yuvrajsingh hi Alesha. I'm busy in another room, but I'll ping you when I'm free. OK. 6:42 AM @JohnRennie I will wait. 7:13 AM @JohnRennie hello sir 7:30 AM @JohnRennie done sir. @yuvrajsingh hi, yes I'm free for about half an hour before I need to work again. @JohnRennie can I. Yes, if you have a question ask now :-) OK, sir it is about a prism. But before asking the question I have a request. Sir, please point out my mistake, or whatever error I am making while thinking about it. So the prism has a prism angle of 45°. Now I take a light ray at grazing incidence. OK ... And it is refracted at the critical angle. Now when this refracted ray strikes the second surface, it should also have grazing emergence, am I right or not? If I am wrong, is it because of the prism angle that the light does not graze while emerging? @JohnRennie The angle the ray strikes the second surface will depend on the value of the critical angle i.e. the value of the refractive index. I can't see any reason why the light should hit the second surface at any special angle. OK sir, when the light got refracted from the first surface, $r_1 = \theta_c$. And this would be the same angle with the normal at the second surface. Then it should graze. Let's draw a diagram ... @yuvrajsingh I've just drawn some random value for $\theta_c$. There's no reason why it should strike the second surface at the same angle. I have a question in my book. Same like this: A prism of prism angle 45°; it is found that the angle of emergence is 45° for grazing incidence. Calculate the refractive index of the prism. @JohnRennie @yuvrajsingh I think this is what the question means. 7:53 AM Yes. @JohnRennie It's a tedious question.
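For the record, that angle chase can be scripted. A sketch, assuming grazing incidence fixes the first refraction angle at the critical angle $r_1 = \arcsin(1/n)$, so the ray meets the second face at $r_2 = A - r_1$ and Snell's law there requires $n \sin r_2 = \sin e$:

```python
import math

A = math.radians(45.0)   # prism angle
e = math.radians(45.0)   # angle of emergence for grazing incidence

def mismatch(n):
    """n*sin(A - r1) - sin(e); zero at the refractive index we want."""
    r1 = math.asin(1.0 / n)          # first refraction at the critical angle
    return n * math.sin(A - r1) - math.sin(e)

# mismatch(n) increases with n on (1, 3], so bisect for the root
lo, hi = 1.01, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mismatch(mid) < 0.0:
        lo = mid
    else:
        hi = mid
n = 0.5 * (lo + hi)
```

The root comes out at $n = \sqrt{5} \approx 2.24$, which also follows by hand from $n\sin\theta_c = 1$ and $n\cos\theta_c = 2$.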
You just have to calculate all the angles and solve for $n$. @JohnRennie hello sir...are you free for a while @user8718165 I need to work now again I'm afraid. @JohnRennie for how long sir? I just wanted to talk about that ball problem... 8:14 AM @user8718165 suppose the radius of curvature of the wall and the ball are equal, then at the instant the ball hits the wall there is a normal force all along the quarter of the ball that is in contact with the wall. Yes? @JohnRennie work over sir? Yes @JohnRennie yeah sir... @JohnRennie yeah sir...got it 8:19 AM So at the instant of the collision the net force looks like this. @JohnRennie yeah sir... So there is a net upwards force and the ball will accelerate upwards. But ... @JohnRennie gravity... Suppose the ball moves up an infinitesimal distance $dx$. As soon as it moves up at all it loses contact with the curved part of the wall. @JohnRennie yeah sir 8:22 AM @JohnRennie okay sir So as soon as it moves upwards at all the force becomes horizontal and there is no upwards force. This happens for even the smallest value of $dx$. @JohnRennie yeah sir... So the work done by the upwards component of the force is zero because the upwards part of the force doesn't act over any distance. @JohnRennie yeah sir 8:25 AM And that means the ball doesn't move up. It just bounces back horizontally. @JohnRennie okay sir...not even a bit? here it has an upward component so can it go up by dx Right, but we've agreed that for any $dx > 0$ as soon as it has moved by $dx$ the force becomes horizontal. So it can move upwards by $dx$ but only when $dx \to 0$. i.e. it can only move upwards by zero. @JohnRennie yeah sir...but if there weren't g...then would it rise sir? @JohnRennie okay sir @user8718165 No Suppose you now increase the wall radius to make it slightly bigger than the ball radius, but only slightly bigger. 
@JohnRennie okay sir 8:29 AM Now the ball can move upwards a non-zero distance $\Delta x$ before it loses contact with the curved part of the wall. So now it can move upwards, but only a bit. @JohnRennie got it sir... In zero-g it would get a small vertical component of velocity, so it would bounce back mostly horizontally but at a small angle to the horizontal. @JohnRennie sir then the ball will move away from the floor....right? Yes @JohnRennie hello sir...I have a last qn 8:42 AM @user8718165 yes? 8:53 AM @JohnRennie sir could you please repeat just once more if the curvature matches...why can't the ball move after collision? I got the dx argument but still confused. Please help sir @JohnRennie hello sir @JohnRennie are you working now? :) Discussing Python ... @JohnRennie ah..okay sir...could you please ping me when done sir? 9:13 AM Actually, now I think about it my argument isn't valid. An ideal collision takes no time and the force is infinite, so my argument that the limit of $F \cdot dx = 0$ is not valid. @JohnRennie sir the ball was traveling towards left and the force applied is F...please neglect gravity. Please have a look here Incline is 45 deg. I'll have to go away and think about. But that will have to wait. 9:58 AM @user8718165 I'm not annoyed, it's just that the question is harder than I thought and I will need some time to think about it. But right now I have other stuff to do. 6 hours later… 3:44 PM @JohnRennie hello sir :-) @user8718165 hi :-) 4:04 PM @JohnRennie, Hi sir :-) ;May I ask a doubt from a question from Doppler Effect (Sound Waves)? @M.GuruVishnu hi, yes, sorry I missed your ping earlier. @JohnRennie No problem sir. Actually I figured that out after notifying you. That's why I removed. Sorry for disturbing then. Here afterwards please don't ask "sorry" to me :-) Question: A source emitting a sound of frequency $f$ is placed at a large distance from an observer. 
The source starts moving towards the observer with a uniform acceleration $a$. Find the frequency heard by the observer corresponding to the wave emitted just after the source starts. The speed of sound in the medium is $v$. (contd.) Doubt: In a video lecture, I saw that the same formula for Doppler Shift in frequency could be used when observer, source or both are accelerating. But velocity of source is taken at the instant it emits a particular wave front, and velocity of the observer is taken when that particular wave front reaches him. (contd.) So, in the above question, the velocity of the source at the time of emitting is $0$, and the observer is not moving. So I concluded that the apparent frequency is $f$, the same as that of the source. But apparently this answer is incorrect. For your reference, in my book it is given as $\frac{2vf^2}{2vf-a}$. Could you please explain or give a hint about this anomaly, sir? I'm not sure I understand. The frequency is a function only of velocity, not of acceleration. @JohnRennie Yes sir. So can we say the book is incorrect? Maybe there is some aspect to the question that we have missed. 4:16 PM @JohnRennie I think the observer must also be accelerating. Only then does the frequency depend on acceleration. @M.GuruVishnu yes, I agree. @JohnRennie Thank you sir. Let me try the problem with this new assumption and ask if I have any doubts. But then I would expect the frequency heard by the observer to depend on the distance between the source and the observer, because the travel time of the sound will determine how fast the observer is moving. @JohnRennie Oh yes sir. And we are not supplied with the distance. Maybe "a large distance" hints that we should do a little bit of approximation. But I am not sure of this, because the Doppler shift doesn't depend on the separation between the source and the observer. @M.GuruVishnu to be honest I'm disinclined to spend time on a question that isn't clearly stated, as it could well just be time wasted.
4:23 PM @JohnRennie Ok sir. Thank you :-) @JohnRennie, Sir, could you please tell whether this statement "In a video lecture, I saw that the same formula for Doppler Shift in frequency could be used when observer, source or both are accelerating. But velocity of source is taken at the instant it emits a particular wave front, and velocity of the observer is taken when that particular wave front reaches him." from the "Doubt" message is applicable for all cases? Or are there any limits to this? @M.GuruVishnu that seems correct to me. Once the wave has been emitted its frequency (in the rest frame of the air) is fixed and cannot change. So it depends on the velocity of the source at the moment the wave was emitted. Ok sir. Thank you for the clarification. Bye sir :-) 4:52 PM @M.GuruVishnu Bye :-)
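For what it's worth, the book's answer can be reproduced numerically by tracking the first two wavefronts emitted by the accelerating source, taking the heard frequency to be the inverse of the gap between their arrival times (a sketch with made-up numbers; this reading of what the book intends is an assumption, not something stated in the chat):

```python
# Source starts from rest at x = 0 and accelerates towards a stationary
# observer at distance D. Front 1 is emitted at t = 0; front 2 one source
# period T = 1/f later, from the slightly advanced source position.
v, f, a = 340.0, 1000.0, 10.0   # speed of sound, source frequency, acceleration
T = 1.0 / f
D = 100.0                        # observer distance; it cancels out below

t1 = D / v                       # arrival time of front 1 (emitted at x = 0)
x2 = 0.5 * a * T**2              # source position when front 2 is emitted
t2 = T + (D - x2) / v            # arrival time of front 2
f_heard = 1.0 / (t2 - t1)

f_book = 2 * v * f**2 / (2 * v * f - a)   # the book's formula
assert abs(f_heard - f_book) < 1e-6
```

Note that the distance $D$ drops out of $t_2 - t_1$, which is consistent with the book's answer not containing it.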
https://www.physicsoverflow.org/9385/dark-matter-detection
# Dark matter detection

In the detection of weakly interacting massive particles (WIMPs), which are a candidate for dark matter, what is the use of the tank filled with liquid xenon? I mean, how does the release of photons contribute to the detection of WIMPs? To relate to what I'm talking about, you can follow this link: http://www.universetoday.com/105943/new-dark-matter-detector-draws-a-blank-in-first-test-round/ This post imported from StackExchange Physics at 2014-03-24 04:16 (UCT), posted by SE-user Shaona Bose asked Nov 3, 2013 retagged Mar 24, 2014

Note that WIMPs are only one of several candidates for dark matter, and a xenon TPC is only one of several proposed methods for detecting them. This post imported from StackExchange Physics at 2014-03-24 04:16 (UCT), posted by SE-user dmckee

LUX is a "time projection chamber". That means it is a big volume of material (in this case cryogenic xenon) subjected to a strong electric field. The field causes ionization electrons to drift to two or more non-colinear planes of detection wires. The front wires must be so-called "induction" wires that do not absorb the ionization electrons. This means you can get information about the drift electrons' positions in at least two different directions and reconstruct their position in two dimensions. But it gets better: with a uniform field the drift velocity is very reliable, so if you know when the electrons started drifting and when they were detected, you also know how far they drifted, and thus you have reconstructed their starting position in 3D. Heavy noble gases make a really good medium for such devices because:

• If sufficiently pure, loose ionization electrons can go uncaptured for many milliseconds, allowing very long drift distances (meters).
• These materials scintillate (that is, release light) when ionizing radiation passes through them.
That light is detected within nanoseconds of its release and used to tag the drift start time. The electron detection electronics, of course, tag the detection time. LUX is searching for ionizing events in the detector that

1. Cannot be explained by the (many) known kinds of physics that generate signals in these detectors
2. Have the characteristics that are expected of WIMP–ordinary-matter interactions (which can be conjectured with some accuracy because we define a WIMP as having certain properties; basically there can only be elastic $Z^0$ exchange at the WIMP vertex, which generates a modest number of final states).

This post imported from StackExchange Physics at 2014-03-24 04:16 (UCT), posted by SE-user dmckee answered Nov 3, 2013 by (420 points)

Aside: I spent last summer helping to build microBooNE (a large liquid argon TPC for neutrino detection), so I have some sense of the difficulty of the project. Though LUX has much higher radio-purity requirements than microBooNE. This post imported from StackExchange Physics at 2014-03-24 04:16 (UCT), posted by SE-user dmckee
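In code, the 3D reconstruction step described above boils down to combining two wire-plane coordinates with the drift time (a toy sketch, not LUX's actual software; the drift velocity is an invented number):

```python
DRIFT_VELOCITY_MM_PER_US = 1.5   # hypothetical drift speed in mm/µs

def reconstruct_hit(u_mm, v_mm, t_scint_us, t_wire_us):
    """Combine two non-colinear wire-plane coordinates with the drift time.

    u_mm, v_mm:  positions read off the two wire planes
    t_scint_us:  scintillation flash time (tags the start of the drift)
    t_wire_us:   time the drifting electrons reached the wires
    """
    drift_mm = DRIFT_VELOCITY_MM_PER_US * (t_wire_us - t_scint_us)
    return (u_mm, v_mm, drift_mm)

# Electrons detected 100 µs after the flash drifted 150 mm.
assert reconstruct_hit(10.0, 20.0, 0.0, 100.0) == (10.0, 20.0, 150.0)
```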
https://imathworks.com/tex/tex-latex-how-to-limit-the-range-of-a-function-in-tikz/
# [Tex/LaTex] How to limit the range of a function in TikZ

tikz-pgf

I have code like this:

\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw [->,thick] (-5,0) -- (5,0) node[right] {$x$};
\draw [->,thick] (0,-5) -- (0,5) node[above] {$y$};
\draw[ultra thick, domain=-5:5] plot (\x, {pow(\x,2)-5});
\end{tikzpicture}
\end{document}

When I draw this, the parabola extends far beyond the area I want. Is there a way to limit the range of the function f(x)=x^2-5 to fit the coordinate area, i.e. only show the points whose y-coordinates are between -5 and 5?

You could add a \clip before you do the plotting:

\documentclass[tikz, border=5mm]{standalone}
\begin{document}
\begin{tikzpicture}
\draw [->,thick] (-5,0) -- (5,0) node[right] {$x$};
\draw [->,thick] (0,-5) -- (0,5) node[above] {$y$};
\clip (-5,-5) rectangle (5,5);
\draw[ultra thick, domain=-5:5] plot (\x, {pow(\x,2)-5});
\end{tikzpicture}
\end{document}
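An alternative (a variation on the code above, not part of the original answer) is to restrict the plot domain itself: solving $x^2-5=5$ gives $x=\pm\sqrt{10}\approx 3.1623$, so plotting only that domain keeps the curve inside the window without clipping:

```latex
\documentclass[tikz, border=5mm]{standalone}
\begin{document}
\begin{tikzpicture}
\draw [->,thick] (-5,0) -- (5,0) node[right] {$x$};
\draw [->,thick] (0,-5) -- (0,5) node[above] {$y$};
% x^2 - 5 = 5  =>  x = +/- sqrt(10) = 3.1623...
\draw[ultra thick, domain=-3.1623:3.1623] plot (\x, {pow(\x,2)-5});
\end{tikzpicture}
\end{document}
```

Unlike \clip, this changes only the plotted curve, so later drawing commands are unaffected.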
https://www.physicsforums.com/threads/specific-heat-question-please-help.71247/
1. Apr 12, 2005 mathzeroh how're all the fine scientists feeling today? Well, I just had a question regarding my specific heat homework. OK, here it is: "When a 25 g block of metal alloy at 215 degrees Celsius (I don't know how to make that round degree symbol on here) is dropped into 85 g of water at 22 deg. C, the final temperature is 37 deg. C. What is the specific heat of the alloy?" I worked it out two different times and got two different answers each time. :uhh: Once I got 15 J/(g·°C) and the second time I got 1.2 J/(g·°C). Any help at all appreciated!!!!!! Thanks in advance!!!!! Last edited: Apr 12, 2005 2. Apr 12, 2005 dextercioby Okay. I don't use sign conventions for given and received heat, so I'll simply write $$Q_{\mbox{given}}=m_{\mbox{alloy}}c_{\mbox{alloy}}(215-37)$$ $$Q_{\mbox{received}}=m_{\mbox{water}}c_{\mbox{water}}(37-22)$$ Set the two quantities equal and then solve for the unknown. Daniel. P.S. Pay attention to the units. I'd advise you to use SI-mKgs. 3. Apr 12, 2005 mathzeroh Wouldn't it be (37-215), because delta T is always T final minus T initial? And what did you mean by SI-mKgs? 4. Apr 12, 2005 dextercioby Nope, I don't use sign conventions. The heats are always positive and equal... (in this 2-body thermal interaction). SI-mKgs from Système International: metre, kilogramme, second. Daniel. 5. Apr 12, 2005 mathzeroh Oh. Well, when heat is negative, it means that it's lost, right? That's what I was taught. Let me get another crack at it with what you said. But I don't understand why you would have to set them equal, since Q of water plus Q of metal alloy should equal 0, right? 6. Apr 13, 2005 Gokul43201 Staff Emeritus A positive number can never be equal to a negative number. So, what Dexter is doing is simply making sure they are both positive, and equating them. If you want to strictly follow convention, you would swap Tf and Ti inside the brackets and then put a minus sign in front of one of the Q's.
This will give the same result (the two minus signs will cancel out).
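Putting numbers into dextercioby's two equations (a sketch that assumes $c_{\text{water}} = 4.18$ J/(g·°C), a value the thread never states explicitly):

```python
m_alloy, m_water = 25.0, 85.0     # grams
c_water = 4.18                     # J/(g·°C), assumed value
dT_alloy = 215 - 37                # the alloy cools by 178 °C
dT_water = 37 - 22                 # the water warms by 15 °C

# Heat given by the alloy equals heat received by the water (both positive):
c_alloy = m_water * c_water * dT_water / (m_alloy * dT_alloy)
assert abs(c_alloy - 1.2) < 0.01   # ≈ 1.2 J/(g·°C)
```

This matches the second of the two answers in post #1.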
https://quantumcomputing.stackexchange.com/questions/2106/what-is-surface-code-quantum-error-correction/2107#2107
# What is 'Surface Code'? (Quantum Error Correction) I am studying Quantum Computing and Information. I have come across the phrase 'surface code' but I can't find a brief explanation of what it is and how it works. Hopefully you guys can help me with this. Note: If you like, you can use some complicated math; I am familiar with quantum mechanics to some extent. • Welcome! To clarify: should answers assume that you have already taken a wikipedia-level look into toric codes and stabilizer codes? May 19 '18 at 14:31 • I don't know about toric codes or stabilizer codes :| But I will read about it May 19 '18 at 15:05 • Nice! Then that should be a great start I think. I suggest to perhaps take a quick look at those and put some more details into the question: things that you already think you understand and others that don't make so much sense yet. Once it's answered, this could be a very helpful Q&A for people that come after you: these are important concepts and the terminology is indeed a little confusing. May 19 '18 at 15:12 • Related: "Quantum Error Correction: Surface code vs. color code" from SE.Physics. – Nat May 19 '18 at 15:47 • I don't know about brief, but arxiv.org/abs/1208.0928 is where I started learning about the surface code from. May 19 '18 at 16:38 The surface codes are a family of quantum error correcting codes defined on a 2D lattice of qubits. Each code within this family has stabilizers that are defined equivalently in the bulk, but differ from one another in their boundary conditions. The members of the surface code family are sometimes also described by more specific names: The toric code is a surface code with periodic boundary conditions, the planar code is one defined on a plane, etc. The term 'surface code' is sometimes also used interchangeably with 'planar code', since this is the most realistic example of the surface code family.
The surface codes are currently a large research area, so I’ll just point you towards some good entry points (in addition to the Wikipedia article linked to above). The surface codes can also be generalized to qudits. For more on that, see here. • Does the surfaces codes works only for topological quantum computers? May 28 '18 at 2:15 • The surface codes would work for any qubits. In some sense, with surface codes you are creating a topological quantum computer using non-topological qubits. May 28 '18 at 7:30 The terminology of 'surface code' is a little bit variable. It might refer to a whole class of things, variants of the Toric code on different lattices, or it might refer to the Planar code, the specific variant on a square lattice with open boundary conditions. # The Toric Code I'll summarise some of the basic properties of the Toric code. Imagine a square lattice with periodic boundary conditions, i.e. the top edge is joined to the bottom edge, and the left edge is joined to the right edge. If you try this with a sheet of paper, you'll find you get a doughnut shape, or torus. On this lattice, we place a qubit on each edge of a square. ## Stabilizers Next, we define a whole bunch of operators. For every square on the lattice (comprising 4 qubits in the middle of each edge), we write $$B_p=XXXX,$$ acting a Pauli-$X$ rotation on each of the 4 qubits. The label $p$ refers to 'plaquette', and is just an index so we can later count over the whole set of plaquettes. On every vertex of the lattice (surrounded by 4 qubits), we define $$A_s=ZZZZ.$$ $s$ refers to the star shape and again, will let us sum over all such terms. We observe that all of these terms mutually commute. It's trivial for $[A_s,A_{s'}]=[B_p,B_{p'}]=0$ because Pauli operators commute with themselves and $\mathbb{I}$. More care is required with $[A_s,B_p]=0$, bot note that these two terms either have 0 or 2 sites in common, and pairs of different Pauli operators commute, $[XX,ZZ]=0$. 
## Codespace Since all these operators commute, we can define a simultaneous eigenstate of them all, a state $|\psi\rangle$ such that $$\forall s:A_s|\psi\rangle=|\psi\rangle\qquad\forall p:B_p|\psi\rangle=|\psi\rangle.$$ This defines the codespace of the code. We should determine how large it is. For an $N\times N$ lattice, there are $2N^2$ qubits (one on each edge), so the Hilbert space dimension is $2^{2N^2}$. There are $2N^2$ stabilizer terms ($N^2$ stars $A_s$ and $N^2$ plaquettes $B_p$). Each has eigenvalues $\pm 1$ in equal number (to see this, just note that $A_s^2=B_p^2=\mathbb{I}$), and when we combine them, each independent stabilizer halves the dimension of the Hilbert space, i.e. we would think that this uniquely defines a state. Now, however, observe that $\prod_sA_s=\prod_pB_p=\mathbb{I}$: each qubit is included in two stars and two plaquettes. This means that one of the $A_s$ and one of the $B_p$ is linearly dependent on all the others, and does not further reduce the size of the Hilbert space. In other words, the stabilizer relations define a Hilbert space of dimension $2^{2N^2-(2N^2-2)}=4$; the code can encode two qubits. ## Logical Operators How do we encode a quantum state in the Toric code? We need to know the logical operators: $X_{1,L}$, $Z_{1,L}$, $X_{2,L}$ and $Z_{2,L}$. All four must commute with all the stabilizers, and be linearly independent from them, and must generate the algebra of two qubits. Commutation of operators on the two different logical qubits: $$[X_{1,L},X_{2,L}]=0\quad [X_{1,L},Z_{2,L}]=0 \quad [Z_{1,L},Z_{2,L}]=0\quad [Z_{1,L},X_{2,L}]=0$$ and anti-commutation of the two on each qubit: $$\{X_{1,L},Z_{1,L}\}=0\qquad\{X_{2,L},Z_{2,L}\}=0$$ There are a couple of different conventions for how to label the different operators. I'll go with my favourite (which is probably the less popular one): • Take a horizontal line on the lattice. On every qubit, apply $Z$. This is $Z_{1,L}$. In fact, any horizontal line is just as good. • Take a vertical line on the lattice. On every qubit, apply $Z$.
This is $X_{2,L}$ (the other convention would label it as $Z_{2,L}$). • Take a horizontal strip of qubits, each of which is in the middle of a vertical edge. On every qubit, apply $X$. This is $Z_{2,L}$. • Take a vertical strip of qubits, each of which is in the middle of a horizontal edge. On every qubit, apply $X$. This is $X_{1,L}$. You'll see that the operators that are supposed to anti-commute meet at exactly one site, with an $X$ and a $Z$. Ultimately, we define the logical basis states of the code by $$|\psi_{x,y}\rangle: Z_{1,L}|\psi_{x,y}\rangle=(-1)^x|\psi_{x,y}\rangle,\qquad Z_{2,L}|\psi_{x,y}\rangle=(-1)^y|\psi_{x,y}\rangle$$ The distance of the code is $N$ because the shortest sequence of single-qubit operators that converts between two logical states consists of $N$ Pauli operators on a loop around the torus. ## Error Detection and Correction Once you have a code, with some qubits stored in the codespace, you want to keep it there. To achieve this, we need error correction. Each round of error correction comprises measuring the value of every stabilizer. Each $A_s$ and $B_p$ gives an answer $\pm 1$. This is your error syndrome. It is then up to you, depending on what error model you think applies to your system, to determine where you think the errors have occurred, and try to fix them. There's a lot of work going into fast decoders that can perform this classical computation as efficiently as possible. One crucial feature of the Toric code is that you do not have to identify exactly where an error has occurred to perfectly correct it; the code is degenerate. The only relevant thing is that you get rid of the errors without implementing a logical gate. For example, the green line in the figure is one of the basic errors in the system, called an anyon pair. If the sequence of $X$ rotations depicted had been enacted, then the stabilizers on the two squares with the green blobs would have given a $-1$ answer, while all others give $+1$.
To correct for this, we could apply $X$ along exactly the path where the errors happened, although our error syndrome certainly doesn't give us the path information. There are many other paths of $X$ errors that would give the same syndrome. We can implement any of these, and there are two options. Either the overall sequence of $X$ rotations forms a trivial path, or one that loops around the torus in at least one direction. If it's a trivial path (i.e. one that forms a closed path that does not loop around the torus), then we have successfully corrected the error. This is at the heart of the topological nature of the code; many paths are equivalent, and it all comes down to whether or not these loops around the torus have been completed. ## Error Correcting Threshold While the distance of the code is $N$, it is not the case that every combination of $N$ errors causes a logical error. Indeed, the vast majority of $N$ errors can be corrected. It is only once the errors reach a much higher density that error correction fails. There are interesting proofs, making connections to phase transitions or the random-bond Ising model, that are very good at pinning down when that is. For example, if you take an error model where $X$ and $Z$ errors occur independently at random on each qubit with probability $p$, the threshold is about $p=0.11$, i.e. $11\%$. It also has a finite fault-tolerant threshold (where you allow for faulty measurements and corrections, with some per-qubit error rate). # The Planar Code Details are largely identical to the Toric code, except that the boundary conditions of the lattice are open instead of periodic. This means that, at the edges, the stabilizers get defined slightly differently. In this case, there is only one logical qubit in the code instead of two. • This is a very nice answer but could I ask why the surface code only encodes 2 logical qubits?
I could not quite follow the argument you made to show that the stabilizer relations reduced the dimension of the Hilbert space to 4. Jan 31 '20 at 16:54 • You can assess the number of logical qubits for a stabilizer code as follows: the code subspace is defined by the projector onto the $+1$ eigenspace: $P=\Pi_n\frac{I+K_n}{2}$. The dimension of the space is rank($P$)=Tr($P$), which is 2 to the power of the number of logical qubits. For a product of stabilizers, if that product contains a Pauli matrix, it has trace 0, so the only non-zero terms are those which combine to give identity. Feb 5 '20 at 16:11 • Thank you for replying. Sorry for the basic questions but can I check that in the projector above, the identity acts on all $n$ physical qubits? And is $K_n$ the stabilizer, e.g. $XXXX$ on four qubits and $I$ on all remaining qubits? Feb 5 '20 at 20:22 • @user1936752 yes, exactly. Feb 6 '20 at 12:52
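The trace formula in the comments can be turned into a few lines of code. A sketch (mine, not the commenter's), restricted to Z-type stabilizers so no phase bookkeeping is needed, and using the three-qubit repetition code instead of the toric code to keep things small:

```python
from itertools import combinations

def mul(p, q):
    """Multiply two Z-type Pauli strings (characters 'I' or 'Z')."""
    return ''.join('I' if a == b else 'Z' for a, b in zip(p, q))

def codespace_dim(stabilizers, n):
    """Tr prod_i (I+K_i)/2: expanding the product, a term contributes 2^n
    to the trace iff its subset of stabilizers multiplies to the identity."""
    m = len(stabilizers)
    identity = 'I' * n
    count = 0
    for r in range(m + 1):
        for subset in combinations(stabilizers, r):
            prod = identity
            for s in subset:
                prod = mul(prod, s)
            if prod == identity:
                count += 1
    return 2**n * count // 2**m

# Three-qubit repetition code: stabilizers ZZI and IZZ fix |000> and |111>,
# so the codespace has dimension 2, i.e. one logical qubit.
assert codespace_dim(['ZZI', 'IZZ'], 3) == 2
```

Linearly dependent stabilizers are handled automatically: each subset whose product is the identity doubles the count, exactly the mechanism that leaves the toric code with dimension 4.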
https://formascirculares.com/d0sy1n/dimensions-math-2a-6d7f60
It is only after a clear knowledge of 2D shapes that we start to learn 3D shapes. (From AB = I and BA = I it follows that f_A and f_B are mutually inverse isomorphisms.) Teacher's Guides include lesson plans, mathematical background, games, helpful suggestions, and comprehensive resources for daily lessons. The new series, Dimensions Math, should be a great "next step" for those completing Primary Mathematics. Logically sequenced lessons provide a strong foundation for increasingly complicated math concepts. Lessons encourage discussion and help students reach fluency through engaging activities and strategies. The Dimensions Math 2A Textbook corresponds to the 2A workbook (required and sold separately). In 1994 Harold and Louise House felt led by the Lord to start a business. Some tools (dot cards, number cards, or ten-frame cards) can be made with materials available in most classrooms and homes. Dimensions Math Grades PreK–5. Dimensions Math® Workbook 2A. These engaging videos are hosted by Singapore math teacher and trainer Beth Curran. Lesson 1 - The Multiplication Table of 3. For example, the following declaration creates a two-dimensional array of four rows and two columns. Most of the objects we use in our daily life are 3D objects. www.singaporemath.com. Student textbook 2A. e.g., Figure $$\PageIndex{3}$$. Earlybird Kindergarten and Primary Mathematics Standards Edition Grades K–6.
Singapore Primary Math Workbooks are consumable and should be used in conjunction with the textbooks. Students should use textbooks and workbooks together. Dimensions Math® Textbook 3A. BJU Press Online & DVD courses and school/institution orders excluded. The question is, how do we find such a vector when there are infinitely many vectors in $\mathbb{R}^3$ to choose from? Written by American educators with many years of experience teaching Singapore-style math, the books aim to be more familiar and accessible to American parents and teachers. Chapter 1: Numbers to 1,000 Test A. Since the dimension of $\mathbb{R}^3$ is three and $U$ already contains two linearly independent vectors, all we need to do is find a vector in $\mathbb{R}^3$ that is not in the span of $U$. Dimensions Math® 1A-5B textbooks and workbooks are suitable for both classroom and home settings. Supplemental material, other math, and more. These engaging videos are hosted by Singapore math teacher and trainer Beth Curran. Each textbook lesson includes a corresponding workbook exercise that starts with pictorial representation and progresses to more challenging abstract problems. Isomorphism (Linear Algebra) – Serlo „Mathe für Nicht-Freaks", from Wikibooks. Blackline Masters can be printed directly from this website. 3.1 Dimension of ; 3.2 Dimension of the polynomial space; 3.3 Dimension of as a -vector space; 3.4 Dimension of the null space; 4 Properties of the dimension; 5 The dimension formula.
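The stranded linear algebra fragment above (finding a vector of $\mathbb{R}^3$ outside the span of two linearly independent vectors) has a standard constructive answer: take their cross product. A sketch with made-up vectors:

```python
def cross(u, v):
    """Cross product in R^3; the result is orthogonal to both inputs."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def det3(a, b, c):
    """Scalar triple product a . (b x c) = det of the matrix with rows a, b, c."""
    bc = cross(b, c)
    return sum(a[i] * bc[i] for i in range(3))

u1 = (1.0, 0.0, 2.0)    # two linearly independent vectors spanning U
u2 = (0.0, 1.0, 1.0)
u3 = cross(u1, u2)      # orthogonal to U, hence not in span(U)

# Nonzero determinant: u1, u2, u3 form a basis of R^3.
assert det3(u1, u2, u3) != 0.0
```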
Aspects of the Singapore math curriculum have been updated for clarity and relevance, while preserving the solid foundation that makes it unique. Table of Contents. The series features vibrant imagery and content that introduces concepts through logical sequencing. However, every plane can be represented as the intersection of $n-2$ hyperplanes with linearly independent normal vectors, and must accordingly satisfy just as many coordinate equations simultaneously. Following our first day with Dimensions Math we continued to use it 2-3 days per week. Volume Purchasing Information. Textbook lessons begin with a task that allows students to apply their previous knowledge and learn through discussion. Manipulatives. 1A is the material for the first half of the year. $$M(\vec{r}, \vec{F}) = r_1 F_2 - r_2 F_1$$ A line segment drawn on a surface is a one-dimensional object, as it has only length and no width. Dimensions Math® Textbook 2B. It's also considered a sequel to the Primary Math programs. Use the slash / as a fraction bar; complex values e.g.
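The planar moment formula is the scalar "cross product" $r_1F_2 - r_2F_1$; as a quick illustrative check:

```python
def torque_2d(r, F):
    """z-component of torque for force F applied at position r in the plane."""
    return r[0] * F[1] - r[1] * F[0]

assert torque_2d((1.0, 0.0), (0.0, 3.0)) == 3.0   # force perpendicular to the arm
assert torque_2d((2.0, 0.0), (5.0, 0.0)) == 0.0   # force along the arm: no torque
```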
Chapters in 2A cover numbers to 1,000; addition and subtraction; length; weight; and multiplication and division by 2, 5, and 10 (see 2B for the second half of the year). Singapore math teaches your child to think mathematically. This textbook also features hands-on workpage elements alongside very simple and direct lessons. Dimensions Math is a comprehensive math curriculum for children from PreK through 8th grade. Once students have mastered a concept with the use of concrete and pictorial aids, they are ready to take on more abstract mathematical problem sets.

In programming, arrays can have more than one dimension. When plotting a point in space by hand, sketch a rectangular prism to help locate the point.
In mathematics, a subspace (also called a linear subspace) is a subset of a vector space that is itself a vector space; the vector space operations of vector addition and scalar multiplication are inherited from the ambient space. Vector spaces are the central object of study in linear algebra, and the elements of a vector space are called vectors. The 2-dimensional shapes or objects in geometry are flat plane figures that have two dimensions: length and width.

The implementation of this curriculum suits today's needs, while the progression and scope that define Singapore math remain intact. It emphasizes the CPA (Concrete, Pictorial, Abstract) progression. The Learning House Inc. is a family-owned business providing educational resources to schools, home schools, and parents across Canada. Most students transitioning from other math programs will need to complete a different seventh-grade level course or Dimensions Math 6 before moving on to Dimensions Math 7.
The Dimensions Math Level 2 Kit contains four books: Textbooks 2A and 2B and Workbooks 2A and 2B. It's easy to prepare for Dimensions Math® lessons: simply gather the items listed on the Materials List for each chapter. This test covers material taught in Dimensions Math 2A. A variety of exercises are presented, from pictorial to abstract. Dimensions Math 1–5 follows the same Singapore math approach as Primary Mathematics 1–6; students reach fluency by collecting various strategies along the way and applying them to new problems. I was excited to review Singapore Dimensions Math 2A to see what we liked or didn't like about it. Word problems give students a sense of math in real-world contexts. Concepts are taught through immersive, visual scenarios created with five characters: Emma, Alex, Sofia, Dio, and Mei. Workbooks for PreK–2 are perforated. A video subscription is available for Dimensions Math® Grades 3–5. Now, they've added yet another option: Dimensions Math, a comprehensive math curriculum for children from preschool to 8th grade.

In mathematics, a matrix (plural: matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns; a common exercise is to generate a two-dimensional array with random numbers. A polygon (2-dimensional) is usually represented as a line that closes at its endpoints, representing the boundary of a two-dimensional region. The coordinate axes allow us to name any location within the plane.
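A two-dimensional array filled with random numbers, as in the exercise just mentioned, can be generated as follows (a minimal sketch; the 3×4 shape and single-digit range are arbitrary choices):

```python
import random

rows, cols = 3, 4
# nested list comprehension: one inner list per row
grid = [[random.randint(0, 9) for _ in range(cols)] for _ in range(rows)]

for row in grid:
    print(row)  # each row is a list of `cols` random digits
```

The same shape generalizes to more dimensions by nesting further comprehensions.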
And as a homeschooler, I prefer to work with the methods and systems that work best for my own kids. In 1994, Harold and Louise House felt led by the Lord to start a business. It's no wonder that many parents feel overwhelmed by all the choices! The Dimensions Math 2A Textbook corresponds to the 2A Workbook (required and sold separately). Dimensions Math includes levels 6 through 8, but level 7 would be the natural starting place for those who have completed Primary Mathematics 6. The series follows the principles outlined in the Singapore Mathematics Framework and uses the Concrete > Pictorial > Abstract approach. The consumable workbooks provide additional practice for consolidation or homework, with room to work out the problems. Dimensions Math PK-5 is the latest Singapore math curriculum. A review with a look inside: https://www.ourpieceofearth.com/singapore-dimensions-math-2a-review

By the way, in 2-D a single scalar number is sufficient to describe a force's moment. In C#, the declaration `int[,] array = new int[4, 2];` creates a two-dimensional array with four rows and two columns; a corresponding three-dimensional declaration creates an array with dimensions 4, 2, and 3. The number n = dim(V) is called the dimension of V; it can take the values n = 1, 2, 3, … and also n = ∞. A surface (3-dimensional) can be represented using a variety of strategies, such as a polyhedron consisting of connected polygon faces. To sketch a point in space, start by sketching the coordinate axes. The invertibility statement follows from the fact that isomorphic vector spaces have equal dimension: K^m and K^n are isomorphic only for m = n.
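The dimension n = dim(V) of a concrete span can be computed mechanically by row-reducing the spanning vectors and counting pivots. A minimal sketch over exact rationals (the example vectors are made up):

```python
from fractions import Fraction

def rank(vectors):
    """Dimension of the span of the given vectors, via Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0  # number of pivots found so far
    for col in range(len(rows[0]) if rows else 0):
        # find a row at or below position r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # eliminate this column from the rows below the pivot row
        for i in range(r + 1, len(rows)):
            factor = rows[i][col] / rows[r][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

print(rank([(1, 2, 0), (0, 1, 1), (1, 3, 1)]))  # third vector is the sum of the first two -> 2
```

Using `Fraction` avoids the floating-point round-off that would otherwise make "is this entry zero?" unreliable.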
A standard 2-D coordinate-geometry exercise is to investigate the mutual position of two lines: whether they are identical, (strictly) parallel, or whether they intersect. Dimensions Math (7-8) is an updated and more colorful version that is now aligned to the CCSS. Primary Mathematics Common Core Edition covers Grades 1–5. Please note: 2A is for the first half of the year. Dimensions Math At Home™️, a new digital subscription, provides in-depth lesson instructions for an entire Dimensions Math® school year. Here's how it works: $85 per student for a whole grade of videos, covering all the lessons in both A and B textbooks and workbooks. Containing the exercises the student does independently, workbooks provide the practice essential to skill mastery.

Another array exercise: build a two-column array whose first column contains values from 5 to 1000 and whose second column contains values from 1 to 7. Sketch the point $$(-2,3,-1)$$ in three-dimensional space. In higher-dimensional spaces, however, the implicit form no longer describes a plane but a hyperplane of dimension n - 1. Dimension is a concept in mathematics that essentially denotes the number of degrees of freedom of a motion within a given space.

REVIEW: Dimensions Math Grade 2. This will be a quick review and flip-through of Dimensions Math!
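The mutual-position exercise for two lines in the plane can be decided from the coefficients alone. A minimal sketch, assuming each line is given as a*x + b*y = c with (a, b) != (0, 0), and with made-up coefficient triples:

```python
def classify(l1, l2):
    """Relative position of two lines a*x + b*y = c in the plane.

    Returns 'intersecting', 'parallel', or 'identical'.
    Assumes (a, b) != (0, 0) for each line."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    if a1 * b2 - a2 * b1 != 0:          # normal vectors linearly independent
        return "intersecting"
    # normals dependent: identical iff the constants are proportional too
    if a1 * c2 - a2 * c1 == 0 and b1 * c2 - b2 * c1 == 0:
        return "identical"
    return "parallel"

print(classify((1, 1, 2), (1, -1, 0)))   # intersecting
print(classify((1, 1, 2), (2, 2, 4)))    # identical (same line, scaled by 2)
print(classify((1, 1, 2), (1, 1, 3)))    # parallel
```

Working with the implicit form avoids special-casing vertical lines, which have no slope-intercept representation.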
Singapore Dimensions Math Tests 2A. Lessons are laid out clearly, and activities are designed for the whole class, small groups, and extension. In a home setting, parent or teacher involvement is essential for success, as interaction is a key element in learning concepts. The curriculum centers on the use of number bonds and bar models to solve word problems, and it encourages the mastery of mental math to gain number sense, computational fluency, and logical thinking. Dimensions Math is published by Singapore Math Inc. and was designed to cater to the needs of U.S. teachers and students; it is said to be a sequel to Primary Mathematics. It provides more guidance and explanations than the other versions, and the clear and consistent layout will make daily teaching easier. Students completing Singapore Math's Primary Mathematics series are faced with an unusual problem when they complete the sixth level.

A vector space, or linear space, is an algebraic structure used in many areas of mathematics. In two-dimensional space, the coordinate plane is defined by a pair of perpendicular axes. For each rotation R of 4-space (fixing the origin), there is at least one pair of orthogonal 2-planes A and B, each of which is invariant and whose direct sum A ⊕ B is all of 4-space. In this article we show what matrices are, how they are structured, and how to compute with them: the layout of matrices, and how to pass from a linear system of equations to a matrix.
Workbooks offer independent practice that follows a careful progression of exercise variation. The concept of dimension arises in a wide variety of mathematical contexts.
https://solvedlib.com/explain-the-interaction-within-the-epidemiologic,59752
# Explain the interaction within the epidemiologic triangle.

###### Question:

Explain the interaction within the epidemiologic triangle.
https://cs.stackexchange.com/questions/80086/what-is-the-utility-of-proving-p-np-if-we-cant-find-an-algorithm-that-can-solve
What is the utility of proving P=NP if we can't find an algorithm that can solve any NP problem in polynomial time?

Here we see a very interesting attempt to show that $\mathrm{P} \ne \mathrm{NP}$ by Norbert Blum. Here we see 116 previous attempts at solving P vs. NP. Here we see the P vs. NP problem defined as: The informal term quickly, used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function of the size of the input to the algorithm (as opposed to, say, exponential time). I think it's even less useful: even if P = NP, it is possible that no one finds an algorithm. Creating a (useful) algorithm is independent of proving the theorem. Also interesting is that someone could create an algorithm that solves an NP-complete problem in polynomial time without proving P=NP; they would be unable to prove the algorithm correct, though. My question is: What is the utility of proving P=NP if we can't find an algorithm that can solve any NP problem in polynomial time?

• Maybe there are algorithms which have been proved to have optimal running time but weren't analyzed explicitly. – Andrew MacFie Aug 15 '17 at 13:19
• It appears that Norbert Blum in your reference (attempts to) prove quite the opposite, namely P unequal NP (quoting from the abstract: "This implies P not equal NP".) – user53923 Aug 15 '17 at 14:47
• Practical utility? None. The relevance of the P-NP question in practice is generally overstated (imho). It is, of course, of very much interest for complexity theory. The two things are not the same, even if some complexity theorists seem to forget that at times. – Raphael Aug 16 '17 at 19:52

2 Answers

In short, if we prove $P=NP$, then we know a whole lot more about computation than we did before, even if we don't find the algorithm, and that was the objective behind research on $P=NP$ all along.
It's much worse, because most researchers believe that $P\ne NP$, meaning that even when the proof is discovered, we won't be able to find a polynomial-time algorithm for SAT, not because we're looking in the wrong places, but because no such algorithm exists. But that doesn't worry most complexity theorists, because the would-be algorithm for SAT that you describe is only a practical application, a spin-off, of the proof. And if you care about practical applications, then why are you working on $P=NP$? It's the most theoretical question in computer science! Rather, the motivation behind research in this direction is* to better understand computation, very much akin to why mathematicians care about the Riemann hypothesis, and why physicists build giant particle colliders, even though we already have giant databases of prime numbers and even though the discovery of new particles mostly does not help us build faster rockets or better fusion reactors. But what don't we understand about computation? We can build marvellous AI systems that recognize faces and Chinese characters, and predict the weather! Those are results of the form "this problem can be solved with so-and-so many resources"; on the flip side, one can ask: for a given problem, what is the minimum amount of resources needed? The current state of affairs in the last regard is rather embarrassing, because, as far as we know, none of the following has been ruled out:

• SAT has a linear-time (!) algorithm**
• SAT has an algorithm that uses $O(n^{1.802})$ time and $O(\sqrt{n})$ space
• every language in $NP$ has a circuit with $5n$ gates
• The Succinct Circuit Satisfiability problem***, which is $NEXP$-Complete ($NEXP$ is the exponential-time version of $NP$), can be solved in polynomial time with a randomized algorithm with bounded error (i.e. it gives the wrong answer only with probability $\leq \frac{1}{n}$)
• Everything that can be computed using a polynomial amount of memory can be solved using a polynomial amount of time.
For example, the Quantified Boolean Formula problem, which is like SAT except instead of a single $\exists$ quantifier, there are any number of $\exists x\colon\forall y\colon\exists z\colon\cdots$ quantifiers. Until these ridiculous scenarios are ruled out, we cannot honestly say that we understand computation in any depth. And that is the utility that you ask for, and why anybody at all works on $P=NP$. I encourage you to read this wonderful survey by Scott Aaronson, where in section 1.2 he addresses all the usual objections, like: what if $P=NP$ but we can't find the algorithm, or what if we do, but its exponent is hopeless, or...

*as far as I can tell, as someone who is not a professional complexity theorist, but who did write his Master's thesis on the topic.

** We do not know whether nondeterminism gives you an advantage for solving SAT, but we do know, since 1983 [1], that nondeterminism gives you an advantage for some language, because there is a language $L$ solvable by a non-deterministic machine in $O(n)$ time but not by any deterministic machine in $O(n)$ time. In 2001, that result was improved [2] to a language in $NTime(n)$ but not in $DTime(n\sqrt{\log(n)})$.

*** The Succinct Circuit Satisfiability problem is this: fix some encoding of Boolean circuits as binary strings. You are given a circuit on $n$ inputs. Interpret the truth table of this circuit as a binary string of length $2^n$. Interpret that string as representing a circuit. Is that circuit satisfiable?

[1] Paul, Wolfgang J., et al. "On determinism versus non-determinism and related problems." Foundations of Computer Science, 24th Annual Symposium on. IEEE, 1983.
[2] Santhanam, Rahul. "On separators, segregators and time versus space." Computational Complexity, 16th Annual IEEE Conference on. IEEE, 2001.

• Wait, what? How does that linear-time algorithm for SAT behave?
– rus9384 Aug 15 '17 at 14:18 • @rus9384 We don't have a candidate for that algorithm; I only mean to say that currently we cannot prove that it does not exist. Also, I should mention that the best bound is not linear, but $\omega(n\sqrt{\log^\star(n)})$, which is slightly better than linear. – Lieuwe Vinkhuijzen Aug 15 '17 at 14:44 • Does this mean that SAT can't be solved in $O(n)$ time? Other answers on stackexchange said that no bound $\omega(n)$ is known. Is this a very recent result? – rus9384 Aug 15 '17 at 20:30 • @rus9384 From [this cstheory post](cstheory.stackexchange.com/q/1079/35749) we are redirected to "On determinism versus nondeterminism and Related Problems" by Paul et al., 1983, which shows $DTime(n)\ne NTime(n)$, and to "On Separators, Segregators and Time versus Space" by Santhanam, 2001, which shows the $n\sqrt{\log^\ast(n)}$ bound, among other things. Since a nondeterministic machine can solve SAT in linear time, the bound applies to SAT. Of course I may be misinterpreting the results, which talk about sets and not about SAT, in which case I would appreciate having that pointed out. – Lieuwe Vinkhuijzen Aug 16 '17 at 11:41 • "It's the most theoretical question in computer science!" -- Citation needed. – Raphael Aug 16 '17 at 19:53 It wouldn't have a direct practical utility. It would "merely" be a huge mathematical discovery with a lot of implications, more interesting than (say) Fermat's Last Theorem was. And of course, even if the proof itself doesn't provide a practical algorithm for NP problems (as opposed to a merely polynomial one: e.g. $O(N^{1000})$ is polynomial but useless in practice), it may well lead to the discovery of one in the future.
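To make the discussion of SAT's resource requirements concrete, the trivial upper bound is exhaustive search over all $2^n$ assignments. Here is a minimal sketch of that brute-force check (my own illustration, not anyone's proposed fast algorithm — the point of the open problems above is that we cannot rule out something enormously better):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide CNF satisfiability by trying all 2^n assignments.

    Each clause is a list of ints: k means variable k, -k means NOT k
    (variables are 1-indexed). Runs in O(2^n * total clause size) time.
    """
    for assignment in product([False, True], repeat=n_vars):
        # a clause is satisfied if at least one of its literals is true
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

print(brute_force_sat([[1, -2], [2, 3]], 3))  # (x1 or not x2) and (x2 or x3) -> True
print(brute_force_sat([[1], [-1]], 1))        # x1 and not x1 -> False
```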
https://socratic.org/questions/how-do-you-integrate-e-2x-1-1
# How do you integrate (e^(2x-1))-1?

$\int \left({e}^{2x-1} - 1\right) \, \mathrm{d}x = \frac{1}{2} {e}^{2x-1} - x + C$

Working:

$\int \left({e}^{2x-1} - 1\right) \, \mathrm{d}x$
$= \int \frac{\mathrm{d}}{\mathrm{d}x} \left(\frac{1}{2} {e}^{2x-1}\right) - 1 \, \mathrm{d}x$
$= \frac{1}{2} {e}^{2x-1} - x + C$
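A quick numerical sanity check of the result (a sketch of my own, not part of the original answer): the derivative of the claimed antiderivative should reproduce the integrand.

```python
import math

def F(x):  # claimed antiderivative, ignoring the constant C
    return 0.5 * math.exp(2 * x - 1) - x

def f(x):  # integrand: e^(2x-1) - 1
    return math.exp(2 * x - 1) - 1

# a central-difference approximation of F'(x) should match f(x)
h = 1e-6
for x in (-1.0, 0.0, 0.5, 2.0):
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - f(x)) < 1e-5
print("antiderivative checks out")
```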
https://quant.stackexchange.com/questions/58435/someone-help-me-understand-why-for-portfolio-variance-or-parametric-value-at-ris
# Someone help me understand why for portfolio variance or Parametric Value at Risk we have to compute the covariance matrix? I understand that portfolio variance is computed through $$w'Cw$$, where $$w$$ is the vector of weights and $$C$$ is the covariance matrix. However, what I don't get is this: why can't this portfolio variance simply be calculated by observing the daily portfolio-level returns and taking their variance? That would avoid the problem of having to compute an unwieldy large covariance matrix, no? I am asking in the context of Value-at-Risk (parametric method) - it is named the variance-covariance method because it uses the covariance matrix explicitly - what I don't understand is why we can't do it in similar fashion to historical simulation (where correlation is factored in implicitly by just taking daily portfolio value data) and assume a distribution over it. This is clearly simpler, but no finance book has proposed doing this for parametric VaR? Consider that you have positions in $$N$$ assets, with market values $$S$$, and that the daily PnL is obtained by multiplying by the daily returns vector $$R$$, a random vector with some unknown joint probability distribution: $$p = S^T R$$ You are interested in the variance of $$p$$ for constant $$S$$: $$Var(p) = E[(p-E[p])^2] = E[(S^TR - E[S^TR])^2]$$ $$Var(p) = E[(p-E[p])^2] = S^T C S$$ where $$C$$ is the covariance matrix of $$R$$. The advantage of applying a parametric model to $$R$$ is that you can derive many useful results in terms of the underlying assets, e.g. VaR allocation and optimization. The disadvantage is that you have to assume a (possibly incorrect) statistical joint distribution of returns and parameterize it. On the other hand, you can obtain a non-parametric measure of VaR by evaluating historical values of $$p$$.
This has the advantage of not requiring any statistical distribution to be assumed, or any parametric model of $$p$$ (since we use empirical data), so it is perhaps closer to the truth. However, its use for deriving any meaningful results is limited - you only get a VaR metric for some confidence level. The in-between approach of applying a parametric model to $$p$$ itself seems to offer no advantage over either. You need to assert a statistical distribution which may be false, and you don't get any ability to evaluate useful results in terms of individual assets. I have only ever seen this applied where the VaR is extrapolated out to predict very small tail probabilities, e.g. the 99.97% confidence level. The problem there is that even the slightest error in the assumed distribution (and most choose a normal!) will vastly affect the results in this tail, to the point where it is probably entirely unreliable.
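The identity $$Var(S^T R) = S^T C S$$ also holds exactly for sample estimates: the variance of the observed PnL series equals $$S^T C S$$ computed from the sample covariance matrix. A quick numerical check (a sketch with made-up data; all names and numbers are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_days = 4, 1000
S = np.array([100.0, 250.0, 50.0, 75.0])             # market values of the positions
R = rng.normal(0.0, 0.01, size=(n_assets, n_days))   # daily returns, one row per asset

# portfolio-level daily PnL: p = S^T R
p = S @ R

# route 1: variance of the observed portfolio PnL series
var_direct = p.var(ddof=1)

# route 2: S^T C S with the sample covariance matrix of asset returns
C = np.cov(R, ddof=1)   # np.cov treats each row as one variable
var_matrix = S @ C @ S

assert np.isclose(var_direct, var_matrix)
```

So the two routes agree by construction; as the answer explains, the reason to carry the covariance matrix is everything else it lets you derive (allocation, optimization), not the variance number itself.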
https://nbviewer.jupyter.org/github/pambot/enigma-pandas/blob/master/enigma-pandas.ipynb
# The Enigma Guide to Avoiding an Actual Pandas Pandemonium¶ When you first start out using Pandas, it's often best to just get your feet wet and deal with problems as they come up. Then, the years pass, the amazing things you've been able to build with it start to accumulate, but you have a vague inkling that you keep making the same kinds of mistakes and that your code is running really slowly for what seems like pretty simple operations. This is when it's time to dig into the inner workings of Pandas and take your code to the next level. Like with any library, the best way to optimize your code is to understand what's going on underneath the syntax. ## Topics¶ • Writing good code • Common silent failures In [1]: # for neatness, it helps to keep all of your imports up top import sys import traceback import numba import numpy as np import pandas as pd import numpy.random as nr import matplotlib.pyplot as plt % matplotlib inline In [2]: # generate some fake data to play with data = { "day_of_week": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"] * 1000, "booleans": [True, False] * 3500, "positive_ints": nr.randint(0, 100, size=7000), "mixed_ints": nr.randint(-100, 100, size=7000), "lat1": nr.randn(7000) * 30, "lon1": nr.randn(7000) * 30, "lat2": nr.randn(7000) * 30, "lon2": nr.randn(7000) * 30, } df_large = pd.DataFrame(data) In [3]: df_large.head() Out[3]: day_of_week booleans positive_ints mixed_ints lat1 lon1 lat2 lon2 0 Monday True 42 79 5.745216 -43.094330 15.617175 -24.790654 1 Tuesday False 52 -30 -18.486213 19.586532 17.345942 56.579815 2 Wednesday True 44 -69 3.417936 -33.305177 20.157805 -25.048502 3 Thursday False 64 -9 -31.167047 -37.542818 -11.478706 -55.032297 4 Friday True 6 -31 21.443933 -33.310642 -63.551866 33.354095 In [4]: small = { 'a': [1, 1], 'b': [2, 2] } df_small = pd.DataFrame(small) df_small Out[4]: a b 0 1 2 1 1 2 ## Writing good code¶ Before we do the "cool" stuff like writing faster and more 
memory-optimized code, we need to do it on a foundation of some fairly mundane-seeming coding best practices. These are the little things, such as naming things expressively and writing sanity checks, that will help keep your code maintainable and readable by your peers. ### Sanity checking is simple and totally worth it¶ Just because this is mostly just data analysis, and it might not make sense to put up a whole suite of unit tests for it, doesn't mean you can't do any kind of checks at all. Peppering your notebook code with assert can go a long way without much extra work. Above, we made a DataFrame df_large that contains numbers with some pre-defined rules. For example, you can check for data entry errors by trimming whitespace and checking that the number of entries stays the same: In [5]: large_copy = df_large.copy() In [6]: assert large_copy["day_of_week"].str.strip().unique().size == large_copy["day_of_week"].unique().size In [7]: large_copy.loc[0, "day_of_week"] = "Monday " assert large_copy["day_of_week"].str.strip().unique().size == large_copy["day_of_week"].unique().size --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-7-bb8c0d4a8066> in <module>() 1 large_copy.loc[0, "day_of_week"] = "Monday " 2 ----> 3 assert large_copy["day_of_week"].str.strip().unique().size == large_copy["day_of_week"].unique().size AssertionError: ### Use consistent indexing¶ Pandas grants you a lot of flexibility in indexing, but it can add up to a lot of confusion later if you're not disciplined about keeping a consistent style. 
This is one proposed standard (followed by some examples of indexing that technically produce the same effect): In [8]: # for getting columns, use a string or a list of strings for multiple columns # note: a one-column DataFrame and a Series are not the same thing one_column_series = df_large["mixed_ints"] two_column_df = df_large[["mixed_ints", "positive_ints"]] # for getting a 2D slice of data, use loc data_view_series = df_large.loc[:10, "positive_ints"] data_view_df = df_large.loc[:10, ["mixed_ints", "positive_ints"]] # for getting a subset of rows, also use loc row_subset_df = df_large.loc[:10, :] In [9]: # here's some alternatives for the above # one way is to use df.loc for everything, but it can look clunky one_column_series = df_large.loc[:, "mixed_ints"] two_column_df = df_large.loc[:, ["mixed_ints", "positive_ints"]] # you can use iloc, which is loc but with indexes, but it's not as clear # also, you're in trouble if you ever change the column order data_view_series = df_large.iloc[:10, 2] data_view_df = df_large.iloc[:10, [3, 2]] # you can get rows like you slice a list, but this can be confusing row_subset_df = df_large[:10] # why confusing? because df_large[10] actually gives you column 10, not row 10 ### But don't use chained indexing¶ What is chained indexing? It's when you separately index the columns and the rows, which will make two separate calls to __getitem__() (or worse, one call to __getitem__() and one to __setitem__() if you're making assignments, which we demonstrate below). It's not so bad if you're just indexing and not making assignments, but it's still not ideal from a readability standpoint because if you index the rows in one place and then index a column, unless you're very disciplined about variable naming, it's easy to lose track of what exactly you indexed. 
In [10]: data_view_series = df_large[:10]["mixed_ints"] data_view_df = df_large[:10][["mixed_ints", "positive_ints"]] # this is also chained indexing, but low-key row_subset_df = df_large[:10] data_view_df = row_subset_df[["mixed_ints", "positive_ints"]] ## Common silent failures¶ Even if you do all of the above, sometimes Pandas' flexibility can lull you into making mistakes that don't actually make you error out. These are particularly pernicious because you often don't realize something is wrong until something far downstream doesn't make sense, and it's very hard to trace back to what the cause was. ### View vs. Copy¶ A view and a copy of a DataFrame can look identical to you in terms of the values it contains, but a view references a piece of an existing DataFrame and a copy is a whole different DataFrame. If you change a view, you change the existing DataFrame, but if you change a copy, the original DataFrame is unaffected. Make sure you aren't modifying a view when you think you're modifying a copy and vice versa. It turns out that whether you're dealing with a copy or a view is very difficult to predict! Internally, Pandas tries to optimize by returning a view or a copy depending on the DataFrame and the actions you take. You can force Pandas to make a copy for you by using df.copy() and you can force Pandas to operate in place on a DataFrame by setting inplace=True when it's available. When to make a copy and when to use a view? It's hard to say for sure, but if your data is small or your resources are large and you want to go functional and stateless, you can try making a copy for every operation, like Spark would, since it's probably the safest way to do things. On the other hand, if you have lots of data and a regular laptop, you might want to operate in place to prevent your notebooks from crashing.
In [11]: # intentionally making a copy will always make a copy small_copy = df_small.copy() small_copy Out[11]: a b 0 1 2 1 1 2 In [12]: # seeing inplace=True in the API reference means you have a choice to make a copy or not small_drop = small_copy.drop("b", axis=1) small_drop Out[12]: a 0 1 1 1 In [13]: # if you don't assign to anything when inplace=False (default), the operation doesn't affect the input small_copy.drop("b", axis=1) small_copy Out[13]: a b 0 1 2 1 1 2 In [14]: # if you do set inplace=True, the same operation will actually alter the input # fun fact: setting inplace=True will cause this to return None instead of a DataFrame small_copy.drop("b", axis=1, inplace=True) small_copy Out[14]: a 0 1 1 1 In [15]: # let's see what happens if you assign to a copy small_copy = df_small.copy() # you should always use loc for assignment small_copy.loc[0, 'b'] = 4 small_copy Out[15]: a b 0 1 4 1 1 2 In [16]: # original is unchanged - this is why making copies is handy df_small Out[16]: a b 0 1 2 1 1 2 In [17]: # do not use chained indexing for assignment small_copy[:1]['b'] = 4 /Users/pamelawu/.virtualenvs/link/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy ### Watch out for out of order processing¶ In Jupyter notebooks, it's almost unavoidable to change and re-process cells out of order - we know we shouldn't do it, but it always ends up happening. This is a subset of the view vs. copy problem because if you know that you're making a change that fundamentally alters the properties of a column, you should eat the memory cost and make a new copy, or something like this might happen where you run the latter two cells in this block over and over and see the max value get pretty unstable.
In [18]: large_copy = df_large.copy() In [55]: large_copy.loc[0, "positive_ints"] = 120 large_copy["positive_ints"].max() Out[55]: 2673 In [56]: large_copy["positive_ints"] = large_copy["positive_ints"] * 3 large_copy["positive_ints"].max() Out[56]: 8019 ### Never set errors to "ignore"¶ Some Pandas methods allow you to ignore errors by default. This is almost always a bad idea because ignoring errors means it just puts your unparsed input in place of where the output should have been. Note in the following example that if you were not overly familiar with what Pandas outputs should be, seeing an output of array type might not seem that unusual to you, and you might just move on with your analysis, not knowing that something had gone wrong. In [21]: parsed_dates = pd.to_datetime(["10/11/2018", "01/30/1996", "04/15/9086"], format="%m/%d/%Y", errors="ignore") parsed_dates Out[21]: array(['10/11/2018', '01/30/1996', '04/15/9086'], dtype=object) In [22]: # suppressing this error because it's very large!
try: pd.to_datetime(["10/11/2018", "01/30/1996", "04/15/9086"], format="%m/%d/%Y") except Exception: traceback.print_exc(limit=1) Traceback (most recent call last): File "/Users/pamelawu/.virtualenvs/link/lib/python3.6/site-packages/pandas/core/tools/datetimes.py", line 377, in _convert_listlike values, tz = conversion.datetime_to_datetime64(arg) TypeError: Unrecognized value type: <class 'str'> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<ipython-input-22-12145c38fd6e>", line 3, in <module> pd.to_datetime(["10/11/2018", "01/30/1996", "04/15/9086"], format="%m/%d/%Y") pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 9086-04-15 00:00:00 In [23]: # when the offending timestamp is removed, this is what the output is supposed to look like pd.to_datetime(["10/11/2018", "01/30/1996"], format="%m/%d/%Y") Out[23]: DatetimeIndex(['2018-10-11', '1996-01-30'], dtype='datetime64[ns]', freq=None) ### The object dtype can hide mixed types¶ Each Pandas column has a type, but there is an uber-type called object, which means each value is actually just a pointer to some arbitrary object. This allows Pandas to have a great deal of flexibility (i.e. columns of lists or dictionaries or whatever you want!), but it can result in silent fails. Spoiler alert: this won't be the first time object type causes us problems. I don't want to say you shouldn't use it, but once you're in production mode, you should definitely use it with caution. 
In [24]: # we start out with integer types for our small data small_copy = df_small.copy() small_copy.dtypes Out[24]: a int64 b int64 dtype: object In [25]: # reset one of the columns' dtype to object small_copy["b"] = small_copy["b"].astype("object") small_copy.dtypes Out[25]: a int64 b object dtype: object In [26]: # now let's set ourselves up for a problem small_copy["b"] = [4, "4"] small_copy Out[26]: a b 0 1 4 1 1 4 In [27]: # the unmodified column behaves as expected small_copy.drop_duplicates("a") Out[27]: a b 0 1 4 In [28]: # not only is this not what you wanted, but it would be totally buried in the data small_copy.drop_duplicates("b") Out[28]: a b 0 1 4 1 1 4 ### Tread carefully with Pandas schema inference¶ When you load in a big, mixed-type CSV and Pandas gives you the option to set low_memory=False when it encounters some data it doesn't know how to handle, what it's actually doing is just making that entire column object type so that the numbers it can convert to int64 or float64 get converted, but the stuff it can't convert just sits there as str. This makes the column values able to peacefully co-exist, for now. But once you try to do any operations on them, you'll see that Pandas was just trying to tell you all along that you can't assume all the values are numeric. Note: Remember, in Python, NaN is a float! So if your numeric column has them, cast them to float even if they're actually int. In [29]: mixed_df = pd.DataFrame({"mixed": [100] * 100 + ["-"] + [100] * 100, "ints": [100] * 201}) In [30]: mixed_df = pd.read_csv("test_load.csv", header=0) mixed_df.dtypes Out[30]: mixed object ints int64 dtype: object In [31]: # the best practices way is to specify schema and properly set your null values mixed_df.dtypes Out[31]: mixed float64 ints int64 dtype: object ## Speeding up¶ Now that you have great coding habits, it's time to try to up the performance.
There's a range of things you can use, from vectorization to just-in-time compilation to get your code running faster. To measure bottlenecks and quantify performance gains, let's introduce timeit, a nifty Jupyter notebook tool for performance measurement. All you need to know is that putting %timeit before a single line of code will measure the runtime of that line, while putting %%timeit in a code block will measure the runtime for the whole block. In [32]: # example taken from StackOverflow: https://bit.ly/2V2UZYr def haversine(lat1, lon1, lat2, lon2): """Haversine calculates the distance between two points on a sphere.""" lat1, lon1, lat2, lon2 = map(np.deg2rad, [lat1, lon1, lat2, lon2]) dlat = lat2 - lat1 dlon = lon2 - lon1 a = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2)**2 c = 2 * np.arcsin(np.sqrt(a)) return c In [33]: %%timeit haversine(100, -100, 50, -50) 14.3 µs ± 308 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [34]: large_copy = df_large.copy() ### Choosing the right way to iterate through rows¶ In [35]: %%timeit # iterrows is a generator that yields indices and rows dists = [] for i, r in large_copy.iterrows(): dists.append(haversine(r["lat1"], r["lon1"], r["lat2"], r["lon2"])) large_copy["spherical_dist"] = dists 736 ms ± 29.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [36]: %%timeit # a somewhat optimized way that borrows from functional programming large_copy["spherical_dist"] = large_copy.apply( lambda r: haversine(r["lat1"], r["lon1"], r["lat2"], r["lon2"]), axis=1 ) 387 ms ± 31.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [37]: # using vectorization is about 100-300X faster for this operation %timeit large_copy["spherical_dist"] = haversine(\ large_copy["lat1"], \ large_copy["lon1"], \ large_copy["lat2"], \ large_copy["lon2"] \ ) 2.17 ms ± 188 µs per loop (mean ± std. dev. 
of 7 runs, 1000 loops each) ### Some rules of thumb for iterating¶ Like with indexing, Pandas is flexible about how you want to go through the values of each row. The following are some rules of thumb:
• If you want to apply the same transformation to each value of a column, you should use vectorization.
• If you need conditional vectorization, use boolean indexing
• Also works on strings! e.g. Series.str.replace("remove_word", "")
• You should only use apply for specific functions that can't be broadcast, e.g. pd.to_datetime
• It's hard to think of valid use case examples for iterrows, probably because the only valid use cases are so complicated and situation-specific that they don't make good didactic examples.
### Boolean indexing, what's that again?¶ You probably don't need this reminder, but just in case, boolean indexing is a way of vectorizing indexing where you convert the index into a series of True and False values, and when you apply the index to a column or DataFrame, the values that get selected are the ones where the boolean index was True. In [38]: mondays = large_copy["day_of_week"] == "Monday" Out[38]: 0 True 1 False 2 False 3 False 4 False Name: day_of_week, dtype: bool In [39]: large_copy.loc[mondays, :].head() Out[39]: day_of_week booleans positive_ints mixed_ints lat1 lon1 lat2 lon2 spherical_dist 0 Monday True 42 79 5.745216 -43.094330 15.617175 -24.790654 0.357681 7 Monday False 18 24 3.998923 -35.926048 -15.603738 13.337621 0.916955 14 Monday True 88 -67 -19.885387 -29.692172 -18.729195 4.747911 0.566669 21 Monday False 58 -72 33.824496 -4.376150 -11.513267 17.162267 0.868354 28 Monday True 26 -14 -30.356374 -24.574991 -49.073527 40.504344 0.902022 ## Just-in-time compilation with Numba¶ What if you can't vectorize? Does this mean you're stuck with df.apply()? Not necessarily - if your code can be expressed as a combination of pure Python and Numpy arrays, you should give Numba a try and see if your code can be sped up for you.
Writing Numba is nothing like writing Cython, which is a lot like writing a whole new programming language if you just know Python. Again, as long as your code can be expressed in pure Python and Numpy, it's literally just putting a couple of decorators on top of the existing functions. In [40]: # these are some functions that calculate if a given complex number is part of the Mandelbrot set # and visualizes the resulting fractal from trying every pixel coordinate # here's the unmodified functions and an estimate of their performance # example taken from the Numba docs: http://numba.pydata.org/numba-doc/0.15.1/examples.html def mandel(x, y, max_iters): """ Given the real and imaginary parts of a complex number, determine if it is a candidate for membership in the Mandelbrot set given a fixed number of iterations. """ i = 0 c = complex(x,y) z = 0.0j for i in range(max_iters): z = z*z + c if (z.real*z.real + z.imag*z.imag) >= 4: return i return 255 def create_fractal(min_x, max_x, min_y, max_y, image, iters): height = image.shape[0] width = image.shape[1] pixel_size_x = (max_x - min_x) / width pixel_size_y = (max_y - min_y) / height for x in range(width): real = min_x + x * pixel_size_x for y in range(height): imag = min_y + y * pixel_size_y color = mandel(real, imag, iters) image[y, x] = color return image image = np.zeros((500, 750), dtype=np.uint8) %timeit create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20) plt.imshow(create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)) plt.show() 1.29 s ± 59.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [41]: # here are the exact same functions but with the numba decorators on top # spoiler alert - it's way faster @numba.jit def mandel(x, y, max_iters): """ Given the real and imaginary parts of a complex number, determine if it is a candidate for membership in the Mandelbrot set given a fixed number of iterations.
""" i = 0 c = complex(x,y) z = 0.0j for i in range(max_iters): z = z*z + c if (z.real*z.real + z.imag*z.imag) >= 4: return i return 255 @numba.jit def create_fractal(min_x, max_x, min_y, max_y, image, iters): height = image.shape[0] width = image.shape[1] pixel_size_x = (max_x - min_x) / width pixel_size_y = (max_y - min_y) / height for x in range(width): real = min_x + x * pixel_size_x for y in range(height): imag = min_y + y * pixel_size_y color = mandel(real, imag, iters) image[y, x] = color return image image = np.zeros((500, 750), dtype=np.uint8) %timeit create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20) plt.imshow(create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)) plt.show() 10.9 ms ± 497 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) The biggest offenders of memory problems in Pandas are probably • You have references still attached to variables, which means they don't get garbage collected • You have too many copies of DataFrames lying around • You could stand to do more in-place operations, which don't produce copies of your DataFrame • object dtypes take up a lot more memory than fixed dtypes ### Garbage collection¶ Garbage collection is the process by which Python frees up memory by releasing memory that is no longer useful to program. You can release the objects referenced by memory by removing the reference to that object. This flags the formerly referenced object for memory release. The best way to let garbage collection help you manage memory is to wrap whatever you can into functions. Variables declared in functions are only scoped to the function, so when the function is finished running, they get discarded. On the other hand, global variables (like large_copy) are kept around until the Python process ends (i.e. this notebook kernel is shut down). Even if you del a variable, it just decreases the reference by 1, but if the reference count isn't 0, the object referenced isn't actually deleted. 
That's why global variables can screw up what you think your memory is holding onto. Just for fun, you can peek into what a variable's reference count is by using sys.getrefcount(var_name). In [42]: # foo is a reference foo = [] sys.getrefcount(foo) # this temporarily bumps it up to 2 Out[42]: 2 In [43]: # yet another global reference bumps it up again # at this point, del will not garbage collect foo foo.append(foo) sys.getrefcount(foo) Out[43]: 3 ### object dtypes take up a lot of memory¶ It's those pesky object dtypes again! Not surprisingly, telling Pandas that you need to be able to store literally anything at any time somewhere means that it will pre-allocate a huge amount of initial memory for you for the thing you're storing. This is fine if you're storing something complex, but if you're storing something that could easily be represented more simply, you might want to see if you can change the dtype to something better for your situation. In [44]: large_copy = df_large.copy() In [45]: # there's a very nifty tool for seeing how much memory your DataFrames are using. 
large_copy.info(memory_usage="deep") <class 'pandas.core.frame.DataFrame'> RangeIndex: 7000 entries, 0 to 6999 Data columns (total 8 columns): day_of_week 7000 non-null object booleans 7000 non-null bool positive_ints 7000 non-null int64 mixed_ints 7000 non-null int64 lat1 7000 non-null float64 lon1 7000 non-null float64 lat2 7000 non-null float64 lon2 7000 non-null float64 dtypes: bool(1), float64(4), int64(2), object(1) memory usage: 773.5 KB In [46]: # a common practice when conserving memory is "downcasting" # like, if you know your integers don't need 64-bits, cast them down to 32-bits large_copy["positive_ints"] = large_copy["positive_ints"].astype(np.int32) large_copy.info(memory_usage="deep") <class 'pandas.core.frame.DataFrame'> RangeIndex: 7000 entries, 0 to 6999 Data columns (total 8 columns): day_of_week 7000 non-null object booleans 7000 non-null bool positive_ints 7000 non-null int32 mixed_ints 7000 non-null int64 lat1 7000 non-null float64 lon1 7000 non-null float64 lat2 7000 non-null float64 lon2 7000 non-null float64 dtypes: bool(1), float64(4), int32(1), int64(1), object(1) memory usage: 746.2 KB In [47]: # all str types are stored as object in Pandas because they can be any length # you can downcast string columns to a fixed-length str type # for example, this limits to 10 characters large_copy["day_of_week"] = large_copy["day_of_week"].astype("|S10") large_copy.info(memory_usage="deep") <class 'pandas.core.frame.DataFrame'> RangeIndex: 7000 entries, 0 to 6999 Data columns (total 8 columns): day_of_week 7000 non-null object booleans 7000 non-null bool positive_ints 7000 non-null int32 mixed_ints 7000 non-null int64 lat1 7000 non-null float64 lon1 7000 non-null float64 lat2 7000 non-null float64 lon2 7000 non-null float64 dtypes: bool(1), float64(4), int32(1), int64(1), object(1) memory usage: 636.8 KB In [48]: # but if you also know that there aren't many unique strings # you might want to try casting to category for the highest savings of all 
large_copy["day_of_week"] = large_copy["day_of_week"].astype("category") large_copy.info(memory_usage="deep") <class 'pandas.core.frame.DataFrame'> RangeIndex: 7000 entries, 0 to 6999 Data columns (total 8 columns): day_of_week 7000 non-null category booleans 7000 non-null bool positive_ints 7000 non-null int32 mixed_ints 7000 non-null int64 lat1 7000 non-null float64 lon1 7000 non-null float64 lat2 7000 non-null float64 lon2 7000 non-null float64 dtypes: bool(1), category(1), float64(4), int32(1), int64(1) memory usage: 315.2 KB In [ ]:
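To recap the downcasting workflow end to end, here's a self-contained sketch (using a small hypothetical frame rather than the tutorial's df_large). It also shows pd.to_numeric with the downcast= argument, which picks the smallest integer dtype that fits the data so you don't have to guess between int8/int16/int32 yourself:

```python
import numpy as np
import pandas as pd

# Hypothetical data shaped like the tutorial's example.
df = pd.DataFrame({
    "day_of_week": np.random.choice(
        ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], size=7000),
    "positive_ints": np.random.randint(0, 100, size=7000),
})

before = df.memory_usage(deep=True).sum()

# to_numeric with downcast= chooses the smallest dtype that can hold the values
# (here: uint8, since everything is in [0, 99]).
df["positive_ints"] = pd.to_numeric(df["positive_ints"], downcast="unsigned")

# Low-cardinality strings compress well as category: one small integer code
# per row plus a single copy of each unique string.
df["day_of_week"] = df["day_of_week"].astype("category")

after = df.memory_usage(deep=True).sum()
print(before > after)  # → True
```

Note that to_numeric inspects the actual values, so it's safer than hard-coding np.int32 when you aren't sure of the data's range.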
https://jsuese.scu.edu.cn/html/2018/2/201601413.html
Advanced Engineering Sciences, 2018, Vol. 50, Issue (2): 112-117

An Improved Block Diagonalization Precoding Algorithm

GAO Ming, SUN Chengyue, LIN Shaoxing, WANG Yong
State Key Lab. of Integrated Services Networks, Xidian Univ., Xi'an 710071, China

Abstract: In order to improve the performance of the block diagonalization precoding algorithm, an improved algorithm was proposed for multiuser multiple-input multiple-output (MIMO) downlink systems, which employs QR decomposition based on the Givens transformation. In the block diagonalization precoding algorithm, the first half of the precoding matrix is required to solve the problem of multi-user interference, and the latter one to reduce the interference between the antennas of each user. The core algorithm of the first half matrix is the orthogonalization algorithm, which directly affects the bit error rate (BER) performance of the block diagonalization precoding algorithm. Therefore, the original orthogonalization algorithm was replaced by a QR decomposition algorithm based on the Givens transformation. Using this algorithm to solve the first half of the precoding matrix, an orthogonal basis of the interference matrix's null space can be obtained. By using the Givens transformation, a matrix with better orthogonality can be obtained, so as to further reduce the BER of the system. The simulation results showed that, compared with the traditional block diagonalization algorithms, the complexity of the system was decreased and the BER performance was improved greatly. Compared with the block diagonalization algorithms based on Gram-Schmidt orthogonalization, the BER performance can be improved by 3–5 dB, with a slight increase in algorithm complexity.

Key words: multiple-input multiple-output (MIMO); precoding; block diagonalization; computational complexity; bit error rate (BER)

1 System Model

$y_k = H_k W_k x_k + \sum\limits_{i = 1, i \ne k}^{K} H_i W_i x_i + n_k$ (1)

Fig. 1 Multiuser MIMO precoding system model

2 Traditional Block Diagonalization Algorithm

2.1 Computing the precoding matrix $W^{\rm o}$ of the traditional block diagonalization algorithm

$\widetilde{H}_k = [H_1^{\rm T}, H_2^{\rm T}, \cdots, H_{k-1}^{\rm T}, H_{k+1}^{\rm T}, \cdots, H_K^{\rm T}]^{\rm T}$ (2)

$\widetilde{H}_k = \widetilde{U}_k \begin{bmatrix} \widetilde{\varSigma}_k & 0 \\ 0 & 0 \end{bmatrix} [\widetilde{V}_k^{(1)}, \widetilde{V}_k^{(0)}]^{\rm H}$ (3)

$W_k^{\rm o} = \widetilde{V}_k^{(0)}$ (4)

$W^{\rm o} = [W_1^{\rm o}, W_2^{\rm o}, \cdots, W_K^{\rm o}]$ (5)

2.2 Computing the precoding matrix $W^{\rm g}$ of the traditional block diagonalization algorithm

$H_k^{\rm eff} = H_k W_k^{\rm o} = U_k \varSigma_k [V_k^{(1)}, V_k^{(0)}]^{\rm H}$ (6)

$W_k^{\rm g} = V_k^{(1)}$ (7)

$W^{\rm g} = {\rm diag}\{W_1^{\rm g}, W_2^{\rm g}, \cdots, W_K^{\rm g}\}$ (8)

$W = W^{\rm o} W^{\rm g}$ (9)

$B_k = U_k^{\rm H}$ (10)

3 Improved Block Diagonalization Algorithm

3.1 Computing the precoding matrix $W^{\rm o}$ of the improved block diagonalization algorithm

$H^{\rm H} = QR$ (11)

$H^{\dagger} = H^{\rm H}(HH^{\rm H})^{-1} = QR(R^{\rm H}Q^{\rm H}QR)^{-1} = QRR^{-1}(R^{\rm H})^{-1} = Q(R^{\rm H})^{-1}$ (12)

Let $(R^{\rm H})^{-1} = L = [L_1, L_2, \cdots, L_K]$, where $L_k \in \mathbb{C}^{N_{\rm R} \times N_k}$ is the submatrix of $L \in \mathbb{C}^{N_{\rm R} \times N_{\rm R}}$ corresponding to user $k$. Since $HH^{\dagger} = I$, for any user $k$ we have $\widetilde{H}_k Q L_k = 0$, which shows that the matrix $QL_k$ lies in the null space of the channel interference matrix $\widetilde{H}_k$. It must also be ensured that $QL_k$ is unitary, so that the total transmit power is unchanged.

$QL_k = \overline{Q}_k \overline{R}_k$ (13)

$\widetilde{H}_k Q L_k = \widetilde{H}_k \overline{Q}_k \overline{R}_k = 0$ (14)

$W_k^{\rm o} = \overline{Q}_k$ (15)

$W^{\rm o} = [W_1^{\rm o}, W_2^{\rm o}, \cdots, W_K^{\rm o}]$ (16)

$HW^{\rm o} = {\rm diag}\{H_1 W_1^{\rm o}, H_2 W_2^{\rm o}, \cdots, H_K W_K^{\rm o}\}$ (17)

3.2 Computing the precoding matrix $W^{\rm g}$ of the improved block diagonalization algorithm

$H_k^{\rm eff} = H_k W_k^{\rm o} = U_k \varSigma_k [V_k^{(1)}, V_k^{(0)}]^{\rm H}$ (18)

$W_k^{\rm g} = V_k^{(1)}$ (19)

$W^{\rm g} = {\rm diag}\{W_1^{\rm g}, W_2^{\rm g}, \cdots, W_K^{\rm g}\}$ (20)

$W = W^{\rm o} W^{\rm g}$ (21)

$B_k = U_k^{\rm H}$ (22)

4 Performance Analysis and Simulation

4.1 Complexity Analysis

1) Multiplying an $n \times m$ complex matrix by an $m \times p$ complex matrix requires $8nmp - 2np$ floating-point operations;

2) The singular value decomposition of a matrix requires $24nm^2 + 48n^2 m + 54n^3$ floating-point operations;

3) QR decomposition based on modified Gram-Schmidt orthogonalization requires $8n^2 m$ floating-point operations;

4) QR decomposition based on the Givens transformation requires $24n^2 m - 8n^3$ floating-point operations;

5) Inverting an $n \times n$ real matrix by Gaussian elimination requires $4n^3/3$ floating-point operations.

$\varPsi_{\rm BD} = 24K^2 N_k N_{\rm T}^2 + (56K^2 - 40K + 48)N_k^2 N_{\rm T} - 2K N_k N_{\rm T} + 54(K^3 - 3K^2 + 4K - 1)N_k^3 - 2KN_k^2 = O(K^2 N_k N_{\rm T}^2)$ (23)

$\varPsi_{\rm QR\text{-}GSO\text{-}BD} = 24K N_k N_{\rm T}^2 + (24K^2 + 56K)N_k^2 N_{\rm T} - 4K N_k N_{\rm T} + (4K^3/3 + 8K^2 + 54K)N_k^3 - 2KN_k^2 = O(K N_k N_{\rm T}^2)$ (24)

$\varPsi_{\rm QR\text{-}Givens\text{-}BD} = 24K N_k N_{\rm T}^2 + (24K^2 + 56K)N_k^2 N_{\rm T} - 4K N_k N_{\rm T} + (4K^3/3 + 24K^2 + 48K)N_k^3 - 2KN_k^2 = O(K N_k N_{\rm T}^2)$ (25)

Fig. 2 Comparison of complexity between BD and QR-Givens-BD

Fig. 3 Comparison of complexity between QR-GSO-BD and QR-Givens-BD

4.2 Bit Error Rate Simulation

Fig. 4 Simulation of bit error rate ($N_{\rm T} = 6, N_k = 2, K = 3$)

Fig. 5 Simulation of bit error rate ($N_{\rm T} = 18, N_k = 3, K = 6$)

5 Conclusion

References

[1] Nguyen D, Nguyenle H, Lengoc T.
Block-diagonalization precoding in a multiuser multicell MIMO system: competition and coordination[J]. IEEE Transactions on Wireless Communications, 2014, 13(2): 968-981. DOI:10.1109/TWC.2013.010214.130724
[2] Zarei S, Gerstacker W, Schober R. Low-complexity widely-linear precoding for downlink large-scale MU-MISO systems[J]. IEEE Communications Letters, 2015, 19(4): 665-668. DOI:10.1109/LCOMM.2015.2392751
[3] Weingarten H, Steinberg Y, Shamai S. The capacity region of the Gaussian MIMO broadcast channel[J]. IEEE Transactions on Information Theory, 2006, 52(9): 3936-3964. DOI:10.1109/TIT.2006.880064
[4] Spencer Q H, Swindlehurst A L, Haardt M. Zero-forcing methods for downlink spatial multiplexing in multi-user MIMO channels[J]. IEEE Transactions on Signal Processing, 2004, 52(2): 461-471. DOI:10.1109/TSP.2003.821107
[5] Ling Cong. On the proximity factors of lattice reduction aided decoding[J]. IEEE Transactions on Signal Processing, 2011, 59(6): 2795-2808. DOI:10.1109/TSP.2011.2123889
[6] Hashem M, Khan A, Chung J. Lattice reduction aided with block diagonalization for multiuser MIMO systems[J]. EURASIP Journal on Wireless Communications, 2015, 2015(1): 1-9.
[7] Zu K, Lamare R C, Haardt M. Generalized design of low-complexity block diagonalization type precoding algorithms for multiuser MIMO systems[J]. IEEE Transactions on Communications, 2013, 61(2): 4232-4242.
[8] Chou C C, Wu J M. Low-complexity MIMO precoder design with LDLH channel decomposition[J]. IEEE Transactions on Vehicular Technology, 2011, 60(5): 2368-2372. DOI:10.1109/TVT.2011.2151889
[9] An Jie, Liu Yuanan, Liu Fang. A low complexity block diagonalization precoding method for multiuser MIMO downlink[J]. Journal of Computational Information Systems, 2012, 8(12): 5187-5194.
[10] Wu Jian. Research on lattice reduction based low-complexity precoding technique for multi-user MIMO systems[D]. Chengdu: University of Electronic Science and Technology of China, 2015.
[11] Wu Jian, Fang Shu, Li Lei, et al. QR decomposition and Gram Schmidt orthogonalization based low complexity multi-user MIMO precoding[C]//Proceedings of the 10th International Conference on Wireless Communications, Networking and Mobile Computing. Beijing: IET, 2014: 61–66.
[12] Golub G H, van Loan C F. Matrix Computations[M]. 4th ed. Beijing: Posts & Telecom Press, 2014: 199–210.
[13] Ren Yuwei, Song Yang, Su Xin. Low-complexity channel reconstruction methods based on SVD-ZF precoding in massive 3D-MIMO systems[J]. China Communications, 2015, 12(Supplement 1): 49-57.
[14] Fang Shu, Wu Jian, Lu Chengyi. Simplified QR-decomposition based and lattice reduction-assisted multi-user multiple-input multiple-output precoding scheme[J]. IET Communications, 2016, 10(5): 586-593. DOI:10.1049/iet-com.2015.0643
[15] Ali-Khan M H, Chung J G, Lee M H. Lattice reduction aided with block diagonalization for multiuser MIMO systems[J]. EURASIP Journal on Wireless Communications, 2015, 2015(254): 1186-1195.
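The null-space construction of Eqs. (11)–(15) can be sketched with NumPy. This is a toy illustration with random channels, not the paper's implementation, and note that numpy.linalg.qr uses Householder reflections rather than Givens rotations; the algebraic identities it verifies are the same either way:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N_k, N_T = 3, 2, 6  # users, receive antennas per user, transmit antennas

# Random complex channel H stacking all K users' channels row-wise.
H = (rng.standard_normal((K * N_k, N_T))
     + 1j * rng.standard_normal((K * N_k, N_T)))

# Eq. (11): QR decomposition of H^H.
Q, R = np.linalg.qr(H.conj().T)

# Eq. (12): H^† = Q (R^H)^{-1}; let L = (R^H)^{-1}, partitioned by user.
L = np.linalg.inv(R.conj().T)

for k in range(K):
    L_k = L[:, k * N_k:(k + 1) * N_k]
    # Eq. (13): orthonormalize Q @ L_k so the precoder is unitary
    # (columns orthonormal) and transmit power is preserved.
    Qbar_k, _ = np.linalg.qr(Q @ L_k)
    # H_tilde_k: all users' channels except user k (Eq. (2)).
    H_tilde = np.delete(H, slice(k * N_k, (k + 1) * N_k), axis=0)
    # Eq. (14): the precoder lies in the null space of the interference matrix.
    assert np.allclose(H_tilde @ Qbar_k, 0, atol=1e-10)
```

Because QR only orthonormalizes Q @ L_k without changing its column space, the zero-interference property of Eq. (14) survives the step in Eq. (13).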
https://www.tensorflow.org/agents/tutorials/9_c51_tutorial
# DQN C51/Rainbow ## Introduction This example shows how to train a Categorical DQN (C51) agent on the Cartpole environment using the TF-Agents library. Make sure you take a look through the DQN tutorial as a prerequisite. This tutorial will assume familiarity with the DQN tutorial; it will mainly focus on the differences between DQN and C51. ## Setup If you haven't installed tf-agents yet, run: sudo apt-get update sudo apt-get install -y xvfb ffmpeg freeglut3-dev pip install 'imageio==2.4.0' pip install pyvirtualdisplay pip install tf-agents pip install pyglet from __future__ import absolute_import from __future__ import division from __future__ import print_function import base64 import imageio import IPython import matplotlib import matplotlib.pyplot as plt import PIL.Image import pyvirtualdisplay import tensorflow as tf from tf_agents.agents.categorical_dqn import categorical_dqn_agent from tf_agents.drivers import dynamic_step_driver from tf_agents.environments import suite_gym from tf_agents.environments import tf_py_environment from tf_agents.eval import metric_utils from tf_agents.metrics import tf_metrics from tf_agents.networks import categorical_q_network from tf_agents.policies import random_tf_policy from tf_agents.replay_buffers import tf_uniform_replay_buffer from tf_agents.trajectories import trajectory from tf_agents.utils import common # Set up a virtual display for rendering OpenAI gym environments. 
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start() ## Hyperparameters env_name = "CartPole-v1" # @param {type:"string"} num_iterations = 15000 # @param {type:"integer"} initial_collect_steps = 1000 # @param {type:"integer"} collect_steps_per_iteration = 1 # @param {type:"integer"} replay_buffer_capacity = 100000 # @param {type:"integer"} fc_layer_params = (100,) batch_size = 64 # @param {type:"integer"} learning_rate = 1e-3 # @param {type:"number"} gamma = 0.99 log_interval = 200 # @param {type:"integer"} num_atoms = 51 # @param {type:"integer"} min_q_value = -20 # @param {type:"integer"} max_q_value = 20 # @param {type:"integer"} n_step_update = 2 # @param {type:"integer"} num_eval_episodes = 10 # @param {type:"integer"} eval_interval = 1000 # @param {type:"integer"} ## Environment Load the environment as before, with one for training and one for evaluation. Here we use CartPole-v1 (vs. CartPole-v0 in the DQN tutorial), which has a larger max reward of 500 rather than 200. train_py_env = suite_gym.load(env_name) eval_py_env = suite_gym.load(env_name) train_env = tf_py_environment.TFPyEnvironment(train_py_env) eval_env = tf_py_environment.TFPyEnvironment(eval_py_env) ## Agent C51 is a Q-learning algorithm based on DQN. Like DQN, it can be used on any environment with a discrete action space. The main difference between C51 and DQN is that rather than simply predicting the Q-value for each state-action pair, C51 predicts a histogram model for the probability distribution of the Q-value. By learning the distribution rather than simply the expected value, the algorithm is able to stay more stable during training, leading to improved final performance. This is particularly true in situations with bimodal or even multimodal value distributions, where a single average does not provide an accurate picture. In order to train on probability distributions rather than on values, C51 must perform some complex distributional computations in order to calculate its loss function.
But don't worry, all of this is taken care of for you in TF-Agents! To create a C51 Agent, we first need to create a CategoricalQNetwork. The API of the CategoricalQNetwork is the same as that of the QNetwork, except that there is an additional argument num_atoms. This represents the number of support points in our probability distribution estimates. (The above image includes 10 support points, each represented by a vertical blue bar.) As you can tell from the name, the default number of atoms is 51. categorical_q_net = categorical_q_network.CategoricalQNetwork( train_env.observation_spec(), train_env.action_spec(), num_atoms=num_atoms, fc_layer_params=fc_layer_params) We also need an optimizer to train the network we just created, and a train_step_counter variable to keep track of how many times the network was updated. Note that one other significant difference from vanilla DqnAgent is that we now need to specify min_q_value and max_q_value as arguments. These specify the most extreme values of the support (in other words, the most extreme of the 51 atoms on either side). Make sure to choose these appropriately for your particular environment. Here we use -20 and 20. optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate) train_step_counter = tf.Variable(0) agent = categorical_dqn_agent.CategoricalDqnAgent( train_env.time_step_spec(), train_env.action_spec(), categorical_q_network=categorical_q_net, optimizer=optimizer, min_q_value=min_q_value, max_q_value=max_q_value, n_step_update=n_step_update, td_errors_loss_fn=common.element_wise_squared_loss, gamma=gamma, train_step_counter=train_step_counter) agent.initialize() One last thing to note is that we also added an argument to use n-step updates with $$n$$ = 2. In single-step Q-learning ($$n$$ = 1), we only compute the error between the Q-values at the current time step and the next time step using the single-step return (based on the Bellman optimality equation). 
The single-step return is defined as: $$G_t = R_{t + 1} + \gamma V(s_{t + 1})$$ where we define $$V(s) = \max_a{Q(s, a)}$$. N-step updates involve expanding the standard single-step return function $$n$$ times: $$G_t^n = R_{t + 1} + \gamma R_{t + 2} + \gamma^2 R_{t + 3} + \dots + \gamma^n V(s_{t + n})$$ N-step updates enable the agent to bootstrap from further in the future, and with the right value of $$n$$, this often leads to faster learning. Although C51 and n-step updates are often combined with prioritized replay to form the core of the Rainbow agent, we saw no measurable improvement from implementing prioritized replay. Moreover, we find that when combining our C51 agent with n-step updates alone, our agent performs as well as other Rainbow agents on the sample of Atari environments we've tested. ## Metrics and Evaluation The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows. def compute_avg_return(environment, policy, num_episodes=10): total_return = 0.0 for _ in range(num_episodes): time_step = environment.reset() episode_return = 0.0 while not time_step.is_last(): action_step = policy.action(time_step) time_step = environment.step(action_step.action) episode_return += time_step.reward total_return += episode_return avg_return = total_return / num_episodes return avg_return.numpy()[0] random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(), train_env.action_spec()) compute_avg_return(eval_env, random_policy, num_eval_episodes) # Please also see the metrics module for standard implementations of different # metrics. 25.1 ## Data Collection As in the DQN tutorial, set up the replay buffer and the initial data collection with the random policy. 
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer( data_spec=agent.collect_data_spec, batch_size=train_env.batch_size, max_length=replay_buffer_capacity) def collect_step(environment, policy): time_step = environment.current_time_step() action_step = policy.action(time_step) next_time_step = environment.step(action_step.action) traj = trajectory.from_transition(time_step, action_step, next_time_step) # Add trajectory to the replay buffer replay_buffer.add_batch(traj) for _ in range(initial_collect_steps): collect_step(train_env, random_policy) # This loop is so common in RL, that we provide standard implementations of # these. For more details see the drivers module. # Dataset generates trajectories with shape [BxTx...] where # T = n_step_update + 1. dataset = replay_buffer.as_dataset( num_parallel_calls=3, sample_batch_size=batch_size, num_steps=n_step_update + 1).prefetch(3) iterator = iter(dataset) WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/autograph/impl/api.py:377: ReplayBuffer.get_next (from tf_agents.replay_buffers.replay_buffer) is deprecated and will be removed in a future version. Instructions for updating: ## Training the agent The training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing. The following will take ~7 minutes to run. try: %%time except: pass # (Optional) Optimize by wrapping some of the code in a graph using TF function. agent.train = common.function(agent.train) # Reset the train step agent.train_step_counter.assign(0) # Evaluate the agent's policy once before training. avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) returns = [avg_return] for _ in range(num_iterations): # Collect a few steps using collect_policy and save to the replay buffer.
for _ in range(collect_steps_per_iteration): collect_step(train_env, agent.collect_policy) # Sample a batch of data from the buffer and update the agent's network. experience, unused_info = next(iterator) train_loss = agent.train(experience) step = agent.train_step_counter.numpy() if step % log_interval == 0: print('step = {0}: loss = {1}'.format(step, train_loss.loss)) if step % eval_interval == 0: avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) print('step = {0}: Average Return = {1:.2f}'.format(step, avg_return)) returns.append(avg_return) WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py:1082: calling foldr_v2 (from tensorflow.python.ops.functional_ops) with back_prop=False is deprecated and will be removed in a future version. Instructions for updating: results = tf.foldr(fn, elems, back_prop=False) Use: step = 200: loss = 3.1652722358703613 step = 400: loss = 2.3220953941345215 step = 600: loss = 1.9085898399353027 step = 800: loss = 1.5957838296890259 step = 1000: loss = 1.373847484588623 step = 1000: Average Return = 466.70 step = 1200: loss = 1.2249349355697632 step = 1400: loss = 1.2289665937423706 step = 1600: loss = 1.225049614906311 step = 1800: loss = 1.4009439945220947 step = 2000: loss = 0.8110367059707642 step = 2000: Average Return = 310.30 step = 2200: loss = 0.8426725268363953 step = 2400: loss = 0.9993857145309448 step = 2600: loss = 0.7408146858215332 step = 2800: loss = 1.0472800731658936 step = 3000: loss = 0.8934259414672852 step = 3000: Average Return = 294.80 step = 3200: loss = 0.67853844165802 step = 3400: loss = 0.9168663024902344 step = 3600: loss = 0.6471030712127686 step = 3800: loss = 0.8118085861206055 step = 4000: loss = 0.7178002595901489 step = 4000: Average Return = 339.10 step = 4200: loss = 0.5277565717697144 step = 4400: loss = 0.6562362909317017 step = 4600: loss = 0.6893218755722046 step = 4800: loss = 0.6171445846557617 step = 5000: 
loss = 0.6233919262886047 step = 5000: Average Return = 192.00 step = 5200: loss = 0.5258955359458923 step = 5400: loss = 0.6037764549255371 step = 5600: loss = 0.6617163419723511 step = 5800: loss = 0.45471426844596863 step = 6000: loss = 0.5623942017555237 step = 6000: Average Return = 375.00 step = 6200: loss = 0.5260623097419739 step = 6400: loss = 0.5474383234977722 step = 6600: loss = 0.6723802089691162 step = 6800: loss = 0.4168206453323364 step = 7000: loss = 0.6093295812606812 step = 7000: Average Return = 396.20 step = 7200: loss = 0.5631401538848877 step = 7400: loss = 0.5302916765213013 step = 7600: loss = 0.4411312937736511 step = 7800: loss = 0.5489145517349243 step = 8000: loss = 0.4881543517112732 step = 8000: Average Return = 352.20 step = 8200: loss = 0.5519999265670776 step = 8400: loss = 0.4684922993183136 step = 8600: loss = 0.523332953453064 step = 8800: loss = 0.4230990409851074 step = 9000: loss = 0.5511386394500732 step = 9000: Average Return = 169.30 step = 9200: loss = 0.5994375944137573 step = 9400: loss = 0.3859468698501587 step = 9600: loss = 0.3768221437931061 step = 9800: loss = 0.3608618378639221 step = 10000: loss = 0.45109525322914124 step = 10000: Average Return = 159.40 step = 10200: loss = 0.4834355115890503 step = 10400: loss = 0.3417738378047943 step = 10600: loss = 0.42035162448883057 step = 10800: loss = 0.513039231300354 step = 11000: loss = 0.4203823208808899 step = 11000: Average Return = 329.90 step = 11200: loss = 0.532701849937439 step = 11400: loss = 0.34555840492248535 step = 11600: loss = 0.23318243026733398 step = 11800: loss = 0.373273640871048 step = 12000: loss = 0.4745432734489441 step = 12000: Average Return = 484.00 step = 12200: loss = 0.38893377780914307 step = 12400: loss = 0.45256471633911133 step = 12600: loss = 0.2996901571750641 step = 12800: loss = 0.44166380167007446 step = 13000: loss = 0.34164178371429443 step = 13000: Average Return = 329.70 step = 13200: loss = 0.45920413732528687 step = 13400: 
loss = 0.4424200654029846 step = 13600: loss = 0.48878079652786255 step = 13800: loss = 0.48222893476486206 step = 14000: loss = 0.3798040747642517 step = 14000: Average Return = 433.20 step = 14200: loss = 0.46709728240966797 step = 14400: loss = 0.24153408408164978 step = 14600: loss = 0.28913378715515137 step = 14800: loss = 0.36507582664489746 step = 15000: loss = 0.32009801268577576 step = 15000: Average Return = 141.00 ## Visualization ### Plots We can plot return vs global steps to see the performance of our agent. In Cartpole-v1, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 500, the maximum possible return is also 500. steps = range(0, num_iterations + 1, eval_interval) plt.plot(steps, returns) plt.ylabel('Average Return') plt.xlabel('Step') plt.ylim(top=550) (-14.11999959945679, 550.0) ### Videos It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab. def embed_mp4(filename): """Embeds an mp4 file in the notebook.""" video = open(filename, 'rb').read() b64 = base64.b64encode(video) tag = ''' <video width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4"> Your browser does not support the video tag.
</video>'''.format(b64.decode()) return IPython.display.HTML(tag) The following code visualizes the agent's policy for a few episodes: num_episodes = 3 video_filename = 'imageio.mp4' with imageio.get_writer(video_filename, fps=60) as video: for _ in range(num_episodes): time_step = eval_env.reset() video.append_data(eval_py_env.render()) while not time_step.is_last(): action_step = agent.policy.action(time_step) time_step = eval_env.step(action_step.action) video.append_data(eval_py_env.render()) embed_mp4(video_filename) WARNING:root:IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (400, 600) to (400, 608) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to None (risking incompatibility). You may also see a FFMPEG warning concerning speedloss due to data not being aligned. [swscaler @ 0x56191e3c7880] Warning: data is not aligned! This can lead to a speed loss ` C51 tends to do slightly better than DQN on CartPole-v1, but the difference between the two agents becomes more and more significant in increasingly complex environments. For example, on the full Atari 2600 benchmark, C51 demonstrates a mean score improvement of 126% over DQN after normalizing with respect to a random agent. Additional improvements can be gained by including n-step updates. For a deeper dive into the C51 algorithm, see A Distributional Perspective on Reinforcement Learning (2017). 
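The n-step return defined earlier, $G_t^n = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^n V(s_{t+n})$, can be sketched in a few lines of plain Python. This is a toy illustration with made-up rewards and bootstrap value, not TF-Agents internals:

```python
# Sketch of the n-step return: discounted sum of n rewards, then bootstrap
# from the estimated value of the state n steps ahead.
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    g = 0.0
    for i, r in enumerate(rewards):           # rewards R_{t+1} .. R_{t+n}
        g += (gamma ** i) * r
    n = len(rewards)
    g += (gamma ** n) * bootstrap_value       # gamma^n * V(s_{t+n})
    return g

# n = 2, as in the tutorial's n_step_update: two real rewards, then
# bootstrap from V(s_{t+2}). With gamma=0.99 this gives 1 + 0.99 + 9.801.
print(n_step_return([1.0, 1.0], bootstrap_value=10.0))
```

Larger n leans more on observed rewards and less on the (possibly inaccurate) value estimate, which is why a well-chosen n often speeds up learning.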
https://www.gerad.ca/fr/papers/G-2007-74
Groupe d’études et de recherche en analyse des décisions # On Filtering for Singular Linear Systems with Random Abrupt Changes ## El-Kébir Boukas This paper deals with the class of continuous-time singular linear Markovian jump systems with totally and partially known transition jump rates. The filtering problem of this class of systems is tackled. New sufficient conditions for $\cal{H}_\infty$ filtering are developed. A design procedure for the $\cal{H}_\infty$ filter which guarantees that the dynamics of the filter error will be piecewise regular, impulse-free and stochastically stable with $\gamma$-disturbance rejection is proposed. It is shown that the addressed problem can be solved if the corresponding developed linear matrix inequalities (LMIs) with some constraints are feasible. A numerical example is employed to show the usefulness of the proposed results. , 24 pages
https://chemistry.stackexchange.com/questions/117674/which-is-the-correct-definition-for-metamers-or-metamerism/117680
# Which is the correct definition for metamers (or metamerism)? I have heard two definitions of metamers. 1. Compounds having the same molecular formula but different number of carbon atoms on either side of the functional group. 2. Compounds having the same molecular formula but different alkyl groups on either side of the functional group. Which one of these is correct? Are pentan-2-one and 3-methylbutan-2-one metamers? • How are they different, to begin with? – Ivan Neretin Jul 4 at 12:20 • @IvanNeretin Different alkyl groups might mean a more branched alkyl group, but with same amount of carbon atoms. – user80708 Jul 4 at 12:22 • @IvanNeretin I asked a question in the second part of the question. I thought the answer will change according to which definition is correct. – user80708 Jul 4 at 12:23 • All right, so they are different, after all. Your example fits the second definition but not the first one. Well, hopefully someone will bring in the correct definition from IUPAC. – Ivan Neretin Jul 4 at 12:33 • I am pretty sure that it is the second definition,but I can't find a source for that right now. Lemme come back with a reliable source :) – YUSUF HASAN Jul 4 at 12:37 ## 1 Answer There is no IUPAC definition for metamers or metamerism in the gold book (not even one that makes it obsolete), and Wikipedia doesn't even have a real article about it, see Metamerism. In chemistry, the chemical property of having the same proportion of atomic components in different arrangements (obsolete, replaced with isomer). In organic chemistry, compounds having the same molecular formula but different number of carbon atoms (alkyl groups) on either side of functional group ( i.e., -O-,-S-, -NH-, -C(=O)-) are called metamers and the phenomenon is called metamerism. I cannot find any authoritative source for this, but there is a question discussing it on our platform: What is metamerism? Also related: Are methyl n-propyl ether and methyl iso-propyl ether metamers? 
Either of the two definitions cited in the question matches the general definition of isomers. isomer (DOI: 10.1351/goldbook.I03289) One of several species (or molecular entities) that have the same atomic composition (molecular formula) but different line formulae or different stereochemical formulae and hence different physical and/or chemical properties. More particularly, they match the definition of constitutional isomers. constitutional isomerism (DOI: 10.1351/goldbook.C01285) Isomerism between structures differing in constitution and described by different line formulae, e.g. $$\ce{CH3OCH3}$$ and $$\ce{CH3CH2OH}$$. Metamers/metamerism is an archaic, ambiguous, and deprecated term in chemistry. Don't use it.
2019-08-21 23:12:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6673722863197327, "perplexity": 2240.7004238749373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316549.78/warc/CC-MAIN-20190821220456-20190822002456-00075.warc.gz"}
https://indico.cern.ch/event/218030/contributions/450609/
# EPS HEP 2013 Stockholm 17-24 July 2013 KTH and Stockholm University Campus Europe/Stockholm timezone ## Higher Spin Contributions to Holographic Hydrodynamics 20 Jul 2013, 11:00 23m F3 (KTH Campus) Talk presentation Non-perturbative QFT and String Theory ### Speaker Prof. Dimitri Polyakov (CQUEST, Sogang University) ### Description We calculate the graviton's $\beta$-function in an $AdS$ string-theoretic sigma-model, perturbed by vertex operators for Vasiliev's higher spin gauge fields in $AdS_5$. The result is given by $\beta_{mn}=R_{mn}-8T_{mn}(g,u)$ (with AdS radius set to 1 and the graviton polarized along the $AdS_5$ boundary), with the matter stress-energy tensor $T_{mn}$ given by that of conformal holographic fluid in $d=4$, evaluated at the gauge with the temperature given by $T={1\over{\pi}}$. We show that the gradient expansion in hydrodynamics and the appropriate new transport coefficients are controlled by the higher spin operator algebra. ### Primary author Prof. Dimitri Polyakov (CQUEST, Sogang University) Slides
2020-10-27 01:19:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7661743760108948, "perplexity": 7685.627952037549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107892710.59/warc/CC-MAIN-20201026234045-20201027024045-00191.warc.gz"}
https://community.rti.com/static/documentation/connext-dds/6.0.1/doc/manuals/recording_service/converter/converter_troubleshooting.html
5.4. Troubleshooting When recording a database with Recording Service, there may be topics that have no associated table because they were discovered but were filtered out, either by using the <allow_topic_name_filter> or <deny_topic_name_filter> tags in a Topic Group or by defining Topics (which target specific topic names). While such a topic will be present in the DCPSPublication table in the discovery file, it won't have a corresponding table in the user-data files. If the same topics are not filtered in Converter (by using the same <allow_topic_name_filter> or <deny_topic_name_filter> tags, or Topics), then when Converter starts it will discover the topics without a table, because they are available in the discovery information. When Converter attempts to create a stream reader for these topic(s), a failure message will be printed: ROUTERConnection_createStreamReaderAdapter:(adapter=StorageAdapterPlugin, retcode=0: Function returned NULL) These messages are harmless; they are just informing you that a table could not be found for the topic (in the example above, TopicNotRecorded).
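As an illustration, a filter pair of this kind might look roughly as follows in the XML configuration. Only the <allow_topic_name_filter> and <deny_topic_name_filter> tag names come from the text above; the surrounding <topic_group> element and its attributes are assumptions about the layout, so check the actual configuration schema:

```xml
<!-- Sketch only: the two filter tags are named in the text above; the
     surrounding <topic_group> element is an assumption about the layout. -->
<topic_group name="RecordMost">
  <!-- Match every topic name ... -->
  <allow_topic_name_filter>*</allow_topic_name_filter>
  <!-- ... except topics that should not be recorded or converted. -->
  <deny_topic_name_filter>TopicNotRecorded</deny_topic_name_filter>
</topic_group>
```

Using the same filter pair in both the Recording Service and Converter configurations keeps the discovered topics and the recorded tables in sync, which avoids the stream-reader warning above.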
2022-09-25 01:18:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2057507187128067, "perplexity": 2705.650700823472}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00386.warc.gz"}
http://crypto.stackexchange.com/tags/key-exchange/hot
# Tag Info 23 The SSL and TLS protocols (on which HTTPS is based) are designed in a way that no attacker (neither a passive nor an active one) can read anything of the encrypted part (if the cryptographic assumptions hold - and if you don't use the NONE cipher, which does no encryption). Of course, the attacker can read the negotiation part. But this part will not ... 17 Both RSA and Diffie-Hellman work with modular exponentiation. But they work in a different way: In RSA, there are two exponentiations which invert each other, i.e. we have $e$ and $d$ such that $(x^e)^d \equiv x$ for all $x$. E.g. if $\square^e$ is the encryption, $\square^d$ is the corresponding decryption. To create this pair of $e$ and $d$ (or derive one ... 17 The security of Diffie-Hellman depends upon the group in which DH is used, but not upon which generator is used for this group. See note 3.53 (chapter 3, page 103) of the Handbook of Applied Cryptography. In more detail: for DH, we use a subgroup of size $q$ of the integers modulo $p$ (a big prime) with multiplication as the group operation. $q$ should be ... 13 The really great thing about Diffie-Hellman is how light it is, network-wise: both parties send each other a single message; neither has to wait for the message from the peer before beginning to compute his own message. If you can tolerate something heavier, you can have a look at what @Paŭlo describes; with $n$ participants, it requires $n-1$ messaging ... 13 I assume you're talking about SSL/TLS or a similar protocol. In these protocols there are two reasons to use Diffie-Hellman: Your certificate only supports signing Either it is an RSA certificate restricted to signing, or it uses an algorithm that doesn't support encryption, such as DSA or ECDSA. Forward security - What happens if the server's private key ... 12 Say you encrypt a message with a key $k$. With symmetric encryption (i.e. symmetric ciphers), $k$ must be secret.
The sender and recipient must agree (somehow) on $k$. No-one else can be allowed to find out $k$. Anyone else who finds out $k$ can decrypt all the messages encrypted with $k$. For that reason, symmetric ciphers are often called "secret key" ... 12 The standard Diffie-Hellman key exchange algorithm (or family of algorithms) works in a cyclic group with generator $g$, and relies on $${y_A}^{x_B} = (g^{x_A})^{x_B} = (g^{x_B})^{x_A} = {y_B}^{x_A},$$ where $y_A$ and $y_B$ are publicly transmitted, while $x_A$ and $x_B$ remain private. With three parties, we still have $((g^{x_A})^{x_B})^{x_C} = \dots$ 9 You can do key agreement with asymmetric encryption. Any asymmetric encryption algorithm (post-quantum or not) can be used for key agreement: just choose a random key and encrypt it. Password Authenticated Key Exchange looks harder, because it cannot be applied on just any key exchange or asymmetric encryption scheme. The IPAKE framework can be applied on ... 9 Well, the advantages of static-ephemeral ECDH (and, they apply to DH as well): You get one-way authentication for free. That is, if Bob has Alice's public ECDH key, and uses it to talk to someone, Bob knows that that someone is Alice, without doing any further checks. Now, Alice has no idea who she's talking to; on the other hand, for some scenarios, ... 8 Well, it depends on which protocol is being used. For WEP and WPA, the keys used are derived directly from the pre-shared keys; that means that as long as you know the pre-shared keys, you can immediately decrypt packets as well. On the other hand, WPA2 is somewhat stronger; the two sides exchange nonces to derive the keys. Hence, unless you listen ... 8 What is usually meant by "group encryption" is not what you are after. Group encryption algorithms strive to achieve the following: a given message is encrypted, and may be decrypted only if sufficiently many group members collaborate.
This is not what you seek; what you want is a system such that a given message can be encrypted once and every member of the ... 7 The check $y_b^q = 1 \mod p$ is there to prevent two possible weaknesses: Suppose someone (either because of a programmer error or a deliberate attack) gave us a $y_b$ value of small order. If so, then someone listening in can guess the shared secret you derive. Suppose an attacker gave us a $y_b$ value with an order with a small factor $r$. Then, ... 7 An attack would be trivial if the seed of the RNG was only 32 bits; just enumerate the seeds, and test which matches the intercepted messages. That's easy. However the default Java Random class uses a 48-bit state and seed (which would still be attackable, though $2^{16}$ times less easily), and there are safe subclasses, thus use of Random does not imply ... 7 CRAM-MD5 is a protocol to demonstrate knowledge of a password. In the context of email, it is sometimes used by an email client to authenticate to a POP, IMAP, and/or SMTP server. Basically, the password is used as the key of HMAC-MD5 in a challenge-response protocol. Among the positive things there are to say about CRAM-MD5: The password is not exchanged in ... 7 There is nothing related to passwords in AES. AES uses 128-bit keys, i.e. sequences of 128 bits. How you come up with such a key is out of scope of AES. In some contexts, you want to generate these 128 bits in a deterministic way from a password (and possibly some publicly known contextual data, like a "salt"); this is a job for password hashing. In other ... 7 ElGamal appears to be used instead of Diffie-Hellman (or IES) in OpenPGP mostly because when that format was put together, there were some unresolved intellectual property issues surrounding both RSA and Diffie-Hellman, while ElGamal was unproblematic. This trend for ElGamal seems to stick around, mostly by force of habit, e.g. when switching to ...
7 ECDSA is a digital signature algorithm. ECIES is an integrated encryption scheme. ECDH is a secure key exchange algorithm. First you should understand the purpose of these algorithms. Digital signature algorithms are used to authenticate digital content. A valid digital signature gives a recipient reason to believe that the message was created ... 6 Diffie-Hellman and RSA are distinct and do not use the same "trick". In Diffie-Hellman, commutativity is used: $(g^a)^b = (g^b)^a$. Both Alice and Bob do two modular exponentiations each (Alice chooses $a$, computes $g^a$ and sends it to Bob, receives $g^b$ from Bob, and finally computes $(g^b)^a$). Security relies on the difficulty of discrete logarithm: ... 6 Well, what SSL uses to negotiate the symmetric keys depends on the ciphersuite that both sides agree upon. By far, the most common method is that the client picks a random value (the premaster secret), and encrypts it with the server's RSA public key. However, it is not that unusual for the ciphersuite to specify that the client and the server agree upon a ... 6 Yes, it is. Because of the way public key crypto works, they wouldn't be able to decrypt it. First, realize that something encrypted with a public key can only be decrypted with the corresponding private key (or, depending on the algorithm, vice-versa). So let's say everyone (including the sniffer) has the server's public key. You encrypt something with it, ... 6 Handling keys in general is known as key management. Symmetric keys should be kept secret. Secret key is often used as a synonym for symmetric key. The establishment of symmetric keys can be performed in several ways: (Authenticated) Key Agreement (KA) Sending of an (authenticated) encrypted key, also known as key wrapping Derivation from a base key using ... 6 So your protocol goes like this: Alice generates a key pair $(a_{priv}, a_{pub})$ and sends $a_{pub}$ to Bob.
Bob generates a key pair $(b_{priv}, b_{pub})$ and sends $b_{pub}$ to Alice. Alice generates a message $m$ and sends $Enc(Sign(m, a_{priv}), b_{pub})$ (or $Sign(Enc(m, b_{pub}), a_{priv})$; I'm not sure which of the two is usually used by PGP) to Bob. ... 5 The mathematical problems behind DH and RSA are similar but not known to be directly related. It is still an open question whether an oracle breaking DH can be used to construct another oracle that breaks RSA (or vice versa). It is mostly believed that the two problems are not reducible to each other in poly time. However, the complexity of the fastest known DH ... 5 "Is to be encrypted" is not an ultimate goal. You do not encrypt data for the sake of it; you encrypt data as a way to ensure a given security property, e.g. transmitting some data between two machines, without compromising the data confidentiality with regards to attackers who may spy on the transmission line (or even alter data in transit). If: your ... 5 The standard answer in the research literature is to use information-theoretically secure message authentication codes, typically universal hashing (aka Carter-Wegman authenticators). Of course, you could use computationally-secure message authentication codes, like CMAC or HMAC, if you wanted, though that would partly defeat one of the reasons for using ... 5 Fair exchange protocols aren't new by any means, but there is a lack of layman-friendly material out there, unfortunately. I think the high prevalence of theoretical cryptography in fair exchange protocols may be partially responsible for that. At any rate, here is the basic idea behind a fair exchange protocol. Suppose you have two parties, Alice and Bob, ... 5 Most likely, this 'shared secret' was actually an IKE "preshared key"; it is used to authenticate the two sides (and, for IKEv1, is stirred into the keys).
It actually isn't used as a key (and hence someone learning that key cannot use it to listen in, unless they perform an active man-in-the-middle attack). I suspect the password is the authentication ... 5 One observation is that if we modify the problem so that $M, A, B$ are random invertible matrices, then it is easy to prove the security of the system. In fact, we can prove that the system is informationally secure; that is, for any observed $C_1, C_2$ pair, for any possible value of $K$, there is a unique set of values of $A, B, M$ that yield that $K$ ... 5 First, I am assuming, per https://security.stackexchange.com/questions/29172/what-changed-between-tls-and-dtls, that the client handshake protocol in DTLS is not different from that in TLS over TCP. This seems a safe bet since the client/server encrypted handshake protocol in OpenVPN's UDP implementation is the same as in standard TLS over TCP. I am not ... 5 Let's take your questions in order. Note that I'm a physicist working in quantum cryptography, so my opinion on this might be biased. 1. What about authentication? The classical channel between Alice and Bob has to be authenticated in order for the protocol to work. Formally, this is a prerequisite for quantum key distribution (QKD), and is not part of ... Only top voted, non community-wiki answers of a minimum length are eligible
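The commutativity relation $(g^{x_A})^{x_B} = (g^{x_B})^{x_A}$ that several of these answers rely on can be sketched in a few lines of Python. This is a toy illustration only: the 13-bit Mersenne prime and the base $g$ below are arbitrary choices for the demo and far too small for any real use (real deployments use 2048-bit-plus groups, or elliptic-curve variants).

```python
import secrets

p = 8191  # toy prime (2**13 - 1); vastly too small for real security
g = 17    # arbitrary base chosen for the demo

# Each party picks a private exponent and publishes g^x mod p.
x_a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
x_b = secrets.randbelow(p - 2) + 1  # Bob's private exponent
y_a = pow(g, x_a, p)  # Alice -> Bob
y_b = pow(g, x_b, p)  # Bob -> Alice

# Both sides derive the same shared secret without ever transmitting it.
k_alice = pow(y_b, x_a, p)  # (g^{x_b})^{x_a} mod p
k_bob = pow(y_a, x_b, p)    # (g^{x_a})^{x_b} mod p
assert k_alice == k_bob
```

Only $y_A$ and $y_B$ cross the wire; recovering $x_A$ or $x_B$ from them is the discrete-logarithm problem, which is what makes the real-sized version secure.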
2014-10-20 11:21:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5815644264221191, "perplexity": 1078.2064950980334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442497.30/warc/CC-MAIN-20141017005722-00370-ip-10-16-133-185.ec2.internal.warc.gz"}
https://enacademic.com/dic.nsf/enwiki/295870
# Time constant

In physics and engineering, the time constant, usually denoted by the Greek letter $\tau$ (tau), characterizes the frequency response of a first-order, linear time-invariant (LTI) system. Examples include electrical RC circuits and RL circuits. It is also used to characterize the frequency response of various signal processing systems – magnetic tapes, radio transmitters and receivers, record cutting and replay equipment, and digital filters – which can be modeled or approximated by first-order LTI systems. Other examples include the time constant used in control systems for integral and derivative action controllers, which are often pneumatic rather than electrical. Time constants are also used in "lumped capacity method" analysis of thermal systems, for example when an object is cooled down under the influence of convective cooling. Physically, the constant represents the time it takes the system's step response to reach approximately 63% of its final (asymptotic) value.

Differential equation

First-order LTI systems are characterized by the differential equation

$$\frac{dV}{dt} = -\alpha V$$

where $\alpha$ represents the exponential decay constant and $V$ is a function of time $t$: $V = V(t)$. The time constant is related to the exponential decay constant by

$$\tau = \frac{1}{\alpha}$$

General Solution

The general solution to the differential equation is

$$V(t) = V_0 e^{-\alpha t} = V_0 e^{-t/\tau}$$

where $V_0 = V(t=0)$ is the initial value of $V$.
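As a quick numeric check of the general solution (the values of $V_0$ and $\tau$ below are arbitrary illustrative choices, not from the article), the response falls to about 37% of its initial value after one time constant and below 1% after five:

```python
import math

# Illustrative sketch of V(t) = V0 * exp(-t / tau); V0 and tau are
# arbitrary demo values, not taken from the article.
V0 = 1.0    # initial value V(t=0)
tau = 0.25  # time constant in seconds

def v(t):
    """General solution of dV/dt = -V/tau with V(0) = V0."""
    return V0 * math.exp(-t / tau)

print(round(v(tau) / V0, 3))      # fraction left after one tau: 0.368
print(round(v(5 * tau) / V0, 4))  # fraction left after five taus: 0.0067
```

The two printed fractions are $e^{-1}$ and $e^{-5}$, matching the 37% and 1% rules of thumb used later in the article.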
Control Engineering

A "decaying" exponential function is the exponential function $y = A e^{at}$ in the specific case where $a < 0$. Suppose

$$y = A e^{-at} = A e^{-t/\tau}$$

then

$$\tau = \frac{1}{a}$$

The term $\tau$ (tau) is referred to as the "time constant" and can be used (as in this case) to indicate how rapidly an exponential function decays. Where: $t$ = time (generally $t > 0$ in control engineering); $A$ = initial value (see "specific cases" below).

Specific cases

1. Let $t = 0$; hence $y = A e^{0}$, and so $y = A$.
2. Let $t = \tau$; hence $y = A e^{-1} \approx 0.37 A$.
3. Let $y = f(t) = A e^{-t/\tau}$; then $\lim_{t \to \infty} f(t) = 0$.
4. Let $t = 5\tau$; hence $y = A e^{-5} \approx 0.0067 A$.

After a period of one time constant the function reaches $e^{-1}$, approximately 37%, of its initial value. In case 4, after five time constants the function reaches a value less than 1% of its original. In most cases this 1% threshold is considered sufficient to assume that the function has decayed to zero; hence in control engineering a stable system is mostly assumed to have settled after five time constants.

Examples of time constants

Time constants in electrical circuits

In an RL circuit, the time constant $\tau$ (in seconds) is

$$\tau = \frac{L}{R}$$

where $R$ is the resistance (in ohms) and $L$ is the inductance (in henries). Similarly, in an RC circuit, the time constant $\tau$ (in seconds) is

$$\tau = R C$$

where $R$ is the resistance (in ohms) and $C$ is the capacitance (in farads).

Thermal time constant

See discussion page.

Time constants in neurobiology

In an action potential (or even in a passive spread of signal) in a neuron, the time constant $\tau$ is

$$\tau = r_m c_m$$

where $r_m$ is the resistance across the membrane and $c_m$ is the capacitance of the membrane.
The resistance across the membrane is a function of the number of open ion channels, and the capacitance is a function of the properties of the lipid bilayer. The time constant is used to describe the rise and fall of the action potential, where the rise is described by

$$V(t) = V_{max} (1 - e^{-t/\tau})$$

and the fall is described by

$$V(t) = V_{max} e^{-t/\tau}$$

where voltage is in millivolts, time is in seconds, and $\tau$ is in seconds. $V_{max}$ is defined as the maximum voltage attained in the action potential, where

$$V_{max} = r_m I$$

with $r_m$ the resistance across the membrane and $I$ the current flow. Setting $t = \tau$ for the rise sets $V(t)$ equal to $0.63 V_{max}$; this means that the time constant is the time elapsed after 63% of $V_{max}$ has been reached. Setting $t = \tau$ for the fall sets $V(t)$ equal to $0.37 V_{max}$, meaning that the time constant is the time elapsed after the potential has fallen to 37% of $V_{max}$. The larger a time constant is, the slower the rise or fall of the potential of the neuron. A long time constant can result in temporal summation, or the algebraic summation of repeated potentials.

The half-life $T_{HL}$ of a radioactive isotope is related to the exponential time constant $\tau$ by

$$T_{HL} = \tau \cdot \ln 2$$

Step Response with Non-Zero Initial Conditions

If the motor is initially spinning at a constant speed (expressed by voltage $V_0$), then after one time constant $\tau$ the response has covered 63% of the difference $V_\infty - V_0$. Therefore,

$$V(t) = V_0 + (V_\infty - V_0)(1 - e^{-t/\tau})$$

can be used, where the initial and final voltages, respectively, are $V_0 = V(t=0)$ and $V_\infty = V(t=\infty)$.

Step Response from Rest

From rest, the voltage equation is a simplification of the case with non-zero initial conditions.
With an initial velocity of zero, $V_0$ drops out and the resulting equation is

$$V(t) = V_\infty (1 - e^{-t/\tau})$$

The time constant will remain the same for the same system regardless of the starting conditions. For example, if an electric motor reaches 63% of its final speed from rest in ⅛ of a second, it will also take ⅛ of a second for the motor to reach 63% of its final speed when started with some non-zero initial speed. Simply stated, a system will require a certain amount of time to reach its final, steady-state situation regardless of how close it is to that value when started.

See also

* RC time constant
* Cutoff frequency
* EQ filter
* Exponential decay
* Length constant
* [http://www.sengpielaudio.com/calculator-timeconstant.htm Conversion of time constant τ to cutoff frequency fc and vice versa]

Wikimedia Foundation. 2010.
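As a closing numeric sketch of the two step-response formulas above (the values of $V_0$, $V_\infty$, and $\tau$ are hypothetical, not from the article), both forms cover about 63.2% of the gap to the final value after one time constant:

```python
import math

# Hypothetical motor step-response values, not taken from the article.
V0, Vinf, tau = 2.0, 10.0, 0.5  # initial voltage, final voltage, time constant (s)

def step_from_v0(t):
    """Step response with a non-zero initial value V0."""
    return V0 + (Vinf - V0) * (1.0 - math.exp(-t / tau))

def step_from_rest(t):
    """Step response from rest: the V0 term drops out."""
    return Vinf * (1.0 - math.exp(-t / tau))

# In both cases, one time constant covers 1 - e^-1 of the gap between
# the starting value and the final value.
frac_v0 = (step_from_v0(tau) - V0) / (Vinf - V0)
frac_rest = step_from_rest(tau) / Vinf
print(round(frac_v0, 3), round(frac_rest, 3))  # 0.632 0.632
```

This illustrates the closing point of the article: the time constant is a property of the system, not of the starting conditions.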
2019-10-17 15:12:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 44, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8578864336013794, "perplexity": 3923.704542878027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675409.61/warc/CC-MAIN-20191017145741-20191017173241-00543.warc.gz"}
http://aimpl.org/singularhiggs/3/
## 3. 2k integrable Hitchin equations 1.     Consider the space of “$2k$ integrable Hitchin equations”, with the Higgs fields possibly satisfying some extra conditions (e.g. maybe they commute, maybe they are in the same Borel, etc.). For instance, these can come from Higgs bundles on a higher dimensional space where the Higgs field satisfies the Simpson condition $\phi\wedge\phi=0$. If the higher dimensional space is of general type and we restrict to a curve in this space, we obtain two Higgs fields $(E,\phi_1,\phi_2)$ where $\phi_1$ and $\phi_2$ are required to commute. #### Problem 3.1. Formulate a non-abelian Hodge theorem and mirror symmetry for the moduli spaces of $2k$ integrable Hitchin systems. There is apparently some study of these in the sense of “generalised Hitchin systems”, coming from higher dimensional versions of Yang-Mills. Ward has constructed solution spaces (reference?). •     These $2k$ integrable Hitchin systems are supposed to correspond to 4d $\mathcal{N}=1$ SUSY quantum field theories. #### Problem 3.2. Classify exact solutions to these $2k$ integrable Hitchin equations, and interpret them in terms of 4d $\mathcal{N}=1$ SUSY theories using marked points on the Riemann surface $C$. Cite this as: AimPL: Singular geometry and Higgs bundles in string theory, available at http://aimpl.org/singularhiggs.
2019-05-24 14:55:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7400833368301392, "perplexity": 758.6283050607983}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257660.45/warc/CC-MAIN-20190524144504-20190524170504-00553.warc.gz"}
https://hal-mines-paristech.archives-ouvertes.fr/hal-01114315
# On weak Mellin transforms, second degree characters and the Riemann hypothesis Abstract : We say that a function $f$ defined on $R$ or $Q_p$ has a well-defined weak Mellin transform (or weak zeta integral) if there exists some function $M_f(s)$ such that $Mell(\phi \star f,s) = Mell(\phi,s)M_f(s)$ for all test functions $\phi$ in $C_c^\infty(R^*)$ or $C_c^\infty(Q_p^*)$. We show that if $f$ is a non-degenerate second degree character on $R$ or $Q_p$, as defined by Weil, then the weak Mellin transform of $f$ satisfies a functional equation and vanishes only for $\Re(s) = 1/2$. We then show that if $f$ is a non-degenerate second degree character defined on the adele ring $A_Q$, the same statement is equivalent to the Riemann hypothesis. Various generalizations are provided. Keywords : Document type : Preprints, Working Papers, ... https://hal-mines-paristech.archives-ouvertes.fr/hal-01114315 Contributor : Bruno Sauvalle Submitted on : Monday, February 9, 2015 - 4:18:18 PM Last modification on : Tuesday, July 21, 2020 - 3:18:52 AM Long-term archiving on : Saturday, September 12, 2015 - 9:50:33 AM ### Files WMTSDCRH.pdf Files produced by the author(s) ### Identifiers • HAL Id : hal-01114315, version 1 • ARXIV : 1502.02633 ### Citation Bruno Sauvalle. On weak Mellin transforms, second degree characters and the Riemann hypothesis. 2015. ⟨hal-01114315⟩
2022-01-24 23:27:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.568970799446106, "perplexity": 1448.588043906993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304686.15/warc/CC-MAIN-20220124220008-20220125010008-00480.warc.gz"}