Advanced Algorithms March 22, 2022 Lecture 9: Solving LPs using Multiplicative Weights Notes by Ola Svensson¹ In this lecture we do the following: • We describe the Multiplicative Weight Update (actually Hedge) method. • We then use this method to solve covering LPs. • This is a very fast and simple (i.e., very attract... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 0 | Lecture9 | 0 |
Last lecture we analyzed the case when ε = 1/2. The same proof gives the following. Theorem 1. For any sequence of outcomes, duration T, and expert i ∈ [N], (# of WM mistakes) ≤ 2(1 + ε) · (# of i's mistakes) + O(log(N)/ε). ¹Disclaimer: These notes were written as notes for the lecturer. They have not been peer-reviewed and... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 0 | Lecture9 | 0 |
Proof [Sketch] The proof was done by defining a potential function: for each t = 1, . . . , T + 1, let Φ^(t) = Σ_{i∈[N]} w_i^(t). We now lower bound the "final" potential Φ^(T+1) using the number of mistakes of i. We then upper bound it in terms of our number of mistakes. Lower bound: The weight of expert i goes down by a... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 1 | Lecture9 | 0 |
strategy: randomization is often very good to limit the effect of adversaries. Allowing for randomized strategies leads to the following game with T days and N experts: For t = 1, . . . , T: 1. Each expert i ∈ [N] gives some advice. 2. Allocator picks some distribution p⃗^(t) = (p_1^(t), . . . , p_N^(t)) over the expert... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 1 | Lecture9 | 0 |
advice is m_i^(t). Here, we have generalized the cost to be anything in [−1, 1] instead of only counting the number of mistakes. (A negative number means that it was profitable to follow that expert's advice.) As we play a randomized strategy, the expected cost incurred at day t is thus Σ_{i∈[N]} Pr[aggregator follows ex... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 2 | Lecture9 | 0 |
in the last section. It is parameterized by the "learning parameter" ε > 0: • Initially, assign each expert i a weight w_i^(1) of 1. (All experts are equally trustworthy in the beginning.) At each time t: • Pick the distribution p_i^(t) = w_i^(t)/Φ^(t) where Φ^(t) = Σ_{i∈[N]} w_i^(t). • After observing the cost vector, set w... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 2 | Lecture9 | 0 |
Lower bound on Φ^(T+1): We lower bound the final potential as a function of i's performance: Φ^(T+1) = Σ_{j∈[N]} w_j^(T+1) ≥ w_i^(T+1) = exp(−ε Σ_{t=1}^T m_i^(t)), where the last equality follows from the fact that the initial weight of i was one and every day t his weight was updated by exp(−ε m_i^(t)). Upper bound on Φ^(T+1): We upper... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 3 | Lecture9 | 0 |
· exp(ε² − ε p⃗^(T) · m⃗^(T)) ≤ Φ^(T−1) · exp(ε² − ε p⃗^(T−1) · m⃗^(T−1)) · exp(ε² − ε p⃗^(T) · m⃗^(T)) ≤ . . . ≤ Φ^(1) · exp(ε²T − ε Σ_{t=1}^T p⃗^(t) · m⃗^(t)) = N · exp(ε²T − ε Σ_{t=1}^T p⃗^(t) · m⃗^(t)), where for the equality we used that Φ^(1) = N since each expert was initialized with a weight of 1. The above bounds give ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 3 | Lecture9 | 0 |
Taking (natural) logarithms, −ε Σ_{t=1}^T m_i^(t) ≤ ln(N) + ε²T − ε Σ_{t=1}^T p⃗^(t) · m⃗^(t), and the final result follows by dividing by ε and rearranging the terms. Remark: The above proof may seem messy with all the inequalities. However, it follows a standard "framework": upper and lower bound the potential function. T... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 4 | Lecture9 | 0 |
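The sampling and update rules above translate directly into code; a minimal Python sketch (illustrative only, not from the notes; the cost vectors are supplied by the caller):

```python
import math

def hedge(costs, eps):
    """Run Hedge on a T x N list of cost vectors m[t][i] in [-1, 1].

    Returns the total expected cost  sum_t  p^(t) . m^(t).
    """
    n = len(costs[0])
    w = [1.0] * n                                   # w_i^(1) = 1 for all experts
    total = 0.0
    for m in costs:
        phi = sum(w)                                # Phi^(t)
        p = [wi / phi for wi in w]                  # p_i^(t) = w_i^(t) / Phi^(t)
        total += sum(pi * mi for pi, mi in zip(p, m))
        # multiplicative update: w_i^(t+1) = w_i^(t) * exp(-eps * m_i^(t))
        w = [wi * math.exp(-eps * mi) for wi, mi in zip(w, m)]
    return total
```

On a sequence where one expert always incurs cost 0, the returned total stays within the ε T + ln(N)/ε additive regret promised by the bound above.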
negative. Notice that both the set cover relaxation and the vertex cover relaxation that we saw in class were covering LPs. Let us now introduce an example of a covering LP that we will reuse later: minimize x1 + 2x2 subject to x1 + 3x2 ≥ 2, 2x1 + x2 ≥ 1, 1 ≥ x1, x2 ≥ 0. (1) | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 4 | Lecture9 | 0 |
4.2 General idea The idea of using the Hedge method for linear programming is to associate an expert with each constraint of the LP. In other words, the Hedge method maintains a weight distribution over the set of constraints of the linear program to be solved, and iteratively updates those weights in a multiplicativ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 5 | Lecture9 | 0 |
p2 = 1/2, and we sum all the constraints: p1(x1 + 3x2) + p2(2x1 + x2) ≥ p1 · 2 + p2 · 1 ⇔ 1.5x1 + 2x2 ≥ 1.5, 1 ≥ x1, x2 ≥ 0. By using the oracle, an optimal solution to this reduced problem is x1 = 1, x2 = 0 of cost 1. But is this a feasible solution to our original problem? By checking the constraints of the original LP: 2x... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 5 | Lecture9 | 0 |
4.3 Implementation of the oracle The oracle is given an objective function minimize Σ_{i=1}^n c_i x_i and only one constraint that we can rewrite as Σ_{i=1}^n d_i x_i ≥ b (which is the weighted sum of all constraints). We also have 1 ≥ x_i ≥ 0 for all i and c_i, d_i ≥ 0 for all i since it is a covering problem. The idea is to assign the maximum value... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 6 | Lecture9 | 0 |
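The paragraph is truncated, but the greedy it describes is the standard one for a single-constraint covering LP: raise the variables with the smallest cost-to-coverage ratio c_i/d_i first. A minimal sketch (function name and structure are my own):

```python
def oracle(c, d, b):
    """Minimize sum c[i]*x[i]  s.t.  sum d[i]*x[i] >= b,  0 <= x[i] <= 1.

    Greedy: raise the variables with the best coverage per unit cost first.
    Returns an optimal fractional solution x, or None if infeasible.
    """
    n = len(c)
    if sum(d) < b:
        return None                          # even x = (1, ..., 1) is infeasible
    x = [0.0] * n
    need = b
    # cheapest cost per unit of coverage first (d[i] = 0 covers nothing)
    for i in sorted(range(n), key=lambda i: c[i] / d[i] if d[i] > 0 else float("inf")):
        if need <= 0 or d[i] == 0:
            break
        x[i] = min(1.0, need / d[i])         # take only as much of x_i as still needed
        need -= d[i] * x[i]
    return x
```

On the averaged constraint of the running example (c = (1, 2), d = (1.5, 2), b = 1.5), this returns x1 = 1, x2 = 0 of cost 1, as claimed in the text.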
, we wish to increase the weight of unsatisfied constraints and decrease the weight of satisfied constraints (in a smooth manner depending on the size of the violation or the slack). The Hedge algorithm for covering LPs thus becomes: • Assign each constraint i a weight w_i^(1) initialized to 1. At each time t: • Pick th... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 6 | Lecture9 | 0 |
Analysis. Since the analysis for the Hedge algorithm works with an adversarial construction of the cost vectors, it certainly holds for the cost vectors we constructed. Let ρ = max_{1≤i≤m} max(b_i, A_i 1⃗ − b_i) be (an upper bound on) the width of our constructed cost vectors. By Corollary 3, we thus have for ε ∈ [0, 1], T ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 7 | Lecture9 | 0 |
set cover we have that ρ ≤ n and therefore it is sufficient to set T = (4n² ln m)/ε² (this can in fact be improved by a better analysis to ≈ n ln m/ε²). This gives a solution x̄ that satisfies Σ_{e∈S} x̄_e ≥ 1 − 2ε for every set S, and the cost of x̄ is at most that of an optimal LP solution. We can obtain a feasible (approxim... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Lecture9.pdf | 7 | Lecture9 | 0 |
CS 471 – Fall 2021 Lec. 20 - Slide 1 Emerging Memory II Fall 2021 Prof. Babak Falsafi https://parsa.epfl.ch/course-info/cs471/ Adapted from slides originally developed by Profs. Hill, Hoe, Falsafi and Wenisch of CMU, EPFL, Michigan, Wisconsin | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 0 | 20_EmergingMemory2 | 0 |
Where are we? • Storage class memory (SCM) § Crash consistency w/o logging • Computing in memory § Analytics – Mondrian Data Engine § DNNs – ReRAM • Final Exam – Dec. 16th § All course material in-bounds • Poster Session – Dec. 23rd § Send A1-size PDF to instructors by Dec. 20th @ 8... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 1 | 20_EmergingMemory2 | 0 |
Memory Hierarchy (figure: today's hierarchy Regs → Caches (SRAM) → Main memory (DRAM) → SSD → Hard disk vs. the soon-to-be-mainstream one Regs → Caches (SRAM) → 3D Caches (DRAM) → Main memory (DRAM) → Storage-Class Memory (SCM) → SSD → Hard disk; faster toward the top, bigger toward the bottom) | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 2 | 20_EmergingMemory2 | 0 |
Recall: Horizontal vs. Vertical Organization • Horizontal integration: user/OS must hide SCM latency • Vertical integration (DRAM-cache): SCM latency only exposed if data is not in DRAM. We consider... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 3 | 20_EmergingMemory2 | 0 |
What to do with stacked DRAM logic? • Recall: stacked DRAM has a memory stack, plus a logic "xPU" layer § How to use it effectively? | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 4 | 20_EmergingMemory2 | 0 |
The Mondrian Data Engine Mario Drumond, Alexandros Daglis, Nooshin Mirzadeh, Dmitrii Ustiugov, Javier Picorel, Babak Falsafi, Boris Grot, Dionisios Pnevmatikatos | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 5 | 20_EmergingMemory2 | 0 |
Data analytics take center stage • User data grows exponentially § Need to monetize data • In-memory data operators § Highly parallel § Low computational requirements • High energy footprints due to data movement. Data movement dominates energy & limits performance | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 6 | 20_EmergingMemory2 | 0 |
Minimizing movement costs • Move computations near memory § Addresses movement bottlenecks • Exposes high internal DRAM BW • Minimizes data movement energy cost • But DRAM is inflexible and cost-sensitive § Can't meddle with internal structure. Must adapt compute to DRAM to be c... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 7 | 20_EmergingMemory2 | 0 |
In-memory data operators • Massive in-memory datasets § Little locality • Moderate compute requirements • Highly data parallel. Move lots of independent data to perform little computation | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 8 | 20_EmergingMemory2 | 0 |
Cost of moving data: a DRAM memory access costs ≈ 640 pJ vs. ≈ 0.1 pJ for a fixed-point add. Data access is far more expensive than an arithmetic operation | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 9 | 20_EmergingMemory2 | 0 |
DRAM BW bottleneck: 24 GB/s off-chip BW vs. 100's of GB/s internally (across memory arrays and row buffers). Internal DRAM BW presents a big opportunity | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 10 | 20_EmergingMemory2 | 0 |
Logic in DRAM is hard! • Fabrication processes not compatible § DRAM is optimized for density § Logic is irregular, wire-intensive • 90's in-memory logic failed § DRAM is cost-sensitive § Bigger problem today. Early proposals too disruptive to DRAM... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 11 | 20_EmergingMemory2 | 0 |
Near-Memory Processing (NMP) • 3D logic/DRAM stack § Exposes internal BW to processing elements (640 pJ, 24 GB/s off-chip vs. 150 pJ, 128 GB/s in-stack) § But constrains the logic layer's area/power envelope. Exploit the BW without data movement | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 12 | 20_EmergingMemory2 | 0 |
How to best exploit bandwidth? • DRAM internals optimized for density • DRAM accesses must activate rows § Single access activates KBs of data § Activations dominate access latency & energy • Can't utilize internal BW with random access § Need to maintain many open rows § Complex b... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 13 | 20_EmergingMemory2 | 0 |
NMP HW-algorithm co-design • Algorithms: Must have sequential access § Even if we perform more work • Hardware: Must leverage data parallelism § On a tight area/power budget. HW-algorithm co-design necessary to get the most out of NMP | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 14 | 20_EmergingMemory2 | 0 |
Example data operator: Join • Iterates over a pair of tables to find matching keys • Major operation in data analytics. Q: SELECT ... FROM A, B WHERE A.Key = B.Key (figure: joining example tables A and B yields the matching keys A, C, E) | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 15 | 20_EmergingMemory2 | 0 |
Baseline: CPU Hash Join • Best performing algorithm in CPU-centric systems • Performed in two phases: Partition & Probe 1. Partition generates cache-sized partitions 2. Probe builds and queries cache-resident hash tables. Opti... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 16 | 20_EmergingMemory2 | 0 |
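Ignoring the cache-conscious partitioning, the build-and-probe core of a hash join is only a few lines; a toy Python sketch (not the tuned CPU implementation the slide refers to):

```python
def hash_join(a, b):
    """Join lists of (key, payload) tuples: build a hash table on a, probe with b."""
    table = {}
    for key, pa in a:                                  # build phase
        table.setdefault(key, []).append(pa)
    # probe phase: each lookup is a random access into the hash table
    return [(key, pa, pb) for key, pb in b for pa in table.get(key, [])]
```

The probe loop is exactly the random-access pattern that the following slides show under-utilizing NMP bandwidth.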
NMP Hash Join (figure: NMP logic probing hash table H(x) in DRAM). Goal: maximum MLP • Limited by bookkeeping logic | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 17 | 20_EmergingMemory2 | 0 |
NMP Hash Join (figure: probes &C, &F land in different rows). Minimum row buffer utilization | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 18 | 20_EmergingMemory2 | 0 |
NMP Hash Join (figure: probes &A, &D). Random accesses severely under-utilize NMP BW | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 19 | 20_EmergingMemory2 | 0 |
Eliminate random access? • Insight: Use Sort Join § Performs mostly sequential accesses § But has higher algorithmic complexity • Trade algorithmic complexity for a desirable access pattern: O(n) random accesses vs. O(n log n) sequential accesses. Utiliz... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 20 | 20_EmergingMemory2 | 0 |
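The trade described above is the classic sort-merge join; a minimal sketch (distinct keys assumed for brevity):

```python
def sort_merge_join(a, b):
    """Join two lists of keys by sorting both, then one sequential merge pass."""
    a, b = sorted(a), sorted(b)          # O(n log n) work, but sequential access
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:
            out.append(a[i])             # matching key (duplicates not expanded)
            i += 1
            j += 1
    return out
```

After sorting, both inputs are consumed strictly left to right, which is the streaming pattern the next slides exploit with simple stream buffers.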
NMP Sort Join: Sequential access (figure: stream buffers over sorted runs ACEG and BDFH). Drop OoO logic • Reduces area/power of NMP. Add stream buffer • Simple logic utilizes BW | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 21 | 20_EmergingMemory2 | 0 |
NMP Sort Join: Sequential access (figure: streaming &A+0, &A+1, &B+0, &B+1). Good row buffer utilization | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 23 | 20_EmergingMemory2 | 0 |
NMP Sort Join: Sequential access. Sequential access moves the bottleneck to compute | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 25 | 20_EmergingMemory2 | 0 |
NMP Sort Join: Compute. Use the area/power budget for SIMD • General purpose SIMD keeps up with internal BW | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 26 | 20_EmergingMemory2 | 0 |
• Big data operators: § Scan § Join § Group By § Sort • Memory subsystem: 4 HMC stacks § 20 GB/s external BW § 128 GB/s internal BW • Simulated systems: § CPU-centric: ARM Cortex-A57, 16 cores, 3-wide, 128-entry ROB @ 2GHz § NMP: Mobile ARM core, 16 cores per stack, 3-wide, 48-entry ROB @... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 27 | 20_EmergingMemory2 | 0 |
Evaluation: performance (figure: speedup on a log scale, 1–100x, of NMP and Mondrian over the CPU baseline for Scan, Sort, Group by, Join) | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 28 | 20_EmergingMemory2 | 0 |
Evaluation: performance. Mondrian uses superior internal BW | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 29 | 20_EmergingMemory2 | 0 |
Evaluation: performance. NMP can't utilize memory BW with random accesses | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 30 | 20_EmergingMemory2 | 0 |
Evaluation: performance. Mondrian BW utilization compensates for the extra log(n) work | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 31 | 20_EmergingMemory2 | 0 |
Summary 1. Challenges w/ heterogeneous hierarchies 2. TB-scale address spaces 3. Data movement • Moving near memory improves performance § But need to conform to DRAM constraints • Mondrian introduces algorithm-hardware co-design § Adapt algorithms/HW to DRAM constraints § Sequential... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/20_EmergingMemory2.pdf | 32 | 20_EmergingMemory2 | 0 |
Advanced Probability and Applications EPFL - Spring Semester 2022-2023 Solutions to Homework 3 Exercise 1. a) In this case, P(1)({X1 ∈ B1, X2 ∈ B2}) = μ(B1) · μ(B2) = P(1)({X1 ∈ B1}) · P(1)({X2 ∈ B2}). The random variables X1 and X2 are therefore independent and identically distributed (i.i.d.). b) In this case, P(2)({X1 ∈ B... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/sol3_1.pdf | 0 | sol3_1 | 0 |
1) = ∫_R dx1 (1/√(2π)) exp(−x1²/2) · (1/√(2π)) exp(−(t − x1)²/2) = (1/√(2π)) exp(−t²/2) ∫_R dx1 (1/√(2π)) exp(t x1 − x1²) = (1/√(2π)) exp(−t²/2) ∫_R dx1 (1/√(2π)) exp(−(x1 − t/2)²) exp(t²/4) = (1/√(4π)) exp(−t²/4) ∫_R dx1 (1/√π) exp(−(x1 − t/2)²). The integral on the right-hand side is equal to 1, as the integrand is the pdf of a N(t/2, 1/2) rando... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/sol3_1.pdf | 0 | sol3_1 | 0 |
d) The only integer values of a for which E(Y) and Var(Y) are well-defined are non-negative values. For a = 0, we have Y = X⁰ = 1, so E(Y) = 1 and Var(Y) = 0. For a ≥ 1, we obtain by integration by parts: E(Y) = E(X^a) = ∫_0^{+∞} x^a λ exp(−λx) dx = ∫_0^{+∞} (a/λ) x^{a−1} λ exp(−λx) dx = . . . = (a!/λ^a) · 1, so E(Y²) = E(X^{2a}) = ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/sol3_1.pdf | 1 | sol3_1 | 0 |
first question is yes: take X such that P({X = +1}) = P({X = −1}) = 1/2 (verifying X ∼ −X, Var(X) = 1 and Cov(X, Y) = 1/2). e) The answer to the first question is no, but the one to the second is yes: consider Xn such that P({Xn = n}) = P({Xn = −n}) = 1/(2n²) and P({Xn = 0}) = 1 − 1/n². Then Xn ∼ −Xn and Var(Xn) = 1 for ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/sol3_1.pdf | 1 | sol3_1 | 0 |
Name: CS-471 Midterm Exam November 2, 2017 1 CS-471 Midterm Exam Solutions Please answer all questions. You have 105 minutes in total starting from now. Please write your name at the top of each page. Please write clearly and concisely. Show all work for full credit. Total number of pages: 10 Problem Points Short Answe... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 0 | ama-midterm-2017-sol | 0 |
Short Answers (30 points) 1.a. Name two reasons why MapReduce is built on top of a distributed file system. (4 points) • Fault tolerance • To avoid excessive data movement (ship computation to data) 1.b. The following graph compares the area and energy consumption of duplica... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 1 | ama-midterm-2017-sol | 0 |
1.c. Does a programmer have to reason about the CPU's memory consistency model (i) when writing assembly? (ii) when using a synchronization library (e.g., POSIX)? Please justify your answers. (4 points) (i) The programmer needs to reason about memory consistency whenever the... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 2 | ama-midterm-2017-sol | 0 |
1.e. The canonical performance metric of transactional systems is transactions per second. (i) Why is that metric not suitable for cycle-accurate simulation? (ii) Is IPC a good representative of transactions/second? Explain. (6 points) (i) Transactions per second is a coarse... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 3 | ama-midterm-2017-sol | 0 |
Coherence (25 points) 2. Alice is a computer architect and has taken up the task of designing a coherence protocol for a special multiprocessor system that features a shared L2 cache built of a new dense technology. The implication of this new technology is that writes to th... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 4 | ama-midterm-2017-sol | 0 |
(figure: MOSI state-transition diagram; processor events PrRd/PrWr/PrEv and bus events BusRd/BusRdX/BusInv label the transitions, with actions BusWB and BusCache) | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 5 | ama-midterm-2017-sol | 0 |
Memory Ordering (25 points) 3. SPARC's default memory consistency model is TSO (total store order). In TSO, all stores must appear to have executed atomically and in program order. Load instructions can bypass older store instructions. SPARC also provides special fence instr... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 6 | ama-midterm-2017-sol | 0 |
a) While fence speculation is in progress, all loads can proceed, marking the corresponding cache block in the L1 cache or entry in the SB as speculatively read. b) Speculation completes when there are no outstanding stores in the store buffer. All speculative bits in the L1... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 7 | ama-midterm-2017-sol | 0 |
Synchronization (20 points) 5. Bob ran two different multi-threaded programs using similar locking mechanisms on two different multiprocessor systems, a single-socket (uniform memory access) and a multi-socket. One program is a traditional multithreaded OLTP workload and the... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 8 | ama-midterm-2017-sol | 0 |
5b. Convince Bob about the validity of your reasoning, by explaining what happens at each of the points A, B, C, D, E, and F. (12 points) A & B: The scale-out workload exhibits low contention and performance increases as the number of threads within a socket increases. When... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/ama-midterm-2017-sol.pdf | 9 | ama-midterm-2017-sol | 0 |
Boolean Methods for Multi-level Logic Synthesis Giovanni De Micheli Integrated Systems Laboratory This presentation can be used for non-commercial purposes as long as this note and the copyright footers are not removed © Giovanni De Micheli – All rights reserved | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 0 | DT12 (ml bool) | 0 |
(c) Giovanni De Micheli. Module 1 • Objectives – What are Boolean methods – How to compute don't care conditions · Controllability · Observability – Boolean transformations | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 1 | DT12 (ml bool) | 0 |
Boolean methods • Exploit Boolean properties of logic functions • Use don't care conditions • More complex algorithms – Potentially better solutions – Harder to reverse the transformations • Used within most synthesis tools | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 2 | DT12 (ml bool) | 0 |
External don't care conditions • Controllability don't care set CDCin – Input patterns never produced by the environment at the network's input • Observability don't care set ODCout – Input patterns representing conditions when an output is not observed by the environment – Relative to each output ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 3 | DT12 (ml bool) | 0 |
Example | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 4 | DT12 (ml bool) | 0 |
Overall external don't care set • Sum the controllability don't cares to each entry of the observability don't care set vector | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 5 | DT12 (ml bool) | 0 |
Internal don't care conditions | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 6 | DT12 (ml bool) | 0 |
Internal don't care conditions • Induced by the network structure • Controllability don't care conditions – Patterns never produced at the inputs of a sub-network • Observability don't care conditions – Patterns such that the outputs of a sub-network are not observed | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 7 | DT12 (ml bool) | 0 |
Example of optimization with don't cares: x = a' + b, y = abx + a'cx becomes x = a' + b, y = ax + a'c • CDC of y includes ab'x + a'x' • Minimize fy to obtain: gy = ax + a'c | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 8 | DT12 (ml bool) | 0 |
Satisfiability don't care conditions • Invariant of the network: x = fx, hence (x ≠ fx) ⊆ SDC • SDC = Σ_{all internal nodes} (x ⊕ fx) • Useful to compute controllability don't cares | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 9 | DT12 (ml bool) | 0 |
CDC Computation • Method 1: Network traversal algorithm – Consider initial CDC = CDCin at the primary inputs – Consider different cutsets moving through the network from inputs to outputs – As the cutset moves forward · Consider the SDC contribution of the newly considered block · Remove unneeded varia... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 10 | DT12 (ml bool) | 0 |
Example (figure: network over inputs {a, b, c, d, e} with internal signals x1...x4 and outputs z1, z2; successive cutsets {d,e}, {b,c}, {b,a,x4}, {x1,a,x4}, {x1,x2,x3,x4}, {d,b,c}) | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 11 | DT12 (ml bool) | 0 |
Example • Assume CDCin = x1'x4' • Select vertex va – Contribution of va to CDCcut = a ⊕ (x2 ⊕ x3) – Updated CDCcut = x1'x4' + a ⊕ (x2 ⊕ x3) – Drop variables D = {x2, x3} by consensus: CDCcut = x1'x4' • Select vertex vb – Contribution to CDCcut: b ⊕ (x1 + a) – Updated CDCcut = x1'x4' + b ⊕ (... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 12 | DT12 (ml bool) | 0 |
CDC Computation: CONTROLLABILITY(Gn(V, E), CDCin) { C = V_I; CDCcut = CDCin; foreach vertex vx ∈ V in topological order { C = C ∪ {vx}; CDCcut = CDCcut + (fx ⊕ x); D = {v ∈ C s.t. all direct successors of v are in C}; foreach vertex vy ∈ D: CDCcut = C_y(CDCcut); C = C − D; }; CDCout = CDCcut; } | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 13 | DT12 (ml bool) | 0 |
CDC Computation • Method 2: range or image computation • Consider the function f expressing the behavior of the cutset variables in terms of primary inputs • CDCcut is the complement of the range of f when CDCin = 0 • CDCcut is the complement of the image of (CDCin)' under f • The range and image ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 14 | DT12 (ml bool) | 0 |
Example (figure: sub-network computing d = bc and e = b + c) • range(f) = d · range((b + c)|_{d=bc=1}) + d' · range((b + c)|_{d=bc=0}) • When d = 1, then bc = 1 → b + c = 1 is a TAUTOLOGY • If I choose 1 as the top entry in the output vector: the bottom entry is also 1 • When d = 0, then bc = 0 → b + c ∈ {0, 1} • If I choose 0 as the top entry in the out... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 15 | DT12 (ml bool) | 0 |
Example: f = (f1, f2) where f1 = (x1 + a)(x4 + a) = x1x4 + a and f2 = (x1 + a) + (x4 + a) = x1 + x4 + a | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 16 | DT12 (ml bool) | 0 |
Example (continued): range(f) = d · range(f2|_{x1x4+a=1}) + d' · range(f2|_{x1x4+a=0}) = d · range(x1 + x4 + a|_{x1x4+a=1}) + d' · range(x1 + x4 + a|_{x1x4+a=0}) = d · range(1) + d' · range(a'(x1 ⊕ x4)) = d·e + d'·(e + e') = e + d' • CDCout = (e + d')' = de' = z1z2' | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 17 | DT12 (ml bool) | 0 |
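For a cut this small, the range computation can be checked by plain enumeration over the three primary inputs; a quick sketch (illustration only):

```python
from itertools import product

# cut function of the example: (d, e) = f(x1, x4, a) = (x1 x4 + a, x1 + x4 + a)
f = lambda x1, x4, a: ((x1 & x4) | a, x1 | x4 | a)

# range = all (d, e) patterns the cut can produce; CDC = the unreachable ones
rng = {f(*bits) for bits in product([0, 1], repeat=3)}
cdc_out = {de for de in product([0, 1], repeat=2) if de not in rng}
print(cdc_out)  # {(1, 0)}: the only unreachable pattern is d=1, e=0, i.e. CDCout = de'
```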
Example with CDCin = x1'x4': f = (f1, f2) where f1 = (x1 + a)(x4 + a) = x1x4 + a and f2 = (x1 + a) + (x4 + a) = x1 + x4 + a | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 18 | DT12 (ml bool) | 0 |
image(f) = d · image(f2|_{x1x4+a=1}) + d' · image(f2|_{x1x4+a=0}) = d · image(x1 + x4 + a|_{x1x4+a=1}) + d' · image(x1 + x4 + a|_{x1x4+a=0}) = d · image(1) + d' · image(1) = de + d'e = e • CDCout = e' = z2' | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 19 | DT12 (ml bool) | 0 |
Observability analysis • Complementary to controllability – Analyze the network from outputs to inputs • More complex because the network has several outputs and observability depends on the output • Observability may be understood in terms of perturbations – If you flip the polarity of a signal at net x, and... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 20 | DT12 (ml bool) | 0 |
Observability don't care conditions • Conditions under which a change in polarity of a signal x is not perceived at the output • If there is an explicit representation of the function, the ODC is the complement of the Boolean difference: ODC = (∂f/∂x)' • Often, the terminal behavior is describ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 21 | DT12 (ml bool) | 0 |
Tree-network traversal • Consider the network from outputs to inputs • At the root – ODCout is given – It may be empty • At internal nodes – Local function y = fy(x) – ODCx = (∂fy/∂x)' + ODCy • The observability don't care set has two components – Observability of the local function and observability of the n... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 22 | DT12 (ml bool) | 0 |
Example: e = b + c, b = x1 + a1, c = x4 + a2 • Assume ODCout = ODCe = 0 • ODCb = (∂fe/∂b)' = ((b + c)|_{b=1} ⊕ (b + c)|_{b=0})' = c • ODCc = (∂fe/∂c)' = b • ODCx1 = ODCb + (∂fb/∂x1)' = c + a1 | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 23 | DT12 (ml bool) | 0 |
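The Boolean differences above can be verified by brute force; a small sketch (illustration only) that returns the assignments of the remaining variables where flipping x is not observed, i.e. where f|_{x=1} = f|_{x=0}:

```python
from itertools import product

def odc(f, var, names):
    """Return the minterms (as dicts over the other variables) where flipping
    `var` does not change f: the complement of the Boolean difference."""
    others = [v for v in names if v != var]
    dc = []
    for bits in product([0, 1], repeat=len(others)):
        env = dict(zip(others, bits))
        if f(**env, **{var: 1}) == f(**env, **{var: 0}):
            dc.append(env)
    return dc

# e = b + c from the slide: ODC_b should be exactly the condition c = 1
fe = lambda b, c: b | c
print(odc(fe, "b", ["b", "c"]))  # [{'c': 1}]
```

This matches the slide's ODCb = c and, symmetrically, ODCc = b.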
(c) Giovanni De Micheli 25 Non-tree network traversal NGeneral networks have forks and fanout reconvergence NFor each fork point, the contribution to the ODC depends on both paths NNetwork traversal cannot be applied in a straightforward way NMore elaborate analysis is needed | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 24 | DT12 (ml bool) | 0 |
(c) Giovanni De Micheli 26 Two-way fork NCompute ODC sets associated with edges NRecombine ODCs at fork point NTheorem: L ODCx = ODCx,y|x=x’ ODCx,z L ODCx = ODCx,z|x=x’ ODCx,y NMulti-way forks can be reduced to a sequence of two-way forks Å Å | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 25 | DT12 (ml bool) | 0 |
(c) Giovanni De Micheli 27 Example a b c d e x1 x4 x3 x2 z1 z2 a e d c b x1 x3 x2 z1 x4 z2 ODCc =( ) b’ b ; ODCb =( ) c’ c ; ODCa,b = ( ) a’x4’ + x1 a + x4 + x1 ( ) c’ + x1 c + x1 = ODCa,c =( ) b’ + x4 b + x4 ( ) a’x1’ + x4 a + x1 + x4 = ODCa = ( ) x1x4 x1 + x4 = ODCa,c ( ) a’x1’ + x4 a + x1 + x4 ODCa,b|a=a’ ( ) a x4’ ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/DT12 (ml bool).pdf | 26 | DT12 (ml bool) | 0 |
Don't care computation summary
- Controllability don't cares are derived by image computation
  - Recursive algorithms and data structures are applied
- Observability don't cares are derived by backward traversal
  - Exact and approximate computation
  - Approximate methods compute don't care subsets
Transformations with don't cares
- Boolean simplification
  - Generate the local DC set for the local functions
  - Use a heuristic minimizer (e.g., Espresso)
  - Minimize the number of literals
- Boolean substitution:
  - Simplify a function by adding one (or more) inputs
  - Equivalent to simplification with global ...
Example – Boolean substitution
- Substitute q = a + cd into fh = a + bcd + e
  - Obtain fh = a + bq + e
- Method
  - Compute the SDC of q: q ⊕ (a + cd) = q'a + q'cd + qa'(cd)'
  - Simplify fh = a + bcd + e with DC = q'a + q'cd + qa'(cd)'
  - Obtain fh = a + bq + e
- Result
  - The simplified function has one fewer literal
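A quick exhaustive check (a sketch, using Python bitwise operators for the Boolean connectives) that the substitution preserves the network's behavior:

```python
from itertools import product

# Check that substituting q = a + c·d turns fh = a + b·c·d + e into a + b·q + e
# without changing the function computed by the network.
for a, b, c, d, e in product([0, 1], repeat=5):
    q = a | (c & d)                  # q is implemented elsewhere in the network
    f_orig  = a | (b & c & d) | e
    f_subst = a | (b & q) | e
    assert f_orig == f_subst
print("substitution preserves behavior")
```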
Simplification operator
- Cycle over the network blocks
  - Compute local don't care conditions
  - Minimize
- Issues:
  - Don't care sets change as blocks are being simplified
  - The iteration may not have a fixed point
  - It would be efficient to parallelize some simplifications
Optimization and perturbation
- Minimizing the function at a block x means replacing the local function fx with a new function gx
- This is equivalent to perturbing the network locally by
  - δx = fx ⊕ gx
- Conditions for a feasible replacement:
  - The perturbation must be bounded by the local don't care set
  - δx ⊆ DCx
Example
[Figure: network before and after replacing the AND gate x = ab by the wire x = a; x feeds downstream logic through y and z]
- No external don't care set
- Replace the AND gate by a wire: gx = a
- Analysis:
  - δ = fx ⊕ gx = ab ⊕ a = ab'
  - ODCx = y' = b' + c'
  - δ = ab' ⊆ DCx = b' + c' ⇒ feasible!
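Assuming a surrounding network consistent with the slide (y = b·c and z = x·y, so that ODCx = y' = b' + c'; this reconstruction of the figure is my assumption), the feasibility check can be replayed exhaustively:

```python
from itertools import product

# Hypothetical surrounding network: y = b·c, z = x·y, hence ODCx = y' = b' + c'.
for a, b, c in product([0, 1], repeat=3):
    y = b & c
    z_old = (a & b) & y      # x implemented as the AND gate fx = a·b
    z_new = a & y            # x replaced by the wire gx = a
    delta = (a & b) ^ a      # perturbation δ = a·b'
    dc = (b ^ 1) | (c ^ 1)   # local don't care set b' + c'
    assert delta <= dc       # δ is contained in the don't cares
    assert z_old == z_new    # network behavior is unchanged
print("replacement is feasible")
```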
Parallel simplification
- Parallel minimization of logic blocks is always possible when the blocks are logically independent
  - Partitioned network
- Within a connected network, logic blocks affect each other
- Doing parallel minimization is like introducing multiple perturbations
  - But it is attractiv...
Example
- The perturbations at x and y are related because of the reconvergent fanout at z
- The two functions cannot be changed simultaneously:
  - ab into a
  - cb into c
Boolean relation model
Boolean relation model
- Boolean relation minimization is the correct approach to handle Boolean optimization at multiple vertices
- Necessary steps:
  - Derive the equivalence classes for the Boolean relation
  - Use a relation minimizer
- Practical considerations:
  - High computational requirement to use Boolean ...